The R-matrix theory in nuclear and atomic physics

Daniel Baye and Pierre Descouvemont (2013), Scholarpedia, 8(1):12360. doi:10.4249/scholarpedia.12360

Initially, the $R$-matrix theory was aimed at describing resonances in nuclear reactions. At present, the main aim of the $R$-matrix theory is to describe the scattering and reactions resulting from the interaction of particles or systems of particles, which can be nucleons, nuclei, electrons, atoms, or molecules. Cross sections in atomic and nuclear physics may present rapid variations that are known as resonances. The shapes of these resonances often differ strongly, and a parametrization of the cross sections in their vicinity as a function of energy and scattering angle is impossible without understanding the underlying physics. An elegant and rather simple solution to this problem was obtained with the $R$-matrix theory. Unexpected consequences of its development were a technique for parametrizing the cross sections of various reactions valid even far from resonances and also, much more surprisingly, an efficient technique for solving coupled-channel Schrödinger equations in the continuum. The basic idea was introduced by Kapur and Peierls (1938): the configuration space is divided into two regions. The external region allows the scattering boundary conditions to be imposed. The internal region simplifies the treatment of the wave functions at short distances by the use of a square-integrable basis. The basis proposed by Kapur and Peierls, however, suffered from a complicated energy dependence. The $R$-matrix theory was introduced by Wigner and Eisenbud (1947) according to the same principle, but with a crucial simplification, the use of an energy-independent basis. The advantage of this simplification is that the pivotal quantity in the theory, the $R$ matrix, only involves real energy-independent parameters. As mentioned before, the $R$-matrix principle relies on a division of the configuration space into the internal and external regions. The boundary between these regions is a parameter known as the channel radius. This radius is chosen large enough so that, in the external region, the different parts of the studied system interact only through known long-range forces, and antisymmetrization effects due to the identity of some particles can be neglected. The scattering wave function is approximated in the external region by its asymptotic expression, which is known except for coefficients related to the scattering matrix. In the internal region, the system is considered as confined. Its eigenstates thus form a discrete basis. A well-chosen square-integrable basis can provide accurate approximations of scattering wave functions over the internal region. Then the $R$ matrix, which is the inverse of the logarithmic derivative of the wave function at the boundary, can be calculated. Matching with the solution in the external region provides the scattering matrix. This method can also provide the bound states of the system. In this case, the external solution behaves as a decreasing exponential. Since the exponential decrease depends on the unknown binding energy, an iteration is then necessary.

Double evolution of the theory

The $R$-matrix theory was developed in two different directions, with little exchange between these variants.
Many of its practitioners are often unaware of the progress made on the other aspect of this double-faced method. As already mentioned, the original goal was to provide an efficient theory for the treatment of nuclear resonances (Wigner and Eisenbud, 1947; Lane and Thomas, 1958). From information on bound states and low-energy resonances, it soon became clear that the $R$-matrix theory offers an efficient way for accurately parametrizing not only resonances but also the non-resonant part of low-energy cross sections with a small number of parameters (Lane and Thomas, 1958). An important advantage is that most of these parameters have a physical meaning. This first variant of the method is still very important and much employed, in particular to parametrize the low-energy cross sections relevant in nuclear astrophysics. Such parametrizations are useful to provide cross sections at angles and energies where they have not been measured (at least at energies below the maximum energy where measurements took place). In the context of nuclear astrophysics, it is important to reliably extrapolate measured cross sections to the very low energies encountered in stars, at which direct measurements are in general impossible. This version of the $R$-matrix theory will be called hereafter the phenomenological $R$ matrix. Its properties are reviewed in (Lane and Thomas, 1958; Breit, 1959; Descouvemont and Baye, 2010). The other aspect of the $R$-matrix theory is that it can provide a simple and elegant way of solving the Schrödinger equation. It is especially competitive in coupled-channel problems with large numbers of open channels, where a direct numerical integration may become unstable. Moreover, the influence of closed channels can be taken into account in a simple way. An additional advantage is that narrow resonances, which can escape a purely numerical treatment, are easily studied. This other facet of the $R$-matrix theory has been mostly developed in atomic physics, although it can also be very useful for nuclear-physics applications. This variant will be called hereafter the calculable or computational $R$ matrix. Its properties are reviewed in (Burke and Robb, 1975; Barrett et al., 1983; Descouvemont and Baye, 2010; Burke, 2011). A very comprehensive review of the phenomenological $R$-matrix method was given in 1958 by Lane and Thomas. Their article contains most of the important aspects of the phenomenological applications of the $R$ matrix to nuclear physics. Many of their results can also be useful for the calculable $R$ matrix. However, in 1957, just before that review appeared in print, an important improvement of the method was published, which could therefore not be used in their review. Bloch introduced a singular operator defined on the boundary between the two regions, now known as the Bloch operator, which allows a more elegant and compact presentation of the method (Bloch, 1957). The main interest of the Bloch operator is, however, that its use led to more general treatments of the resolution of the Schrödinger equation in the internal region and opened the way to accurate methods of resolution in atomic and nuclear physics. Several reviews on the computational $R$ matrix have been published in the context of nuclear physics (Barrett et al., 1973) and of atomic physics (Burke and Robb, 1975; Burke and Berrington (eds), 1993; Burke, 2011). References (Barrett et al., 1983; Descouvemont and Baye, 2010) deal with both aspects.
The phenomenological $R$-matrix

The $R$ matrix allows parametrizing various physical processes, and its determination provides collision matrices and cross sections. For each set of good quantum numbers, i.e. total angular momentum and parity, the dimension of the phenomenological $R$ matrix is equal to the number of channels relevant to the physical properties. When a single channel is considered, the $R$ matrix for a partial wave with orbital momentum $l$ (for simplicity, we consider zero spins) is a function of the energy $E$ parametrized by the formula \[\tag{1} R_{l}(E) = \sum_{n=1}^N \frac{\gamma_{nl}^2} {E_{nl} - E} \] where the $\gamma_{nl}^2$ are the reduced widths (Lane and Thomas, 1958). In principle, the $R$ matrix possesses an infinity of poles at the real energies $E_{nl}$, but only a limited number $N$ of such poles affect the low-energy cross sections. The lowest poles are closely related to bound states at negative energies or to narrow resonances at positive energies. Nevertheless, the poles and the energies of physical states are slightly different (see Eq. (14) below for an explanation). Because of this shift, the determination of these parameters from data requires some skill. The real parameters $\gamma_{nl}$ are known as the reduced width amplitudes because their square is a crucial factor of the width of non-overlapping resonances (see Eq. (13) below). Unlike the reduced width, the width is also very sensitive to the effects of transmission through the Coulomb barrier. The parameters $E_{nl}$ and $\gamma_{nl}$, and thus the $R$ matrix, depend on the chosen channel radius $a$. The $R$ matrix allows calculating the phase shift $\delta_l$ of the $l$th partial wave at an energy $E$ corresponding to the wavenumber $k$ and velocity $v$, \[\tag{2} \tan \delta_{l} = - \frac{F_l(\eta,ka) - ka R_{l}(E) F_l'(\eta,ka)} {G_l(\eta,ka) - ka R_{l}(E) G_l'(\eta,ka)} \] where $F_l$ and $G_l$ are the regular and irregular Coulomb functions and $\eta = Z_1Z_2e^2/\hbar v$ is the dimensionless Sommerfeld parameter. Low-energy elastic cross sections require fitting all phase shifts up to some value $l_{\rm max}$ depending on the maximum energy considered. Expression (2) is often written with an additional parameter, the boundary parameter $B$. As shown by Barker (1972), the results are independent of the choice of $B$, and the simple value $B=0$ can be chosen without loss of generality. When studying a reaction, it is necessary to consider more than one channel. In this case the good quantum numbers are the total angular momentum $J$ and parity $\pi$. The spins of the particles in the different channels play a role. The number $N_c$ of channels on which the $R$ matrix depends is in general larger than the number of open physical channels. At energy $E$, the multichannel $R$ matrix is defined as \[\tag{3} R_{c c'}(E) = \sum_{n=1}^N \frac{\gamma_{nc}\gamma_{nc'}}{E_n-E} \] where $c$, $c'$ are channel indices varying from $1$ to $N_c$. Typically, the notation $c$ combines the definition of the reaction channels (nature and spins of the nuclei) and the relative orbital momentum. For given $J^\pi$, the $R$ matrix depends on $N$ real poles $E_n$ and $N N_c$ reduced-width amplitudes $\gamma_{nc}$ of pole $E_n$ in channel $c$. The total number of parameters is thus $N (N_c +1)$. It is more difficult to adjust such a number of parameters to available data, especially when several partial waves contribute. For this reason, the number of channels considered in practice is often restricted to $N_c = 2$.
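As an illustration of Eqs. (1) and (2), the following minimal Python sketch evaluates a single-pole $R$ matrix and the corresponding phase shift for a neutral particle ($\eta = 0$), where the Coulomb functions reduce to Riccati-Bessel functions built from SciPy's spherical Bessel routines. The pole energy, reduced width and channel radius are illustrative values in units where $\hbar^2/2\mu = 1$, not parameters fitted to any data.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def FG(l, x):
    """F_l(0,x) = x j_l(x), G_l(0,x) = -x y_l(x), and their x-derivatives."""
    F = x * spherical_jn(l, x)
    Fp = spherical_jn(l, x) + x * spherical_jn(l, x, derivative=True)
    G = -x * spherical_yn(l, x)
    Gp = -spherical_yn(l, x) - x * spherical_yn(l, x, derivative=True)
    return F, Fp, G, Gp

def R_matrix(E, poles, gammas2):
    """Eq. (1): R_l(E) = sum_n gamma_nl^2 / (E_nl - E)."""
    return sum(g2 / (En - E) for En, g2 in zip(poles, gammas2))

def delta_l(E, l, a, poles, gammas2):
    """Phase shift from Eq. (2), with B = 0 and k = sqrt(E) (hbar^2/2mu = 1)."""
    x = np.sqrt(E) * a
    F, Fp, G, Gp = FG(l, x)
    R = R_matrix(E, poles, gammas2)
    return np.arctan2(-(F - x * R * Fp), G - x * R * Gp)

# A single pole with a small reduced width: the resonant part of the phase
# shift rises rapidly through the pole region on top of the smooth
# hard-sphere background.
poles, gammas2 = [2.0], [0.1]
for E in (1.5, 1.9, 2.0, 2.1, 2.5):
    print(f"E = {E:4.2f}   delta_0 = {delta_l(E, 0, 3.0, poles, gammas2):+.3f} rad")
```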
The unitary collision matrix is obtained with \[\tag{4}\newcommand{\ve}[1]{\boldsymbol{#1}} \ve{U}=\ve{Z}^{-1} \ve{Z}^*. \] An element of matrix $\ve{Z}$ reads \[\tag{5} Z_{c c'} = (k_{c'} a)^{-1/2} \Bigl[ O_{c} (\eta_c, k_{c} a)\delta_{c c'} - k_{c'} a R_{c c'} O'_{c'} (\eta_{c'}, k_{c'} a) \Bigr], \] where $k_c = \sqrt{2\mu_c (E-E_c)/\hbar^2}$ is the wavenumber in open channel $c$ with reduced mass $\mu_c$ and threshold energy $E_c < E$, $\eta_c$ is the corresponding Sommerfeld parameter and $O_c (\eta_c,x) = G_{l_c} (\eta_c,x) + i F_{l_c} (\eta_c,x)$ is an outgoing Coulomb wave with orbital momentum $l_c$. The size of the collision matrix depends on the number of open channels. It may be smaller than the size of the $R$ matrix, which may take into account the influence of closed channels. For complex potentials, as encountered in the optical model, expression (4) must be modified into \[\tag{6} \ve{U}=\ve{Z}_O^{-1} \ve{Z}_I, \] where $\ve{Z}_O = \ve{Z}$ and $\ve{Z}_I \neq \ve{Z}_O^*$ is given by a similar expression with outgoing functions replaced by incoming ones $I_c (\eta_c,x) = G_{l_c} (\eta_c,x) - i F_{l_c} (\eta_c,x)$ (Descouvemont and Baye, 2010; Druet et al., 2010). The collision matrix $\ve{U}$ is then not unitary. Resonances arise at energies close to a pole $E_{nl}$ of the $R$ matrix, which is then approximated by a single term \[\tag{7} R_l(E) \approx \frac{\gamma_{nl}^2}{E_{nl} - E}. \] The phase shift becomes \[\tag{8} \delta_l(E) \approx \phi_l(E) + \arctan \frac{\gamma_{nl}^2 P_l (E)} {E_{nl} - \gamma_{nl}^2 S_l (E) - E} \] where \[\tag{9} \phi_l(E) = -\arctan [F_l(\eta,ka)/G_l (\eta,ka)] \] is the hard-sphere phase shift, \[\tag{10} P_l (E) = \frac{ka}{F_l(\eta,ka)^2 + G_l (\eta,ka)^2} \] is the penetration factor and \[\tag{11} S_l (E) = P_l (E) [F_l(\eta,ka) F_l'(\eta,ka) + G_l (\eta,ka) G_l'(\eta,ka)] \] is the shift factor. Expression (8) is close to the Breit-Wigner form of a resonant phase shift \[\tag{12}\newcommand{\dem}{\frac{1}{2}} \delta_l^{\rm BW}(E) \approx \phi_l(E) + \arctan \frac{\dem \Gamma(E)}{E_R - E} \] where the so-called formal width is defined by \[\tag{13} \Gamma(E) = 2 \gamma_{nl}^2 P_l (E). \] While the reduced width and penetration factor depend on $a$, the width is a physical quantity and should thus not depend on the channel radius. It is an energy-dependent quantity whose asymmetric shape depends on the behaviour of $P_l$. Roughly, one can interpret $\gamma_{nl}^2$ as the internal (or nuclear) component of the width and $P_l$ as its Coulomb component. The resonance energy $E_R$ is given by \[\tag{14} E_R = E_{nl} - \gamma_{nl}^2 S_l (E_R) \] and is thus shifted with respect to the pole energy. The factor $S_l (E_R)$ in (14) is calculated at the resonance energy $E_R$. Hence, $E_R$ is defined by an implicit equation, which can be solved by iteration. Because of the energy dependence of $S_l$, the value of the formal width (13) cannot be directly compared with experiment. Since the shift factor depends weakly on energy, one can make the Thomas approximation (Lane and Thomas, 1958), i.e. use a Taylor expansion as a function of $E - E_R$ limited to first order, \[\tag{15} S_l (E) \approx S_l (E_R) + (E - E_R) \left( \frac{dS_l}{dE} \right)_{E_R}. \] Equation (8) becomes \[\tag{16} \delta_l(E) \approx \phi_l(E) + \arctan \frac{\gamma_{nl}^2 P_l (E)} {(E_R - E) \left[1 + \gamma_{nl}^2 \left( \frac{dS_l}{dE} \right)_{E_R} \right]}. \]
One recovers the standard Breit-Wigner expression (12) with an energy-dependent width, called the observed width, given by \[\tag{17} \Gamma_{\rm obs}(E) = \frac{2 \gamma_{nl}^2 P_l (E)} {1 + \gamma_{nl}^2 \left( \frac{dS_l}{dE} \right)_{E_R}} \] which is directly comparable with experiment. In many cases, the reduced width is small and the difference between the formal and observed widths is negligible with respect to the experimental error bar (see section XII.3 of (Lane and Thomas, 1958) for a discussion). A counterexample can be found in (Delbar et al., 1992).

An example: $^{12}$C + p elastic scattering

As an example, let us consider the elastic scattering of protons by $^{12}$C. Three resonances are known in the energy range covered by the data: $1/2^+$ at $0.424$ MeV, $3/2^-$ at $1.558$ MeV and $5/2^+$ at $1.604$ MeV. Data sets are available at the c.m. angles $\theta=89.1^{\circ}$ and $146.9^{\circ}$. They are fitted simultaneously by using $E_R$ and $\Gamma_R$ of the resonant partial waves as adjustable parameters in the single-pole approximation ($N = 1$) of (1). For other partial waves, the hard-sphere phase shift $\phi_l$ is used.

Table 1: $R$-matrix parameters from a simultaneous fit of $^{12}$C+p scattering data (Meyer et al., 1976) at $\theta=89.1^{\circ}$ and $146.9^{\circ}$. Resonance energies $E_R$ are expressed in MeV and widths $\Gamma_R$ in keV.

| | $J^{\pi}=1/2^+$: $E_R$ | $\Gamma_R$ | $J^{\pi}=3/2^-$: $E_R$ | $\Gamma_R$ | $J^{\pi}=5/2^+$: $E_R$ | $\Gamma_R$ |
|---|---|---|---|---|---|---|
| $a=4$ fm | $0.427$ | $33.8$ | $1.560$ | $51.4$ | $1.603$ | $48.1$ |
| $a=5$ fm | $0.427$ | $32.9$ | $1.559$ | $51.4$ | $1.604$ | $48.1$ |
| $a=6$ fm | $0.427$ | $30.9$ | $1.558$ | $51.3$ | $1.606$ | $47.8$ |
| Exp. (Meyer et al., 1976) | $0.424$ | $33$ | $1.558$ | $55$ | $1.604$ | $50$ |

The fitted resonance properties are given in Table 1 for different channel radii, $a = 4$, $5$ and $6$ fm (Descouvemont and Baye, 2010). The results are almost independent of $a$. The corresponding cross sections are shown in Figure 1. The $R$-matrix parametrization reproduces the data very well, not only in the vicinity of the resonances, but also between them, where the process is mostly non-resonant. It provides a good approximation of the cross section at any angle for $E$ smaller than, or close to, $2$ MeV. The three channel radii provide fits which are indistinguishable at the scale of the figure. The width of the $1/2^+$ resonance depends slightly on the channel radius: it decreases from $33.8$ keV at $a = 4$ fm to $30.9$ keV at $a = 6$ fm. This indicates that experimental widths must sometimes be considered with caution. They may depend on the way they are derived from the data.

Figure 1: $R$-matrix fits of $^{12}$C+p experimental excitation functions at two c.m. angles (Meyer et al., 1976) with the parameters of Table 1 (Descouvemont and Baye, 2010).
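The resonance formulas (9)-(11), (13), (14) and (17) can likewise be made concrete for a neutral channel. The sketch below, again with $\hbar^2/2\mu = 1$ and purely illustrative parameters, solves the implicit equation (14) for $E_R$ by fixed-point iteration and converts the formal width (13) into the observed width (17); a $p$ wave is used so that the shift factor is not identically zero.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def FG(l, x):
    """Riccati-Bessel forms of F_l and G_l for eta = 0, with x-derivatives."""
    F = x * spherical_jn(l, x)
    Fp = spherical_jn(l, x) + x * spherical_jn(l, x, derivative=True)
    G = -x * spherical_yn(l, x)
    Gp = -spherical_yn(l, x) - x * spherical_yn(l, x, derivative=True)
    return F, Fp, G, Gp

def P_S_phi(E, l, a):
    """Penetration factor (10), shift factor (11) and hard-sphere phase (9)."""
    x = np.sqrt(E) * a
    F, Fp, G, Gp = FG(l, x)
    P = x / (F**2 + G**2)
    S = P * (F * Fp + G * Gp)
    phi = -np.arctan2(F, G)
    return P, S, phi

l, a, E_pole, gamma2 = 1, 3.0, 2.0, 0.1   # illustrative p-wave parameters

# Eq. (14) is implicit in E_R; a fixed-point iteration converges quickly
# here because gamma2 * dS/dE is small.
E_R = E_pole
for _ in range(50):
    E_R = E_pole - gamma2 * P_S_phi(E_R, l, a)[1]

# Observed width, Eq. (17), with dS/dE estimated by a central difference.
h = 1.0e-6
dSdE = (P_S_phi(E_R + h, l, a)[1] - P_S_phi(E_R - h, l, a)[1]) / (2 * h)
P = P_S_phi(E_R, l, a)[0]
Gamma_formal = 2 * gamma2 * P                    # Eq. (13)
Gamma_obs = Gamma_formal / (1 + gamma2 * dSdE)   # Eq. (17)
print(f"E_R = {E_R:.4f}   formal width = {Gamma_formal:.4f}   "
      f"observed width = {Gamma_obs:.4f}")
```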
A drawback of the phenomenological $R$ matrix is that the poles and widths depend on the choice of channel radius, i.e. on a rather arbitrary value. This aspect of the $R$ matrix has been criticized by a number of authors and has led to further developments of the competing phenomenological $K$ matrix (Humblet, 1990). The $K$ matrix, which provides an alternative formulation of the collision matrix, is also expanded in a series involving an infinity of poles. This approach is based on a delicate treatment of Coulomb functions. Although the $K$ matrix does not contain an arbitrary parameter such as the channel radius, its parametrization is more difficult because its parameters are not necessarily real and may have a less direct physical interpretation. The phenomenological $R$ matrix and most of its applications were already exhaustively described fifty years ago in (Lane and Thomas, 1958). Among these applications, let us mention a detailed study of resonances and an extension of the method to the description of electromagnetic processes. All these applications have been made in nuclear physics, i.e. for the scattering of neutrons on nuclei or of nuclei on nuclei with the presence of a repulsive Coulomb barrier. Nevertheless, using the method still revealed a number of difficulties. In a series of papers, Barker and collaborators provided practical solutions to the determination of the $R$-matrix parameters from experimental data (Barker et al., 1968; Barker, 1972; Barker, 1988) and applied this framework to the spectroscopy and reactions of light nuclei (Barker, 1987b; Barker and Ferdous, 1980). They also explained non-intuitive effects such as the Thomas-Ehrman shift (Barker and Ferdous, 1980) and ghosts of resonances (Barker et al., 1976), and extended the method to further processes such as radiative-capture reactions (Barker, 1980; Barker, 1987a; Barker and Kajino, 1991) and delayed $\beta$ decay (Barker, 1989). The approach developed by Barker and collaborators has become a standard tool for the analysis of low-energy radiative-capture reactions useful in astrophysics. Progress in the adjustment of $R$-matrix parameters in the single-channel and multichannel cases has been made in (Angulo and Descouvemont, 2000; Brune, 2002). More recently, the phenomenological $R$-matrix method was used to analyze scattering experiments involving radioactive beams. These analyses essentially provide information on resonance properties (spin, energy, width). In (Pellegriti et al., 2008), the $^{18}$Ne+p elastic cross section and the $^{18}$Ne(p,p') inelastic cross section were measured simultaneously, and interpreted within the $R$-matrix formalism. This analysis provided evidence for $^{19}$Na states where core excitations are important. Other recent applications on elastic scattering can be found for example in (deBoer et al., 2012; Jung et al., 2012). In nuclear astrophysics, the main goal of the $R$-matrix formalism (Azuma et al., 2010) is to fit available data in order to extrapolate them to stellar energies, which are much lower than the Coulomb barrier. At those energies, the cross sections are too small to be measured in the laboratory. The important $^{12}$C($\alpha,\gamma)^{16}$O reaction fixes the $^{12}$C/$^{16}$O ratio in the galaxy. Simultaneous fits of the capture and elastic cross sections, and of the $^{16}$N $\beta$-decay spectrum to $^{12}$C$+\alpha$ (Azuma et al., 1994), provide strong constraints on the astrophysical $S$ factor at 300 keV, the relevant energy for this capture reaction in stellar models. Another important reaction in nuclear astrophysics is $^{18}$F(p,$\alpha)^{15}$O (see (Mountford et al., 2012) and references therein). The $\beta$-delayed spectra of $^{12}$B and $^{12}$N are analyzed in (Hyldegaard et al., 2010) with the $R$-matrix formalism to derive information on unbound states of $^{12}$C.

The calculable $R$-matrix

The aim of the calculable $R$ matrix is to provide an efficient way of solving the Schrödinger equation both at positive and negative energies.
It was proposed in 1965 by Haglund and Robson and applied to a two-channel problem involving square-well potentials (Haglund and Robson, 1965). An expansion over a finite basis was introduced by Buttle (Buttle, 1967), who performed the first realistic application, on $^{12}$C + n scattering. He also proposed a correction to the truncation of the $R$ matrix to a finite number of poles, which is now named after him. A more serious problem is a discontinuity of the derivative of the wave function at the boundary between the regions, which occurs with the traditional choice of basis states inspired by the original ideas in (Wigner and Eisenbud, 1947). Various solutions to the lack of matching at the boundary have been suggested (see Barrett et al., 1983 for a review). This apparent problem attracted a lot of attention even long after the introduction of an efficient technique in which it does not occur (Lane and Robson, 1966; Lane and Robson, 1969). By dropping an unnecessary condition, the $R$-matrix method can be very accurate without matching problems and without need for a Buttle correction (see Choice of basis and misconceptions and Descouvemont and Baye, 2010 for details). Some users of the phenomenological $R$ matrix consider the channel radius as a parameter, which must be optimized when fitting the data. Even if this dependence on a parameter without strong physical meaning is weak, it is a drawback that would not be acceptable when the aim is to solve the Schrödinger equation accurately. Hence, a crucial test of the results of the calculable $R$ matrix is their near-perfect independence of the choice of channel radius. This test provides a measure of the accuracy of the calculations.

Calculation of the $R$-matrix for scattering by a potential

After separation of the angular part, the radial Schrödinger equation in partial wave $l$ can be written as \[\tag{18} (H_l - E) u_l = 0. \] In this expression, the radial Hamiltonian $H_l$ is defined as \[\tag{19} H_l = T_l + V = -\frac{\hbar^2}{2\mu} \left( \frac{d^2}{dr^2} - \frac{l(l+1)}{r^2} \right) + V(r), \] where $\mu$ is the reduced mass and the potential $V$ is well approximated by the Coulomb potential beyond the channel radius $a$. The bounded radial solutions $u_l$ vanish at the origin and have at positive energies the asymptotic behaviour \[\tag{20}\newcommand{\arrow}[2]{\ _\overrightarrow{#1 \rightarrow #2}\ } u_{l} (r) \arrow{r}{\infty} \cos \delta_l\, F_l(\eta,kr) + \sin \delta_l\,G_l (\eta,kr). \] An important advance in the calculable $R$-matrix method was the introduction of the Bloch operator (Bloch, 1957) \[\tag{21}\newcommand{\cL}{\cal L} {\cL} = \frac{\hbar^2}{2\mu}\, \delta (r-a) \left( \frac{d}{dr} - \frac{B}{r} \right). \] The boundary parameter $B$ can be chosen arbitrarily and the physical results are independent of its value (Descouvemont and Baye, 2010). The wave function $\newcommand{\uint}{u_{l}^{\rm int}}\uint$ in the internal region is approximated by solutions of the inhomogeneous Bloch-Schrödinger equation \[\tag{22}\newcommand{\uext}{u_{l}^{\rm ext}} (H_l + {\cL} - E) \uint = {\cL} \uext \] where the external solution $\uext$, approximated by (20), is used in the right-hand member. This equation is complemented by the continuity condition \[\tag{23} \uint (a) = \uext (a) \] at the boundary. The Bloch operator has two important advantages: (i) it corrects the non-Hermiticity of the Hamiltonian $H_l$ over the internal region and (ii) it matches the logarithmic derivatives of $\uint$ and $\uext$ at $r=a$, thus providing a smooth connection between the two parts of the radial function.
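Advantage (i) can be checked symbolically. The small SymPy sketch below verifies, for $l=0$ and $B=0$ and two functions vanishing at the origin, that $\langle u|T_l+{\cal L}|v\rangle$ is symmetric in $u$ and $v$: integrating by parts, the Bloch surface term cancels the boundary term of the kinetic energy and leaves $\int_0^a u'v'\,dr$ (with $\hbar^2/2\mu = 1$). The test functions are arbitrary illustrative choices.

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)
u = r * sp.exp(-r)   # arbitrary test function with u(0) = 0
v = r**2             # arbitrary test function with v(0) = 0

def t_plus_bloch(f, g):
    """<f|T_l + L|g> for hbar^2/2mu = 1, l = 0, B = 0:
    -int_0^a f g'' dr + f(a) g'(a); the delta function picks out r = a."""
    bulk = sp.integrate(-f * sp.diff(g, r, 2), (r, 0, a))
    surface = (f * sp.diff(g, r)).subs(r, a)
    return bulk + surface

# The two orderings agree, unlike for the bulk kinetic term alone.
print(sp.simplify(t_plus_bloch(u, v) - t_plus_bloch(v, u)))  # -> 0
```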
In the internal region, the wave function $\uint$ is expanded over some finite basis involving $N$ linearly independent square-integrable functions $\varphi_i$ as \[\tag{24} \uint (r) = \sum_{i=1}^N c_i \varphi_i (r). \] The functions $\varphi_i$ vanish at the origin but are not necessarily orthogonal. At $r = a$, they are not assumed to satisfy a specific boundary condition. (Many papers impose such a condition, but it has unfavourable effects on the convergence; see Choice of basis and misconceptions and Descouvemont and Baye (2010) for a discussion.) For $B = 0$, the $R$ matrix at energy $E$ is defined by \[\tag{25} u_l (a) = R_l(E) a u_l' (a). \] The inverse of the $R$ matrix is thus the dimensionless logarithmic derivative of the radial wave function at the boundary between both regions. This "matrix" has dimension $1$ in a single-channel case and is just a function of energy. It also depends on the channel radius. Expansion (24) is introduced in (22) and the resulting equation is projected on $\varphi_i (r)$. The calculable $R$ matrix is then obtained as \[\tag{26} R_l (E) = \frac{\hbar^2}{2\mu a} \sum_{i,j=1}^N \varphi_i(a) (\ve{C}^{-1})_{ij} \varphi_j(a), \] where the elements of the symmetric matrix $\ve{C}$ are defined as \[\tag{27}\newcommand{\la}{\langle}\newcommand{\ra}{\rangle} C_{ij}(E) = \la \varphi_i | T_l + {\cL} + V - E | \varphi_j \ra. \] Dirac brackets correspond here to one-dimensional integrals over the variable $r$ from $0$ to $a$. When the basis functions $\varphi_i (r)$ are orthonormal, the eigenvalues $E_{nl}$ and the corresponding normalized eigenvectors $\ve{v}_{nl}$ of matrix $\ve{C}(0)$ lead to the spectral decomposition \[\tag{28} [\ve{C}(E)]^{-1} = \sum_{n=1}^{N} \frac{\ve{v}_{nl} \ve{v}_{nl}^{\rm T}}{E_{nl} - E} \] where ${\rm T}$ denotes transposition. The $R$ matrix (26) can be written as in Eq. (1), \[\tag{29} R_l(E) = \sum_{n=1}^N \frac{\gamma_{nl}^2}{E_{nl} - E}. \] In this expression, the reduced width amplitudes are given by \[\tag{30} \gamma_{nl} = \left( \frac{\hbar^2}{2\mu a} \right)^{1/2} \phi_{nl} (a) \] with \[\tag{31} \phi_{nl} (r) = \sum_{i=1}^N v_{nl,i} \varphi_i(r), \] where $v_{nl,i}$ is the $i$th component of $\ve{v}_{nl}$. The reduced width amplitudes are proportional to the value at the channel radius of variational approximations $\phi_{nl}$ of the eigenfunctions of the Hermitian operator $H_l + {\cL}$. The functions $\phi_{nl}$ corresponding to the lowest energies $E_{nl}$ thus represent approximate eigenfunctions of the physical problem confined over the interval $(0,a)$ with a fixed logarithmic derivative at $r = a$. The other functions, though unphysical, are important to ensure a smooth matching between the internal and external wave functions (Baye et al., 1998). The traditional expression for the theoretical $R$ matrix is obtained when $N$ tends towards infinity in a complete basis as \[\tag{32} R_l(E) = \sum_{n=1}^\infty \frac{\gamma_{nl}^2}{E_{nl} - E}. \] The energies $E_{nl}$ are now the exact eigenvalues of the operator $H_l + {\cL}$ and the reduced width amplitudes $\gamma_{nl}$ are related to the values at $r=a$ of its exact eigenfunctions. The $R$ matrix is a real function when $V$ is real. It has an infinity of real simple poles, bounded from below, and its derivative is always positive at regular points.
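The scheme of Eqs. (24)-(27), combined with Eq. (2), is short to implement. The sketch below treats a toy problem, $l=0$ scattering of a neutral particle by a Gaussian well in units $\hbar^2/2\mu = 1$, using the Legendre basis $\varphi_i(r)=rP_{i-1}(2r/a-1)$ (introduced as Eq. (47) below) with matrix elements evaluated by high-order Gauss-Legendre quadrature; the well parameters are illustrative, not taken from the article. Stability of the phase shift under changes of both $N$ and $a$ is the convergence test advocated in the text.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def basis(i, r, a):
    """phi_i(r) = r P_{i-1}(2r/a - 1) and its derivative (Eq. (47) below)."""
    c = np.zeros(i); c[i - 1] = 1.0
    s = 2.0 * r / a - 1.0
    P, dP = leg.legval(s, c), leg.legval(s, leg.legder(c))
    return r * P, P + r * dP * 2.0 / a

def phase_shift(E, a, N, V, nquad=200):
    t, w = leg.leggauss(nquad)                   # quadrature on [-1, 1]
    r, w = 0.5 * a * (t + 1.0), 0.5 * a * w      # mapped to [0, a]
    phi = np.empty((N, nquad)); dphi = np.empty((N, nquad))
    phia = np.empty(N)
    for i in range(N):
        phi[i], dphi[i] = basis(i + 1, r, a)
        phia[i] = basis(i + 1, a, a)[0]
    # C_ij(E) of Eq. (27); for l = 0 and B = 0, <phi_i|T + L|phi_j> reduces
    # by integration by parts to int_0^a phi_i' phi_j' dr.
    C = dphi @ (w[:, None] * dphi.T) + phi @ ((w * (V(r) - E))[:, None] * phi.T)
    R = phia @ np.linalg.solve(C, phia) / a      # Eq. (26), hbar^2/2mu = 1
    x = np.sqrt(E) * a                           # eta = 0, l = 0:
    F, Fp, G, Gp = np.sin(x), np.cos(x), np.cos(x), -np.sin(x)
    return np.arctan2(-(F - x * R * Fp), G - x * R * Gp)   # Eq. (2)

V = lambda r: -5.0 * np.exp(-(r / 1.5) ** 2)     # illustrative Gaussian well
for a in (8.0, 10.0):
    for N in (10, 15, 20):
        print(f"a = {a:4.1f}  N = {N:2d}  delta_0 = "
              f"{phase_shift(1.0, a, N, V):+.6f}")
```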
The $R$-matrix theory can be extended to several channels and to non-local potentials. A partial wave $JM\pi$ of the total wave function of the collision at energy $E$ can be written as \[\tag{33} \Psi_{(c_0)}^{JM\pi} = \sum_{c} |c \ra r^{-1} u_{c(c_0)} (r), \] where $|c \ra$ is a channel state describing the internal properties of the colliding nuclei and the angular part of their relative motion (Descouvemont and Baye, 2010), and index $c_0$ refers to the entrance channel. Introducing (33) in the Schrödinger equation and projecting over $|c\ra$ leads to the coupled equations \[\tag{34} \left(T_c + E_{c} - E \right) u_{c(c_0)}(r) + \sum_{c'} V_{cc'}(r) u_{c'(c_0)}(r) = 0 \] for $c = 1$ to $N_c$. In this expression, $E_c$ is the threshold energy of channel $c$, \[\tag{35} T_c = -\frac{\hbar^2}{2\mu_c} \left( \frac{d^2}{dr^2} - \frac{l_c(l_c+1)}{r^2} \right) \] is the kinetic energy operator for a channel with reduced mass $\mu_c$ and orbital momentum $l_c$, and $V_{cc'}$ is an element of a potential matrix with the property $V_{cc'} (r) \rightarrow Z_{1c}Z_{2c}e^2 r^{-1} \delta_{cc'}$ when $r$ tends to infinity. The potential matrix could be non-local without much additional complication. The radial wave functions in the external region at energy $E$ are defined by their asymptotic form as \[\tag{36} u^{\rm ext}_{c(c_0)}(r)= \left\{\begin{array}{ll} v_{c}^{-1/2} \Bigl( I_{c} (k_{c} r)\delta_{c c_0} - U_{c c_0} O_{c} (k_{c} r) \Bigr) & {\rm \ for\ } E > E_c \\ A_{cc_0} W_c(2\kappa_{c}r) & {\rm \ for\ } E < E_c, \end{array} \right . \] where $c_0$ is the entrance channel ($E_{c_0} < E$) and $W_c(x) \equiv W_{-\eta_{c},l_c+\frac{1}{2}}(x)$ is a Whittaker function. Closed channels must be taken into account here; for them, the real parameters $\kappa_c$ and $\eta_c$ are calculated with the positive energy difference $E_c - E$. The Bloch operator is defined in the multichannel formalism as \[\tag{37} {\cL} = \sum_{c} |c\ra {\cL}_{c} \la c |,\ \ \ \ \ \ \ \ \ \ \ {\cL}_{c}= \frac{\hbar^2}{2\mu_{c}} \, \delta (r-a) \left( \frac{d}{dr} - \frac{B_{c}}{r} \right), \] where the coefficients $B_{c}$ are chosen as zero for open channels and as \[\tag{38} B_c = 2\kappa_c a \frac{W_c'(2\kappa_c a)}{W_c(2\kappa_c a)} \] for closed channels, because this choice suppresses the right-hand side of the Bloch-Schrödinger system of equations (39) in channel $c$. Notice that these coefficients then depend on energy for closed channels. The Bloch-Schrödinger system of equations is given by \[\tag{39} \sum_{c'} \Bigl[(T_{c}+{\cL}_{c}+E_c-E)\delta_{c c'} + V_{c c'} \Bigr] u^{\rm int}_{c'}(r) = {\cL}_{c} u^{\rm ext}_{c}. \] The internal parts of the radial wave functions are expanded over a basis $\varphi_i(r)$, \[\tag{40} u^{\rm int}_c (r) = \sum_{i=1}^N c_{c i} \varphi_i(r). \] The $R$ matrix is symmetric, with elements given by \[\tag{41} R_{c c'}(E) = \frac{\hbar^2}{2\sqrt{\mu_{c}\mu_{c'}} a} \sum_{i,i'=1}^N \varphi_i(a) (\ve{C}^{-1})_{c i, c'i'} \varphi_{i'}(a), \] where \[\tag{42} C_{c i, c'i'} = \la \varphi_{i}| T_{c}+{\cL}_{c}+E_c-E |\varphi_{i'} \ra \delta_{c c'} + \la \varphi_{i}| V_{c c'} |\varphi_{i'} \ra. \]
The spectral decomposition of the symmetric matrix $\ve{C}$ for an orthonormal basis provides the canonical form (3) of the multichannel $R$ matrix, where the real poles $E_n$ are the eigenvalues of $\ve{C}$ and the reduced-width amplitude of pole $E_n$ in channel $c$ is expressed as a function of the components of the corresponding normalized eigenvector $\ve{v}_n$ as \[\tag{43} \gamma_{nc} = \left( \frac{\hbar^2}{2\mu_{c} a} \right)^{1/2} \sum_{i=1}^N v_{n,ci} \varphi_i(a). \] The size of calculable $R$ matrices is often much larger than the size of phenomenological $R$ matrices. Indeed, the size of the calculable $R$ matrix is determined by a physical choice, the number of configurations included in the calculation. It is limited only by the available computation time. The size of the phenomenological $R$ matrix is limited by the availability of experimental data. Such data are rarely available for more than two channels.

Choice of basis and misconceptions

Considerable confusion exists in the literature about the properties that basis states $\varphi_i$ should have. Improper choices have led to the introduction of corrections and to attempts to use the boundary parameter $B$ to correct drawbacks of the basis. However, single-channel results are independent of the choice of $B$. This confusion has sometimes led to an undeserved reputation of poor convergence for the calculable $R$-matrix method. In their seminal paper, Wigner and Eisenbud wanted to provide a phenomenological description of resonances (Wigner and Eisenbud, 1947). They did not intend to propose a technique of resolution of the Schrödinger equation. To reach their goal, they assumed that the basis functions all satisfy the boundary conditions $\varphi_i(0)=0$ and \[\tag{44} a\varphi'_i(a) -B \varphi_i(a) = 0. \] This procedure leads to $R$ matrix (32). When used as a technique of resolution, the finite-basis $R$ matrix (26) or (29) obtained with this procedure does not converge uniformly. The reason is simple: the first derivative of the wave function suffers from a discontinuity at $r=a$ (Szmytkowski and Hinze, 1996). The limit of ${\uint}'$ when $r$ tends towards $a$ from the left is equal to ${\uext}'(a)$ but not to ${\uint}'(a)$, \[\tag{45} \lim_{r \rightarrow a^-} {\uint}'(r) = {\uext}'(a) \ne {\uint}'(a). \] For example, if $B = 0$, $\varphi'_i (a)$ vanishes for all $i$ values and one readily sees that ${\uint}'(a) = 0$ at all energies. This property has unfavourable consequences on the convergence of numerical methods when the basis is truncated, since the logarithmic derivative of the external solution depends on the phase shift (and thus on energy) and cannot be matched with the internal solution (see Figure 3). Buttle (Buttle, 1967) proposed a correction to the $R$-matrix truncation. Although this correction improves the phase shifts, it does not really solve the problem because it does not improve the wave functions. This problem was solved by the works of Lane and Robson and of Philpott (Lane and Robson, 1966; Lane and Robson, 1969; Philpott, 1975). Their method was successfully applied in nuclear physics, where traditional basis functions do not satisfy (44) and, on the contrary, display a variety of behaviours at the channel radius. With oscillator basis functions, accurate results for neutron-nucleus scattering could be obtained (Philpott, 1975; Philpott and George, 1974).
At the same time, a microscopic extension of the $R$ matrix using a fully antisymmetrized two-centre harmonic-oscillator model provided accurate phase shifts for collisions between light nuclei with few basis states (Baye and Heenen, 1974; Baye et al., 1977). The success of these calculations relies on the fact that the Bloch operator makes condition (44) unnecessary. Since the results do not depend on $B$, the choice $B=0$ can be used. A general yet economical method for solving coupled-channel problems is described in (Hesse et al., 1998). The negative role of condition (44) long remained unnoticed in atomic physics, where in many cases basis states were required to satisfy (44). To illustrate this negative role, let us compare two families of basis functions, one satisfying this condition and one not. Since many types of basis functions exist and are used in practical calculations, the present choice is specifically made to emphasize the drawbacks of condition (44). Sine basis functions are given for $i = 1$ to $N$ by \[\tag{46} \varphi_i(r)=\sin \left[ \frac{\pi r}{a}(i-1/2) \right]. \] Their derivatives vanish at $r=a$ and thus satisfy the unnecessary condition (44) with $B = 0$. The Legendre basis is defined for $i = 1$ to $N$ by \[\tag{47} \varphi_i(r)=r P_{i-1}(2r/a-1), \] where $P_n$ is the Legendre polynomial of degree $n$. The factor $r$ ensures that the wave function vanishes at the origin. The logarithmic derivatives of these basis functions present a variety of behaviours at $r=a$. In fact, it is convenient to replace these Legendre polynomials by the equivalent Lagrange-Legendre basis (Hesse et al., 2002) \[\tag{48} \varphi_i (r)=(-1)^{N+i} \sqrt{\frac{1-x_i}{a x_i}}\, \frac{r P_N(2r/a-1)}{r-a x_i}, \] where the $x_i$ are the $N$ zeros of \[\tag{49} P_N(2x_i-1)=0. \] This basis is equivalent to the Legendre basis since it also involves $N$ linearly independent polynomials of degree at most $N$ vanishing at the origin. The Lagrange basis is particularly useful when an additional approximation, which causes essentially no loss of accuracy, is made (Baye et al., 2002) (see Appendix). The $R$-matrix method on a Lagrange mesh (Malegat, 1994; Baye et al., 1998; Hesse et al., 1998; Hesse et al., 2002) is accurate and economical. It is very convenient in calculations where the matrix elements of the potential, local or non-local, are long to compute, such as in ab initio calculations (Quaglioni and Navrátil, 2008).

An example: $^{12}$C + p scattering

As an example, let us again consider the elastic scattering of protons by $^{12}$C. As seen in the preceding example, the $^{12}$C+p system presents a narrow $1/2^+$ resonance at low energies (see Table 1). The nuclear and Coulomb potentials \[\tag{50} V_N(r)=-73.8 \exp[-(r/2.70)^2],\\ V_C(r)=6e^2/r, \] reproduce the resonant behaviour of the $s$-wave phase shift near $0.42$ MeV. The units in the potential are fm and MeV for lengths and energies, respectively ($e^2 = 1.44$ MeV fm).

Figure 2: $^{12}$C+p $R$-matrix phase shifts (in degrees) as a function of the energy $E$ for different bases and conditions ($l=0$): (a) Lagrange functions at $a=8$ fm (the exact results are superimposed on the $N=10$ curve), (b) sine functions at $a=8$ fm, (c) Lagrange functions for $N=15$ (the $a=7$ fm curve corresponds to the exact results) (Descouvemont and Baye, 2010).

Exact phase shifts from numerical resolutions of the Schrödinger equation are compared with $R$-matrix calculations using Lagrange (a) and sine (b) functions in Figure 2.
The channel radius is chosen as $a=8$ fm, where the nuclear interaction is negligible. Figure 2 (a) shows that with the Lagrange functions (48), convergence is reached for $N \geq 10$. The value $N=7$ illustrates a number of points that is not large enough. Figure 2 (b) illustrates the sine functions (46), which are poorly adapted to a good matching since the left derivative of the wave function at $r=a$ is zero. Even $N=20$ is far from the exact calculation. Figure 2 (c) presents the convergence as a function of the channel radius with the Lagrange functions ($N=15$ is fixed). As could be expected, $a=5$ fm is too small ($\left|V_N(a)/V_C(a)\right|=1.4$). To obtain a good stability, channel radii larger than 6 fm should be used. The matching problem is illustrated in Figure 3, which shows the wave function at $E=2$ MeV with $a=8$ fm and $N=15$. With the Lagrange functions, the matching between the internal and external wave functions is quite smooth. For sine functions, the matching is poor, which has a direct impact on the phase shift.

Figure 3: $^{12}$C+p $l=0$ wave functions with Lagrange and sine functions at $E=2$ MeV ($a=8$ fm, $N=15$). Solid lines represent the internal wave functions, and dotted lines the external ones. The Lagrange wave function is superimposed on the exact result (Descouvemont and Baye, 2010).

After its introduction for nuclear-physics problems (Haglund and Robson, 1965; Buttle, 1967), the $R$-matrix method was first extensively developed to study electron (or positron) collisions on atoms and molecules (Burke and Robb, 1975; Burke and Berrington (eds), 1993; Burke, 2011). Important and difficult aspects of these atomic-physics problems are the non-locality of the interaction, due to electron exchanges, and the long-range nature of the interactions, due to polarization effects. The non-locality is well treated in the $R$-matrix approach. The long range of the force implies that the asymptotic behaviour of the solution is only reached for very large values of the interparticle distance. To avoid using a very large channel radius, propagation methods have been introduced (Light and Walker, 1976). They involve an intermediate region where the interaction can be simplified, for example with an asymptotic expansion. The applications of the $R$ matrix in atomic physics are reviewed in (Burke, 2011). In nuclear physics, the computational $R$ matrix is less widely used. The microscopic $R$-matrix method is much applied in microscopic cluster calculations, in which the $A$-body Schrödinger equation is solved in an approximate way by assuming that clusters exist. These clusters are substructures, such as an $\alpha$ particle, for which a frozen internal wave function is employed in the model. The difficult antisymmetrization is taken into account exactly, but in the internal region only (Baye and Heenen, 1974; Baye et al., 1977). Many microscopic calculations of elastic and inelastic scattering have been performed in this framework. The versatility of the $R$ matrix finds interesting applications in processes where bound and scattering states are mixed, such as radiative capture or delayed $\beta$ decay (Baye and Descouvemont, 1983; Baye and Descouvemont, 1987b). The $R$ matrix can be very useful in large coupled-channel calculations. A significant simplification occurs when the Lagrange functions (48) are chosen as basis functions and the consistent Gauss quadrature is used, as explained in the Appendix (Baye et al., 1998; Hesse et al., 1998; Descouvemont and Baye, 2010).
Coupled-channel calculations are simplified because calculations of matrix elements of the potentials are avoided (Malegat, 1994; Baye et al., 1998; Hesse et al., 1998). This approach has been extended to non-local interactions (Hesse et al., 2002). It is applied in recent ab initio calculations (Quaglioni and Navrátil, 2008). The $R$ matrix has recently been applied to the Continuum Discretized Coupled Channel (CDCC) method. The CDCC method was suggested by Rawitscher (Rawitscher, 1974) and developed and used by several groups (Austern et al., 1987; Nunes and Thompson, 1999). The purpose of the CDCC method is to determine, as accurately as possible, the scattering and dissociation cross sections of a nucleus which can be easily broken up in the nuclear and Coulomb fields of a target. The final states may thus involve three or more particles: the target and the fragments of the projectile. The principle of the CDCC method is to replace the continuum states describing the relative motion of the fragments of the projectile by square-integrable approximations at discrete energies. The CDCC equations then take the form of standard coupled-channel equations. The $R$ matrix can be useful at two different levels (Descouvemont and Baye, 2010; Druet et al., 2010). On the one hand, it can be used to describe the discretized continuum states in a simple way by averaging internal wave functions over energy in the form of bins. On the other hand, as mentioned above, it can be used to solve the coupled-channel equations in an efficient way. Here propagation methods (Light and Walker, 1976; Descouvemont and Baye, 2010) can be useful. Another recent application concerns three-body scattering. When three unbound particles interact, such as in the final state of a breakup process, the scattering is described by three-body collision matrices. These matrices are infinite because of the infinity of ways the three particles can share the total angular momentum. A truncation is thus necessary in practice. The $R$ matrix is used as a tool to derive the three-body phase shifts in a practical way and to provide manageable wave functions (Descouvemont et al., 2006; Descouvemont and Baye, 2010). Propagation methods are necessary, given the large value of the channel radius. These wave functions are useful to describe breakup reactions (Baye et al., 2009; Pinilla et al., 2012).

Conclusion and outlook

In this overview, we have presented the two variants of the $R$-matrix method. To date, the phenomenological $R$ matrix remains close to its original spirit and is very much used in nuclear physics to parametrize low-energy cross sections. Its main merits are that all parameters are real and have a physical meaning. Although resonances often play a crucial role in these parametrizations, non-resonant cross sections are accurately described as well. The calculable $R$ matrix is an efficient technique to solve the Schrödinger equation, as well as its relativistic extensions, in various situations. Many of its developments took place in atomic physics, but it is becoming more and more useful in nuclear physics. Most future progress can be expected in the calculable $R$ matrix. A challenge in scattering problems is the occurrence of large coupled-channel systems. This situation is often met in CDCC problems (see for example Druet and Descouvemont (2012)), and in particular when the projectile presents a three-body structure (see for example Matsumoto et al. (2004)).
Large coupled-channel systems also show up in three-body continuum calculations (see for example Descouvemont et al. (2006)). The problem of a large number of coupled equations has been addressed in finite-difference methods, where the resolution of the coupled-channel system is replaced by a single inhomogeneous equation (Raynal, 1972; Rhoades-Brown et al., 1980). An iterative procedure provides specific elements of the collision matrix (Thompson, 1988). Although it may not converge for strong coupling potentials, its main advantage is to deal with a single equation. For weak coupling potentials, the convergence can be accelerated by using Padé approximants (Rhoades-Brown et al., 1980). These techniques might be introduced in the $R$-matrix formalism and provide an efficient way to deal with large coupled-channel systems. Optimizing the propagation technique is another challenge. The propagation of the $R$ matrix is necessary in problems where the channel radius is too large. One or several intermediate regions are introduced between the internal and external regions. The $R$ matrix must be accurately propagated from boundary to boundary. Reducing the computational cost of propagation while keeping the accuracy is an important issue.

Appendix: The Lagrange-mesh method

The Lagrange-mesh method combines the basis (48) with the Gauss quadrature associated with the mesh points defined from (49). The Lagrange-Legendre basis functions (48) satisfy the Lagrange conditions \[\tag{51} \varphi_i(a x_j)=(a \lambda_i)^{-1/2}\delta_{ij}, \] where $\lambda_i$ is the weight of the Gauss-Legendre quadrature corresponding to the $[0,1]$ interval. If the matrix elements with basis functions (48) are computed at the Gauss approximation of order $N$, consistent with the $N$ mesh points $ax_i$, their calculation is strongly simplified. At this approximation, the overlap is given by \[\tag{52} \langle \varphi_i|\varphi_j\rangle =\int_0^a \varphi_i(r)\varphi_j(r) dr\approx \delta_{ij}. \] The local-potential matrix is diagonal, with elements simply given by the value of the potential at the mesh points, \[\tag{53} \langle \varphi_i|V|\varphi_j\rangle =\int_0^a \varphi_i(r) V(r) \varphi_j(r) dr \approx V(a x_i)\delta_{ij}. \] This introduces an important simplification since no integrals involving the potential need be calculated. This method is easily extended to non-local potentials (Hesse et al., 2002).
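As a numerical check of the appendix, the sketch below builds the Lagrange-Legendre functions (48) on the mesh (49) and compares the potential matrix computed with a fine reference quadrature against the diagonal Gauss approximation (53), for the $^{12}$C+p potential (50) with $e^2 = 1.44$ MeV fm, $N=15$ and $a=8$ fm; the 200-point reference quadrature is an illustrative choice. The diagonality follows from the Lagrange conditions (51).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

N, a = 15, 8.0
t, w = leggauss(N)
x, lam = (t + 1.0) / 2.0, w / 2.0    # mesh points x_i of Eq. (49), weights

cN = np.zeros(N + 1); cN[N] = 1.0    # coefficient vector selecting P_N

def phi(i, r):
    """Lagrange-Legendre function of Eq. (48), i = 1..N (r off the mesh)."""
    PN = legval(2.0 * r / a - 1.0, cN)
    return ((-1) ** (N + i) * np.sqrt((1.0 - x[i - 1]) / (a * x[i - 1]))
            * r * PN / (r - a * x[i - 1]))

# "Exact" potential matrix from a fine 200-point quadrature; the integrands
# are smooth on [0, a] (the r factor in phi_i regularizes the Coulomb term).
tq, wq = leggauss(200)
rq, wq = 0.5 * a * (tq + 1.0), 0.5 * a * wq
V = lambda r: -73.8 * np.exp(-(r / 2.70) ** 2) + 6.0 * 1.44 / r  # Eq. (50)
bas = np.array([phi(i, rq) for i in range(1, N + 1)])
V_exact = np.einsum('q,iq,jq->ij', wq * V(rq), bas, bas)

# Gauss approximation of Eq. (53): diagonal, with no integral to compute.
V_gauss = np.diag(V(a * x))
print(np.max(np.abs(V_exact - V_gauss)))   # small on the scale of |V|
```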
Do hidden variables exist for quantum systems?

Viewpoint: Yes, hidden variables are necessary to explain the contradictions and paradoxes inherent in quantum theory.

Viewpoint: No, experimental evidence, including the work of John Bell, Alain Aspect, and Nicolas Gisin, has continually shown that hidden variables do not exist.

Quantum physics is a daunting subject that often seems to be beyond comprehension. Nobel prize-winning quantum physicist Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics," and there would be few, if any, who would disagree with him. Quantum theory contains many ideas that defy common sense. The popular concept of the atom is that of a tiny planetary system, with a nucleus "sun" and electron "planets" orbiting. However, quantum theory describes atoms and particles as having wavelike properties and avoids talking about specific positions and energies for particles, using instead ideas of probability. In quantum theory, quantities such as energy can only exist in specific values, which contradicts the generally held notion that quantities have a continuous range and that any value in that range is possible. Nobel prize-winning physicist Albert Einstein vehemently disliked many aspects of quantum physics, particularly the seemingly random and probabilistic nature of reality that the discipline implies, which he dismissed with his famous quote "God does not play dice." In 1935 Einstein, along with two colleagues, Boris Podolsky and Nathan Rosen, published a paper directly challenging some of the fundamental aspects of quantum theory. The EPR paper, as it came to be known, uses a thought experiment—an experiment that cannot be physically attempted, only imagined—to argue that quantum physics is an incomplete theory. The three scientists argued that the missing bits that made quantum theory incomplete were "hidden variables" that would enable a more deterministic description of reality. Essentially, these scientists, and others, worried that quantum theory contains a number of scientific and philosophical problems and paradoxes. Examples include the infamous Schrödinger's Cat paradox, another thought experiment, in which quantum theory predicts that a cat exists in both dead and alive states until observed, or the two-slit experiment, which appears to break down the barriers of logic when single particles are used. The Copenhagen interpretation of quantum physics, as formulated by Danish physicist Niels Bohr, German physicist Werner Heisenberg, and others, took the view that reality at the quantum level does not exist until it is measured. For example, a particle such as an electron orbiting an atomic nucleus could be in many different locations at a particular point in time. Quantum mechanics allows one to calculate the probabilities of each viable location of the electron as a wave function. However, the theory goes further, saying that until the electron is observed, it is in all possible positions, until the wave function that describes it is collapsed to a specific location by an observation. This creates some interesting philosophical problems and has been seen by some as implying that human beings create reality. Hidden variables, the EPR paper argues, would overcome these problems and allow for reality to be described with the same certainty that applies in Newtonian physics.
Hidden variables would also remove the need for "spooky" forces, as Einstein termed them—forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light. For example, quantum theory implies that measurement of one particle can instantaneously change another particle that may be light years away, if the particles are an entangled pair. Entangled particles are identical entities that share a common origin and have the same properties. Somehow, according to quantum theory, these particles remain in instantaneous contact with each other, no matter how far apart they separate. Hidden variables would allow two entangled particles to have specific values upon creation, thereby doing away with the need for them to be in communication with each other in some mysterious way. The EPR paper caused many concerns for quantum theorists, but as the experiments it describes cannot be performed, the paper presented more of a philosophical problem than a scientific one. However, the later work of John Bell implied that there were specific tests that could be applied to determine whether the "spooky" forces were real or not, and therefore whether there are hidden variables after all. In the 1980s the first such experiment was performed, and many more have been done since. The results imply that "spooky" actions-at-a-distance do indeed exist. Some scientists have challenged the validity of these experiments, and there is still some room for debate. These experiments only mean that "local" hidden variables do not exist, but would still allow "non-local" hidden variables. In this case, local effects are those that occur at or below the speed of light. You can think of the locality of an object as a sphere around it that expands at the speed of light. Outside of this sphere only non-local effects can take place, as nothing can travel faster than the speed of light. Non-local hidden variables, therefore, would have the same spookiness that the EPR paper was trying to avoid. The debate over "hidden variables" is in some sense an argument over the completeness of quantum theory. Newton's laws once seemed to describe all motion, from particles to planets. However, the laws were found to be incomplete and were replaced by relativity, with regard to planets and other large-scale objects such as humans, and by quantum physics, with regard to particles and other very small-scale objects. It seems likely that one day relativity and quantum physics will also be replaced by other theories, if only because the two of them, while explaining their respective areas extremely well, are not compatible with one another. In another sense, the hidden variables debate is a philosophical argument over whether the universe is deterministic and concrete, or merely probabilistic and somewhat spooky. Einstein and others have argued that reality must, on some deeper level, be fixed and solid. The majority of physicists, however, see no need for this desire for physical determinism, arguing that quantum mechanics can currently explain the world of the small-scale very well without the need to add in extras such as "hidden variables." The modern understanding of the nature and behavior of particles is most thoroughly explained by quantum theory. The description of particles as quantum mechanical waves replaces the age-old notion of particles as "balls" or "bullets" in motion.
With important limitations or uncertainties, the quantum wave interpretation of nature, and the mathematical description of the wave attributes of particles, allow accurate predictions about the state (e.g., attributes such as velocity and position) and behavior of particles. Yet, Albert Einstein and others have asserted that the quantum mechanical system is an incomplete description of nature and that there must be undiscovered internal variables to explain what Einstein termed "spooky" forces that, in contradiction to special relativity, seemingly act instantly over great distances. Without hidden variables, quantum theory also presents a paradox of prediction because the sought-after attributes of a particle can, in fact, determine the collapse of the quantum wave itself.

The Quantum Theory

This quantum mechanical view of nature is counter-intuitive, and stretches the language used to describe the theory itself. Essentially, reality, as it relates to the existence and attributes of particles, is, according to quantum theory, dependent upon whether an event is observed. Unlike other measurable waveforms, however, the quantum wave is not easily measured as a discrete entity. The very act of measurement disturbs quantum systems. The attempted observation or measurement of a quantum wave changes the wave in often mathematically indeterminate and, therefore, unpredictable ways. In fact, the act of measurement leads to the collapse of the quantum wave into traditional observations of velocity and position. Despite this counter-intuitive nature of quantum mechanics, it is undoubtedly successful in accurately predicting the behavior of particles. Well-tested, highly verified quantum concepts serve as a cornerstone of modern physics. Although quantum theory is highly successful at predicting the observed properties of atomic line spectra and the results of various interference experiments, there remain, however, problems with simply adopting the irreducibility of quantum mechanisms and the philosophical acceptance of an inherently statistical interpretation of nature that must exist if there are no hidden variables in quantum systems.

The EPR Argument

Einstein, Boris Podolsky, and Nathan Rosen treated the problems of quantum mechanics in great detail in their 1935 classic paper titled "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Eventually their arguments became known as the Einstein-Podolsky-Rosen (EPR) paradox. At the heart of the argument advanced by the three was an attempt to set forth a definition of reality. The EPR definition of reality is well grounded in both classical and relativistic physics (descriptions reconciled with relativity theory) and asserts that physical reality exists (as measured by physical quantities such as velocity and position) when, without disturbing a system, there is a certainty in the ability to predict the value of the physical quantity in question. Although this definition of reality is intuitive (makes sense with our common understandings based upon experience), it then required Einstein, Podolsky, and Rosen to set forth a method by which one could observe a system without disturbing that system. The EPR paper created a thought experiment to meet that challenge. In the EPR example, two bound particles were at rest relative to the observer in a closed system.
If the particles were then to suddenly separate and begin moving in opposite directions, the total momentum of the closed system must, in accordance with the law of conservation of momentum, be conserved. Given that the two particles in their bound-together state were at rest relative to the observer, the initial momentum of the system was zero. Accordingly, as the particles move in different directions, their momenta must be equal and opposite so that the sum of the particle momenta always remains zero. Because the particles move in opposite directions, they carry the same magnitude of momentum but with differing signs (positive or negative) related to the coordinate systems in use to describe the particle motion (i.e., one particle moves in the positive direction as the other particle moves in the negative direction). If the sum of the two particles' momenta were to differ from zero, this condition would violate the law of conservation of momentum. Because the sum of the momenta of the particles must equal zero, Einstein, Podolsky, and Rosen argued that by measuring the momentum of one particle, the momentum of the other particle can be determined with absolute certainty. If the velocity of one particle is known, the velocity of the other particle can be exactly determined without uncertainty. Correspondingly, if the position of one particle is known, the other can also be exactly determined. Given observation of one particle, no interaction on the part of the observer with the second particle is required to know with certainty the state or relevant physical quantity of the particle. In essence, in opposition to fundamental quantum theory assertions, no observation is necessary to determine the state of the particle (e.g., either the particle's velocity or position). In accord with the uncertainty principle, it remains impossible to simultaneously determine both the velocity and the position of the second particle because the measurement of the first particle's velocity would alter that particle's velocity, and then subject it to different conditions than the second particle—essentially rupturing the special bound relationship of the two particles in which the sum of their respective momenta must remain zero. (Niels Bohr, photograph. The Library of Congress.) In the EPR experiment, the fact that the velocity and position of the second particle can be specified imparts a physical reality to the second particle. More importantly, that reality (the physical states of the second particle) is determined apart from any influence of the observer. These factors directly challenge and stand in contrast to the inability of quantum theory to provide descriptions of the state of the second particle. Quantum theory can only describe these attributes in terms of the quantum wave. The EPR paradox then directly challenges this inability of quantum theory by asserting that some unknown variables must exist, unaccounted for by quantum theory, that allow for the determination of the second particle's state. Some quantum theorists respond with the tautology (circular reasoning) that the observation of the first particle somehow determines the state of the second particle—without accounting for a lack of observer interaction or other mechanism of determination.
Hidden variable proponents, however, counter that this argument only strengthens the assertion that hidden variables, unaccounted for by quantum theory, must operate to determine the state of the second particle. An attack on the EPR premise and definition that physical reality exists when physical states are independent of observation is an inadequate response to the EPR paradox, because it simply leaves open the definition of reality without providing a testable alternative. More importantly, if, as quantum theory dictates, the observation of the first particle serves to determine the state of the second particle, there is no accounting for the distance between the particles and the fact that the determination of state in the second particle must be instantaneous with any change in the state of the first particle. Given the speed-of-light limitations of relativity theory, any transmission over any distance that is simultaneous (i.e., requires zero time of transmission) violates relativity theory. Hidden variable interpretations of quantum theory accept the validity and utility of quantum predictions while maintaining that the theory remains an incomplete description of nature. In accord with deterministic physical theory, these hidden variables lie inside the so-called black box of quantum theory and determine the states currently described only in terms of statistical probability or the quantum wave. Moreover, the sum influence of these quantum hidden variables becomes the quantum wave. The Limitations of Quantum Theory Although quantum theory is mathematically complete, the assertion that no hidden variables exist leaves an inherently non-deterministic, probability-based explanation of the physical world. Hidden variable proponents, while granting that quantum mechanics represents the best and most useful model for predicting the behavior of particles, assert that only the discovery and explanation of the hidden variables in quantum theory will allow a complete and deterministic (where known causes lead to known effects) account of particle behavior that will remove the statistical uncertainties that lie at the heart of modern quantum theory. The reliance on an indeterminate, probability-based foundation for quantum theory rests heavily on physicist and mathematician John von Neumann's elegant 1932 mathematical proof that deterministic mechanisms are not compatible with quantum theory. Other physicists and mathematicians, however, were able to construct and assert models based upon deterministic hidden variables that also completely explained the empirical results. David Bohm in the 1950s argued that von Neumann's assumptions, upon which his proof rested, may not have been entirely correct and that hidden variables could exist—but only under certain conditions not empirically demonstrated. Although John Bell's theorem and subsequent experiments testing Bell's inequalities are often touted as definitive proof that hidden variables do not exist, Bell's inequalities test only for local hidden variables and are, therefore, more properly only a test of locality. Bell went on to revisit the EPR paradox and compare it to several popular hidden variable models. Bell's work demonstrated that for certain experiments, classical (i.e., deterministic) hidden variable theories predicted different results than those predicted by standard quantum theory.
Although Bell's results were heralded as decisive in favor of quantum theory, without the need for hidden variables, they did not completely explain quantum entanglements, nor did they rule out non-local hidden variables. As a result, Bell's findings properly assert only that if hidden variables exist, they must be non-local (i.e., an effect in one reference frame that has the ability to simultaneously influence an event in another reference frame, even if the two reference frames are light years apart). The acceptance of the argument that there are no hidden variables also entails the acceptance of quantum entanglement and superposition, wherein particles may exist in a number of different states at the same time. These "Schrödinger's cat" arguments (e.g., that under a given set of circumstances a cat could be both dead and alive) when applied to particle behavior mean that particles can, for example with regard to radioactive decay, exist simultaneously in a decayed and non-decayed state. Moreover, the particle can also exist in innumerable superpositioned states where it exists in all possible states of decay. Although investigation of quantum entanglement holds great promise for communication systems and advances in thermodynamics, the exact extent to which these entangled states can be used or manipulated remains unknown. Although the EPR paradox powerfully argues that quantum entanglement means that quantum theory is incomplete and that hidden variables must exist, the fact that these variables must violate special relativity assertions is an admittedly powerful reason for modern physicists to assert that hidden variables do not exist. Despite the weight of empirical evidence against the existence of hidden variables, it is philosophically important to remember that both relativity theory and quantum theory must be fully correct to assert that there are no undiscovered hidden variables. Without hidden variables, quantum theory remains a statistical, probability-based description of particle theory without the completeness of classical deterministic physics. Although the standard model of quantum physics offers a theoretically and mathematically sound model of particle behavior consistent with experiment, the possible existence of hidden variables in quantum theory remained a subject of serious scientific debate during the twentieth century. Based upon our everyday experience, well explained by the deterministic concepts of classical physics, it is intuitive that there should be hidden variables to determine quantum states. Nature is not, however, obliged to act in accord with what is convenient or easy to understand. Although the existence and understanding of heretofore hidden variables might seemingly explain Albert Einstein's "spooky" forces, the existence of such variables would simply raise the question of whether they, too, contain their own hidden variables. Quantum theory breaks this never-ending chain of causality by asserting (with substantial empirical evidence) that there are no hidden variables. Moreover, quantum theory replaces the need for a deterministic evaluation of natural phenomena with an understanding of particles and particle behavior based upon statistical probabilities. Although some philosophers and philosophically minded physicists would like to keep the hidden variable argument alive, the experimental evidence is persuasive, compelling, and conclusive that such hidden variables do not exist.
The EPR Paradox The classic 1935 paper written by Einstein, Boris Podolsky, and Nathan Rosen (EPR) and titled "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" presented a Gedanken-experiment (German for "thought experiment") that seemingly mandates hidden variables. What eventually became known as the EPR paradox struck at the ability of particles to remain correlated in entangled states even though those particles might be separated by a great distance. Quantum entanglement is a concept of quantum theory that relies on the superposition of possible states for particles. In a two-particle entangled system, the act of measuring one of the entangled particles causes that particle's quantum wave to collapse to a definite state (e.g., a defined velocity or position). Simultaneous with the collapse of the first particle's wave state, the quantum wave of the second particle also collapses to a definite state. Such correlations must be instantaneous, and EPR argued that if there were any distance between the particles, any force acting between the particles would have to exceed the speed of light. Einstein called these forces "spooky actions at a distance." (Albert Einstein, photograph. The Library of Congress.) EPR specifically identified three main problems with the standard interpretations of quantum mechanics that did not allow for the existence of hidden variables. Because of the limitations of special relativity, EPR argued that there could be no transacting force that instantaneously determines the state of the second particle in a two-particle system where the particles were separated and moving in opposite directions. EPR also challenged the uncertainty limitations found in quantum systems wherein the measurement of one state (e.g., velocity) makes impossible the exact determination of a second state (e.g., position). Most importantly, the EPR paper challenged the quantum view of nature as, at the quantum level, a universe explained only by probability rather than classical deterministic predictability where known causes produce known results. Einstein in particular objected to the inherent fundamental randomness of quantum theory (explaining his often quoted "God does not play dice!" challenge to Niels Bohr and other quantum theorists) and argued that for all its empirical usefulness in predicting line spectra and other physical phenomena, quantum theory was incomplete and that the discovery of hidden variables would eventually force modifications to the theory that would bring it into accord with relativity theory (especially concerning the absolute limitation of the speed of light). Quantum theory, highly dependent on mathematical descriptions, depicts the wave nature of matter with a wave function (quantum waves). The wave function is used to calculate probabilities associated with finding a particle in a given state (e.g., position or velocity). When an observer interacts with a particle by attempting to measure a particular state, the wave function collapses and the particle takes on a determinable state that can be measured with a high degree of accuracy. If a fundamental particle such as an electron is depicted as a quantum wave, then it has a nonzero probability of being at two different points at the same time.
If, however, an observer attempts to determine the location of the particle and determines it to be at a certain point, then the wave function has collapsed, in that the probability of finding the electron at any other location is, in this measured state, zero. The EPR paradox seemingly demands that for the wave function to collapse at the second point, some signal must be, in violation of special relativity, instantaneously transmitted from the point of measurement (i.e., the point of interaction between the observer and the particle) to any other point, no matter how far away that point may be, so that at that point the wave function collapses to zero. David Bohm's subsequent support of EPR through a reconciliation of quantum theory with relativity theory was based upon the existence of local hidden variables. Bohm's hypothesis, however, suffering from a lack of empirical validation, smoldered on the back burners of theoretical physics until John Bell's inequalities provided a mechanism to empirically test the hidden variable hypothesis versus the standard interpretation of quantum mechanics. John Bell's Inequalities Bell's theorem (a set of inequalities) and related work dispelled the idea that there are undiscovered hidden variables in quantum theory that determine particle states. Bell's inequalities, verified by subsequent studies of photon behavior, predicted testable differences between entangled photon pairs that were in superposition and entangled photons whose subsequent states were determined by local hidden variables. Most importantly, Bell provided a very specific mechanism, based upon the polarization of photons, to test Bohm's local hidden variable hypothesis. Polarized photons are created by passing photons through optical filters or prisms that allow the transmission of light polarized in one direction (a particular orientation of the planes of the perpendicular electromagnetic wave) while blocking differently oriented photons. Most useful to tests of the EPR assertions are polarized photons produced by atomic cascades. Such photons are produced as electrons decay from higher-energy orbitals toward their ground state via a series of quantum jumps from one allowable orbital level to another. The law of the conservation of energy dictates that as electrons instantaneously transition from one orbital level to another they must give off a photon of light with exactly the same amount of energy as the difference between the two orbitals. An electron moving toward the ground state that makes that transition through two discrete orbital jumps (e.g., from the fourth orbital to the third and then from the third to the first) must produce two photons with energy (frequency and wavelength differences) directly related to the differences in total energy of the various orbitals. Of particular interest to EPR studies, however, is the fact that in cascades where there is no net rotational motion, the photons produced are quantum-entangled photons in the sense that they must have specifically correlated polarizations. If the polarization of one photon can be determined, the other can be exactly known without any need for measurement.
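To make the contrast concrete before turning to the experiments, here is a short numerical sketch (in Python with NumPy; the angle settings are the standard CHSH choices, and the local model is a deliberately simple toy, not Bohm's actual proposal) comparing the quantum prediction for such polarization correlations with a local-hidden-variable model:

```python
import numpy as np

# Quantum prediction for entangled photon polarization correlations:
# E(a, b) = cos(2*(a - b)) for analyzers at angles a and b.
def E_quantum(a, b):
    return np.cos(2 * (a - b))

# Standard CHSH angle settings (in radians): 0, 45, 22.5 and 67.5 degrees.
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S_qm = E_quantum(a1, b1) - E_quantum(a1, b2) + E_quantum(a2, b1) + E_quantum(a2, b2)
print(f"quantum CHSH value: S = {S_qm:.3f}")  # 2*sqrt(2), about 2.828

# A toy local-hidden-variable model: each pair carries a shared hidden
# polarization angle lam, and each detector answers +/-1 deterministically.
rng = np.random.default_rng(0)
lam = rng.uniform(0.0, np.pi, 200_000)

def E_local(a, b):
    out_a = np.sign(np.cos(2 * (a - lam)))
    out_b = np.sign(np.cos(2 * (b - lam)))
    return np.mean(out_a * out_b)

S_lhv = E_local(a1, b1) - E_local(a1, b2) + E_local(a2, b1) + E_local(a2, b2)
print(f"local hidden variable CHSH value: S = {S_lhv:.3f}")  # never exceeds 2
```

The toy local model can never exceed the classical bound of 2, while the quantum prediction reaches 2√2 ≈ 2.83; this gap is exactly what the experiments described below set out to measure.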
Although the details of the measurement process, based upon the angles of various filters and measurement of arrival times of polarized photon pairs taking different paths, are beyond the scope of this article, the most critical aspect is that the outcomes predicted by standard quantum theory are different from the outcomes predicted if hidden variables exist. This difference in predicted outcomes makes it possible to test Bell's inequalities, and, in fact, a number of experiments have been performed to test exactly for these differences. In every experiment to date, the results are consistent with the predictions made by the standard interpretation of quantum mechanics and inconsistent with the existence of any local hidden variables as proposed by Bohm. In 1982, the French physicist Alain Aspect, along with others, performed a series of experiments that demonstrated that between photons separated by short distances there was "action at a distance." In 1997, Nicolas Gisin and colleagues at the University of Geneva extended the distances between entangled photons to a few kilometers. Measurements of particle states at the two laboratory sites showed that the photons adopted the correct state faster than light could have possibly traveled between the two laboratories. Any Empirical Evidence? In modern physics, Einstein's "spooky" actions underpin the concept of non-locality. Local, in this context, refers to influences that propagate no faster than the speed of light. Although Bell's inequality does not rule out the existence of non-local hidden variables that could act instantaneously over even great distances, such non-local hidden variables or forces would have a seemingly impossible theoretical and empirical barrier to surmount. If such non-local hidden variables exist, they must act or move faster than the speed of light, and this, of course, would violate one of the fundamental assertions of special relativity. Just as quantum theory is well supported by empirical evidence, so too is relativity theory. Accordingly, for hidden variables to exist, both quantum and relativity theories would need to be rewritten. Granting that quantum and relativity theories are incompatible and that both may become components of a unified theory at some future date, this is certainly not tantamount to evidence for hidden variables. The only hope for hidden variable proponents is if the hidden variables can act non-locally, or if particles have a way to predict their future state and make the needed transformations as appropriate. Such transactional interpretations of quantum theory use a reverse-causality argument to allow the existence of hidden variables that does not violate Bell's inequality. Other "many worlds" interpretations transform the act of measurement into the selection of a physical reality among a myriad of possibilities. Not only is there no empirical evidence to support this hypothesis, but it also severely strains Ockham's razor (the idea that given equal alternative explanations, the simpler is usually correct). What hidden variable proponents have in common is the claim that particles are of unknown rather than undefined state when in apparent superposition. Although the hidden variable, transactional, or "many worlds" interpretations of quantum theory would make the quantum world more understandable in terms of conventional experience and philosophical understanding, there is simply no experimental evidence that such interpretations of quantum theory have any basis or validity.
The mere possibility that any argument may be true does not in any way provide evidence that a particular argument is true. In contrast to the apparent EPR paradox, it is a mistake to assume that quantum theory demands or postulates faster-than-light forces or signals (superluminal signals). Both quantum theory and relativity theory preclude the possibility of superluminal transmission, and, to this extent, quantum theory is consistent with relativity theory. For example, the instantaneous transformation of electrons from one allowed orbital (energy state) to another is most properly understood in terms of wave collapse rather than through some faster-than-light travel. The proper mathematical interpretation of the wave collapse completely explains quantum leaps, without any need for faster-than-light forces or signal transmission. Instead of a physical form or independent reality, the waveform is best understood as the state of an observer's knowledge about the state of a particle or system. Most importantly, although current quantum theory does not completely rule out the existence of hidden variables under every set of conceivable circumstances, the mere possibility that hidden variables might exist under such special circumstances is in no way proof that hidden variables do exist. There is simply no empirical evidence that such hidden variables exist. More importantly, quantum theory makes no claim to impart any form of knowing or consciousness on the behavior of particles. Although it is trendy to borrow selected concepts from quantum theory to prop up many New Age interpretations of nature, quantum theory does not provide for any mystical mechanisms. The fact that quantum theory makes accurate depictions and predictions of particle behavior does not mean that the mathematical constructs of quantum theory depict the actual physical reality of the quantum wave. Simply put, there is no demand that the universe present us with easy-to-understand mechanisms of action.
Further Reading
Bell, J. "On the Einstein Podolsky Rosen Paradox." Physics 1, no. 3 (1964): 195-200.
Bohr, N. "Quantum Mechanics and Physical Reality." Nature 136 (1935): 1024-26.
Cushing, J. T., and E. McMullin. Philosophical Consequences of Quantum Theory. Notre Dame: University of Notre Dame Press, 1989.
Einstein, A., B. Podolsky, and N. Rosen. "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review 47 (1935): 777-80.
Heisenberg, W. The Physical Principles of the Quantum Theory. Trans. C. Eckart and F. C. Hoyt. New York: Dover, 1930.
Popper, K. Quantum Theory and the Schism in Physics. London: Hutchinson, 1982.
Schrödinger, E. "Discussion of Probability Relations Between Separated Systems." Proceedings of the Cambridge Philosophical Society 31 (1935): 555-62.
Von Neumann, J. Mathematical Foundations of Quantum Mechanics. Trans. R. Beyer. Princeton: Princeton University Press, 1955.
Wheeler, J. A., and W. H. Zurek, eds. Quantum Theory and Measurement. Princeton: Princeton University Press, 1983.
Key Terms
Determinism: Causes precede effects—and there is a clear chain of causality that can be used to explain events. In essence, if a set of conditions is completely known, then an accurate prediction can be made of future events. Correspondingly, the behavior of particles could be explained if all of the variables or mechanisms (causes) influencing the behavior of a particle were completely known.
Action at a distance: The phenomenon wherein a force, act (observation), or event in one place simultaneously (instantly) influences an event or particle in another place, even if there is a vast (e.g., light years) distance between them.
Probability: In quantum theory, not all possible states of matter—attributes such as position, velocity, spin, etc.—have equal probabilities. Although states are undetermined until measured, some are more likely than others. Quantum theory allows predictions of states based upon the probabilities represented in the quantum wave function.
Quantum entanglement: The ability to link the states of two particles. Entanglement can be produced by random or probability-based processes (e.g., under special conditions two photons with correlated states can sometimes be produced by passing one photon through a crystal). Quantum entanglement is essentially a test for non-locality. Recently NIST researchers were able to entangle an ion's internal spin with its external motion and then entangle (correlate) those states with the motion and spin of another atom. The concept of quantum entanglement holds great promise for quantum computing.
Quantum wave: The properties of matter can be described in terms of waves and particles. De Broglie waves describe the wave properties of matter related to momentum. Waves can also be described as a function of probability density. The quantum wave represents all states and all potentialities. The differential equation for quantum waves is the Schrödinger equation (also termed the quantum wave function) that treats time, energy, and position.
Superluminal: Faster-than-light transmission or motion.
Superposition: A concept related to quantum entanglement. For example, if one of two particles with opposite spins, which in a combined state would have zero spin, is measured and determined to be spinning in a particular direction, the spin of the other particle must be equal in magnitude but in the opposite direction. Superposition allows a particle to exist in all possible states of spin simultaneously, and the spin of a particle is not determined until measured.
Monday, April 24, 2006 A debate about staggered fermions Recently, there have been a number of short papers on the arXiv that discussed some potential problems that the usual procedure of taking the fourth root of the staggered fermion determinant to obtain a single-flavour theory might bring with it. As a little reminder, staggered fermions are obtained from naive fermions by redistributing the spinor degrees of freedom across different lattice sites. As a result, staggered fermions describe a theory with four (rather than the 16 naive) degenerate fermion flavours, usually called "tastes" to distinguish them from real flavours. In order to obtain a theory with a single physical flavour, one usually takes the fourth root of the fermionic determinant for staggered fermions; this is correct in the free theory and in perturbation theory, but nobody really knows whether it makes sense nonperturbatively. In the paper starting this recent debate, Creutz claimed that this procedure leads to unphysical results. His argument is based on the observation that with an odd number of quark flavours, physics is not invariant under a change of sign of the quark mass term, and hence the chiral expansion must contain odd powers of the quark mass. Since the staggered theory is invariant under a change of sign of the quark mass, so will be its fourth-rooted descendant, and hence it can only pick up even terms in the chiral expansion. Thus, Creutz claims, staggered fermions describe incorrect physics. Within a week, there was a reply from Bernard, Golterman, Shamir and Sharpe, who claim that Creutz's argument is flawed since the quark mass in the theory corresponding to the continuum limit of the rooted staggered theory is always positive, regardless of the sign of the original quark mass, and since moreover the nonanalyticity inherent in taking a root leads to the emergence of odd powers of the (positive) mass in the continuum limit. This was followed by a third paper by Dürr and Hoelbling, in which they show how one may define a "smart" determinant for staggered fermions (by including a phase factor that depends on the topological index of the gauge field background) that allows one to reach the regime of negative quark masses. I have to admit that I do not fully understand this work, and enlightenment from readers is appreciated. The debate over the correctness of the fourth root trick for staggered fermions is likely to go on for a while, particularly given the fact that the choice of fermion discretisation has become an almost religious issue within the lattice community. Personally, I certainly hope that staggered fermions give the correct physics, but I am not sure whether I actually have enough evidence or understanding to have an opinion either way. Update: The paper by Creutz has been updated with a reply to the objections raised by Bernard et al. (leading to the rather strange situation of circular citations between papers bearing different date stamps). Creutz now argues that while the problems he mentions may go away in the continuum limit, observables that develop a divergent dependence on a regulator at isolated points (such as the chiral condensate at m=0) are an "absurd behaviour" for a regulator, and that Wilson fermions are preferable in this regard. I am not entirely sure in how far the existence of exceptional configurations is a less absurd behaviour, though. I suppose there may be another round in this debate (with yet more circular citations).
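For readers who have never seen the fourth-root trick in action, the following toy sketch (Python with NumPy; the "spectra" are invented stand-ins, not eigenvalues of any actual lattice Dirac operator) illustrates why rooting is exact when the taste quartets are exactly degenerate, and why taste breaking at finite lattice spacing makes it questionable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in spectrum for a one-flavour operator (positive eigenvalues).
one_flavour = 0.1 + rng.random(10)

# Free staggered operator: eigenvalues come in exact four-fold taste quartets,
# so the fourth root of its determinant reproduces the one-flavour determinant.
staggered = np.repeat(one_flavour, 4)
print(np.prod(one_flavour))        # one-flavour determinant
print(np.prod(staggered) ** 0.25)  # rooted staggered determinant: identical

# At nonzero lattice spacing the quartets are only approximately degenerate
# (taste breaking); the rooted determinant then differs from any product of
# exactly degenerate quartets, which is the crux of the question.
broken = staggered * (1.0 + 0.05 * rng.standard_normal(staggered.size))
print(np.prod(broken) ** 0.25)
```

Whether the small discrepancy in the last line goes away in the continuum limit in the right way is, in essence, what the papers above argue about.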
Tuesday, April 11, 2006 More on (2+1)d glueballs In a new paper, Leigh, Minic and Yelnikov give a more detailed follow-up on their earlier paper (discussed on this blog here) about the analytical solution of (2+1)-dimensional pure Yang-Mills theory. Their basic setup is as before, but they give a lot more details: They start with the functional Schrödinger picture analysis of (2+1)d pure Yang-Mills theory performed by Karabali, Kim and Nair to re-express the theory in terms of new variables, and then make a generalised Gaussian ansatz for the vacuum wave functional containing an undetermined kernel K(Δ/m^2). The Schrödinger equation is then turned into an ordinary differential equation for K(L), which can be solved in terms of Bessel functions. It follows that the glueball masses can be written as products of a sum of Bessel function zeros and the Karabali-Kim-Nair mass. Leigh, Minic and Yelnikov compare their predictions to lattice results and get mostly good agreement (with some uncertainty about the correct identification of excited states in the lattice simulations in a few cases). Finally, they note and discuss the near-degeneracy of the glueball spectrum that follows from the asymptotic form of the Bessel function zeros, as discussed here and here. These are very interesting results, and their work may be considered a major breakthrough, although I remain sceptical as to whether we are going to see anything similar in the (3+1)d case anytime soon (or ever).
DFT: Density Functional Theory The quantum mechanical wave function contains, in principle, all the information about a given system. For the case of a simple 2-D square-well potential or even a hydrogen atom we can solve the Schrödinger equation exactly in order to get the wave function of the system. We can then determine the allowed energy states of the system. Unfortunately it is impossible to solve the Schrödinger equation exactly for an N-body system. Evidently, we must involve some approximations to render the problem soluble, albeit still difficult. Here we have our simplest definition of DFT: a method of obtaining an approximate solution to the Schrödinger equation of a many-body system. DFT has proved to be highly successful in describing structural and electronic properties in a vast class of materials, ranging from atoms and molecules to simple crystals to complex extended systems (including glasses and liquids). Furthermore, DFT is computationally simple. For these reasons DFT has become a common tool in first-principles calculations aimed at describing – or even predicting – properties of molecular and condensed matter systems. DFT computational codes are used in practice to investigate the structural, magnetic and electronic properties of molecules, materials and defects. Methods of DFT A DFT calculation adds an additional step to each major phase of a Hartree-Fock calculation. This step is a numerical integration of the functional or various derivatives of the functional. Thus in addition to the sources of numerical error in Hartree-Fock calculations (integral accuracy, SCF convergence, CPHF convergence), the accuracy of DFT calculations also depends on the number of points used in the numerical integration. The UltraFine integration grid (corresponding to Integral=UltraFine) is the default in Gaussian 16. This grid greatly enhances calculation accuracy at reasonable additional cost. We do not recommend using any smaller grid in production DFT calculations. Note also that it is important to use the same grid for all calculations where you intend to compare energies (e.g., computing energy differences, heats of formation, and so on). Larger grids are available when needed (e.g., tight geometry optimizations of certain kinds of systems). An alternate grid may be selected with the Integral=Grid option in the route section. The second Hohenberg–Kohn theorem defines an energy functional for the system and proves that the correct ground-state electron density minimizes this energy functional. Advantages and Limitations of DFT A major challenge in the application of block copolymer directed self-assembly (DSA) to advanced lithography is the exploration of large design spaces, including the selection of confinement shape and size, surface chemistry to affect wetting conditions, copolymer chain length and block fraction. To sweep such large spaces, a computational model is ideally both fast and accurate. In this study, we investigate various incarnations of the density functional theory (DFT) approach and evaluate their suitability to DSA applications. We introduce a new optimization scheme to capitalize on the speed advantages of DFT, while minimizing loss of accuracy relative to the benchmark of self-consistent field theory (SCFT). Although current DFT models afford a 100-fold reduction in computational complexity over SCFT, even the best optimized models fail to match SCFT density profiles and make extremely poor predictions of commensurability windows and defect energetics.
These limitations suggest that SCFT will remain the gold standard for DSA simulations in the near future. Successes and Failures of DFT DFT, even in the simplest LDA approximation, turns out to be much more successful than expected. Especially for solids, LDA is computationally much simpler than HF with the true exchange potential and no more complex than Slater's local exchange approximation. Yet, LDA yields results that compare well to HF results, even in atoms and molecules – highly inhomogeneous systems for which an approximation based on the homogeneous electron gas would hardly look appropriate. The best results are however obtained in solids, whose structural and vibrational properties are in general well described: the correct crystal structure is usually found to have the lowest energy; bond lengths, bulk moduli, and phonon frequencies are accurate within a few percent. One may wonder why LDA is so successful, given its resemblance to the not-so-praised Slater approximation to HF. One reason is somewhat fortuitous: LDA contains a fair amount of error compensation between the exchange and correlation parts. A deeper reason is explained in Sec. 2.6: LDA grants a good description of the spherical term of the so-called "exchange-correlation hole". LDA also has some well-known serious problems. Some can be avoided by using better functionals, some others have a deeper and more fundamental nature.
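As a toy illustration of why the integration grid matters, here is a small sketch (plain Python with NumPy, not Gaussian itself; it assumes a hydrogen-like 1s density and the standard spin-unpolarized LDA exchange formula, in atomic units) showing how a numerically integrated exchange energy converges as the radial grid is refined:

```python
import numpy as np

# Spin-unpolarized LDA exchange: E_x = C_x * integral of rho^(4/3) over space,
# with C_x = -(3/4) * (3/pi)^(1/3). Evaluated here for a hydrogen-like 1s
# density rho(r) = exp(-2r)/pi on radial grids of increasing size.
C_X = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)

def lda_exchange_energy(n_points, r_max=20.0):
    r = np.linspace(1e-6, r_max, n_points)
    rho = np.exp(-2.0 * r) / np.pi
    integrand = C_X * rho ** (4.0 / 3.0) * 4.0 * np.pi * r**2  # radial measure
    return np.trapz(integrand, r)

for n in (10, 50, 250, 1250):
    print(f"{n:5d} grid points: E_x = {lda_exchange_energy(n):.6f} hartree")
```

Coarse grids give visibly wrong energies; real DFT codes face the same trade-off in three dimensions, which is why a consistent, sufficiently fine grid matters whenever energies are compared.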
Imaginary and Complex Numbers Let’s start with the following calculation: $-2 = (-8)^{1/3} = (-8)^{2/6} = ((-8)^2)^{1/6} = 64^{1/6} = 2$. This must be wrong… but I don’t see why! The brutal answer would be that you should never write $a^p$ when $a$ is negative… But, this would conceal some wonderful underlying mathematics, including and especially the weirdness and awesomeness of imaginary and complex numbers! Let’s discover these mathematical objects in this article! Our approach will be highly geometric and, I think, much more insightful than the one you have learned (or will learn) at school. The Geometry of $n$-th Root To understand what just happened, let’s focus on the first equality, namely $-2 = \sqrt[3]{-8}$. It reads “$-2$ is the cube root of $-8$”. But what does that mean? Doesn’t it mean that $(-2)^3 = -8$? Yes! But, let’s have a geometrical understanding of what you’ve just said! To do so, notice that if we multiply both sides of the equality by any number, then the equality still holds. Indeed, if we multiply both sides by $1$ we obtain $1 \times (-2)^3 = 1 \times (-8)$, and if we multiply by $241$, we have $241 \times (-2)^3 = 241 \times (-8)$. More generally, if $x$ represents any number, we have $x \times (-2)^3 = x \times (-2) \times (-2) \times (-2) = x \times (-8)$. Thus, we can now see the operation “$\times (-2)$” as an operation on numbers, which, when applied three times, is equivalent to the operation “$\times (-8)$”! I’m still not sure where you’re going with this… Here’s the awesome part. These operations correspond to geometrical transformations of the number line (they’re symmetries)! For instance, multiplying by $(-2)$ corresponds to inverting it, and stretching it by a factor 2. This is what’s done below three times! Multiplication by -2 It might look nice, but I don’t see the point of the geometrical approach… Be patient! Let’s simplify the problem a little bit and consider the equality $-1 = (-1)^{1/3}$. What does it mean geometrically? Well, I guess that “$\times (-1)$” corresponds to just inverting the number line, doesn’t it? Yes! Now, this means that we can translate the algebraic relation $-1=(-1)^{1/3}$ into the geometrical phrase: Inverting the number line three times is equivalent to inverting it once. Sweet, isn’t it? Multiplication by -1 Yes! But I still don’t see the point… Hehe! The key idea of complex numbers lies in the next question… Is $(-1)$ the only cube root of $(-1)$? I think so… If you take a positive number, its cube will be positive… So the only number that works is $-1$… Don’t think numerically! The whole point of my construction was to consider the problem geometrically! In other words, is there a geometrical operation on the number line, which, when applied three times, corresponds to inverting it? Come on! You can find it! I know! How about rotating the number line by a 6th of a turn? In fact, there are two such 6th-of-a-turn operations, depending on whether the turn is clockwise or anti-clockwise. These two operations, each applied three times to the number line, are described below. Sixth of a Turn In addition to the “$\times (-1)$” operation, this gives us a total of three cube roots of $(-1)$! Does $(-8)$ have several cube roots too? What about the 6th roots of $64$? Great questions! In fact, you should try to answer by yourself! Humm… I guess a cube root of $(-8)$ can be obtained by a 6th of a turn (like $(-1)$), combined with a stretching of the number line, can’t it? Exactly!
Similarly, 6th roots of $64$ include dilations by a factor 2 combined with a rotation of one or two 6ths of a turn, clockwise or anti-clockwise. Plus, there are also the operations “$\times (-2)$” and “$\times 2$”. This gives us six 6th roots of $64$. And as you can guess (or prove!), more generally, any nonzero number has $n$ $n$-th roots! I’ve heard about the square root of $(-1)$… Is it obtained similarly to what we’ve done? Once again, you should be the one who gives me the answer! In other words, is there a geometrical transformation which, when applied twice to the number line, is equivalent to simply inverting it? I know! Rotations of a quarter of a turn! There you go! By convention, we refer to the anti-clockwise quarter-of-a-turn rotation as $i$. This $i$ is so important that we have given it different names… which I all dislike! It’s known as the imaginary number (imaginary? number?), the square root of $(-1)$, or, worst of all, $\sqrt{-1}$. What’s wrong with $\sqrt{-1}$? What’s very wrong is that $i$ is not the only square root of $(-1)$. The clockwise quarter-of-a-turn rotation is a square root of $(-1)$ too! Plus, if you can’t write $\sqrt[3]{-8}$, then you definitely can’t write $\sqrt{-1}$! OK… That’s cool but I don’t see how this fixes the paradox of the introduction! Hehe… We can now answer that elegantly! Solution to the Paradox The major flaw lies in the non-uniqueness of the $n$-th roots. This is what 19th-century French mathematician Évariste Galois called the ambiguity of $n$-th roots. More precisely, there is no actual uniqueness of the cube root of $(-8)$, nor is there a uniqueness of the 6th root of $64$. In particular, $(-2)$ is a 6th root of $64$, but it’s just not the one we refer to by $\sqrt[6]{64}$. Still, there’s a strong relation between cube roots of $(-8)$ and 6th roots of $64$, isn’t there? Yes. And the other flaw of the formula of the introduction lies in the relation $(-8)^{1/3} = ((-8)^2)^{1/6}$. Literally, it says that a cube root of $(-8)$ is a sixth root of the square of $(-8)$. Humm… I’m not sure I understand… Once again, our salvation will come from geometry! Geometrically, this says that an operation which is equivalent to $\times (-8)$ when applied three times is equal to an operation which, when applied six times, is equivalent to applying $\times (-8)$ twice. Below is a figure which illustrates this statement. Cube Root of -8 and 6th Root of 64 As you can deduce from the figure above, any cube root of $(-8)$ is also a 6th root of $64$. Indeed, applying the green operation six times will necessarily be equivalent to applying “$\times (-8)$” twice. But some of the 6th roots of $64$ aren’t cube roots of $(-8)$! It’s not too hard to prove that these are the cube roots of $8$, the other square root of $64$. I invite you to do that as an exercise! Now, by denoting $\sqrt[3]{-8}$ the set of all cube roots of $-8$ and $\sqrt[6]{64}$ the set of all 6th roots of $64$, we can elegantly correct the paradox, as checked numerically below! These notations are highly non-conventional and I have been blamed for using them. But I believe they provide an insightful and beautiful solution to the paradox. Also, if you can tell the difference between $n$-th roots and the classical notation $\sqrt[n]{x}$ for $x \geq 0$, then you’ll have made a huge breakthrough in the understanding of $n$-th roots. Granted, this could have been proved without involving geometrical operations, as the key aspect is the property of groups in pure algebra, which the geometrical operations form.
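Numerically, these root sets are easy to compute. Here is a quick sketch (in Python with NumPy; the helper nth_roots is my own name, not a standard function):

```python
import numpy as np

TAU = 2 * np.pi  # a full turn

# All n-th roots of a: if a has modulus rho and argument theta, the roots have
# modulus rho^(1/n) and arguments (theta + k*TAU)/n for k = 0, ..., n-1.
def nth_roots(a, n):
    rho, theta = abs(a), np.angle(a)
    return [np.round(rho ** (1 / n) * np.exp(1j * (theta + k * TAU) / n), 10)
            for k in range(n)]

cube_roots = nth_roots(-8, 3)   # three cube roots of -8, among them -2
sixth_roots = nth_roots(64, 6)  # six 6th roots of 64, among them -2 and +2
print(cube_roots)
print(sixth_roots)

# Every cube root of -8 is a 6th root of 64, but not conversely:
print(all(any(abs(r - s) < 1e-9 for s in sixth_roots) for r in cube_roots))
```

The three remaining 6th roots of $64$ are exactly the cube roots of $8$, as claimed above.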
But I think it’s much more insightful when you consider this equation for geometrical operations. Using our notations of $n$-th roots as defining the set of all $n$-th roots, note that, if the fraction $n/m$ is not irreducible, we have $(a^{n})^{1/m} \neq (a^{1/m})^n$. For instance, when $a=1$, $n=m=2$, we have $(1^2)^{1/2} = \{1, -1\}$, while $(1^{1/2})^2 = \{(-1)^2, 1^2\} = \{1\}$. One natural way to define $a^{n/m}$ would then be $a^{n/m} = (a^{1/m})^n$. I’ll leave it to you as an exercise to prove that $a^q$ would then be well-defined for any $q \in \mathbb Q$. Homotheties and Rotations Although insightful, the descriptions of complex numbers we have given so far aren’t very rigorous. So what’s the rigorous definition of complex numbers? From a geometrical perspective, complex numbers should actually be regarded as a certain collection of transformations of a plane rather than of a line. This plane is called the complex plane. It is infinite in all directions and has a unique center, called the origin. Plus, one of its axes which goes through the origin is known as the real number line. The axis perpendicular to the real number line is known as the imaginary number line. So what kind of transformations of the plane are we talking about? The transformations which correspond to complex numbers are those we have been using so far: homotheties and rotations centered on the origin. These two operations are the symmetries described below: Rotation and Homothety The key aspect of these operations is that any number of rotations and homotheties can be combined, and that the order in which they are combined does not matter. In technical terms we say that the composition of these geometrical transformations is associative and commutative. Another important fact is that all usual numbers can be matched uniquely with one such geometrical operation. For instance, what’s the operation corresponding to the number $2$? And I guess that it corresponds to a homothety by a factor 2… Yes, which is also known as “$\times 2$”! What about the number $(-1)$? Multiplication by -2 of the Complex Plane The operation “$\times (-1)$” inverted the number line… So I guess it’s a symmetry along the imaginary axis! Nope… Keep in mind that we can only use homotheties and rotations! Arg… Humm… I know! It’s a half-turn rotation! Excellent! Let me give you one last example: $(-2)$ is a homothety of a factor 2 combined with a rotation of a half turn. Now, more generally, any combination of a homothety and a rotation forms a complex number. Wait… A complex number? Yes! Now, the homothety is defined by a positive factor called the modulus and is commonly denoted $\rho$. By convention, the rotation is defined by an angle of anti-clockwise turn called the argument and is often denoted $\theta$. Since these two parameters uniquely define the combination of a homothety and a rotation, each complex number can be represented by the couple $(\rho, \theta)$. In terms of pure algebra, we are here defining the set of complex numbers by $\mathbb C = (\mathbb R_+^* \times SO(2)) \cup \{0\} = (\mathbb R_+^* \times \mathbb R/\tau \mathbb Z) \cup \{0\}$. As a topological group, $\mathbb C^* = \mathbb C - \{0\}$ is then trivially isomorphic to $\mathbb R_+^* \times \mathbb S^1$, where $\mathbb S^1 = \mathbb R/\tau\mathbb Z$ is the circle.
A clever extension of this approach can then be defined to construct the set of quaternions $\mathbb H$, which nearly equals $(\mathbb R_+^* \times SO(3)) \cup \{0\}$ (technically, the “rotations” of $\mathbb H$ form a double covering of $SO(3)$). Quaternions are a bit more complicated though, as they are not commutative. Indeed, as you can see if you play with a Rubik’s cube, two rotations in space do not commute in general. If you can, please write about quaternions! And any number is a complex number? Well, as we’ve said, any positive number is just a homothety, together with a rotation of angle $0$. Thus, any positive number $x$ is the complex number $(x, 0)$. Now, if $x$ is a positive number, then $(-x)$ corresponds to a homothety of factor $x$, and a rotation of a half turn. Since a half turn corresponds to the angle $\pi$, the number $(-x)$ is thus the complex number $(x, \pi)$. What? A half turn is $\pi$? Shouldn’t we rather give the full turn a name like $\tau$, and call the half turn $\tau/2$? I know! Some mathematicians even think that $\pi$ should be withdrawn from all equations and replaced by $\pi = \tau/2$. There’s even a manifesto supporting that… as you can see in the following awesome video by ViHart: I personally much prefer $\tau$ over $\pi$… Since you’ve probably learned $\pi$, I’ll try to insert it in this article, but I’ll be doing most of it with $\tau$. In particular, note that if $x > 0$, then $(-x)$ is the complex number $(x, \tau/2)$. What about the number zero? Humm… Good remark. We need a new transformation which corresponds to zero! This transformation consists in collapsing the whole complex plane onto its origin. One thing troubles me… You’ve been saying that complex numbers are geometrical transformations? In what possible sense are they numbers? Not in an obvious one for sure! But, for one thing, we can multiply complex numbers. This corresponds to performing successively the geometric operations associated to the complex numbers. And what’s beautiful is that this has an algebraic translation! What I mean by that is that multiplying $(\rho_1, \theta_1)$ by $(\rho_2, \theta_2)$ corresponds to two homotheties by factors $\rho_1$ and $\rho_2$ and two rotations of angles $\theta_1$ and $\theta_2$. Now, two homotheties of factors $\rho_1$ and $\rho_2$ combine into a homothety of factor $\rho_1 \times \rho_2$, while two rotations of angles $\theta_1$ and $\theta_2$ result in a rotation of angle $\theta_1+\theta_2$. Thus, we have the product $(\rho_1, \theta_1) \times (\rho_2, \theta_2) = (\rho_1 \times \rho_2, \theta_1 + \theta_2)$. How sweet is that? Note that angles are defined up to $\tau$, which means that the angle $\theta+\tau$ is the same as $\theta$. But let’s not dwell too much on modular arithmetic. If you can though, please write about it! Now, if you are familiar with modular arithmetic, then note that, in pure algebra terms, what we’ve done here is defining the group $(\mathbb C^*, \times)$ as the product group $(\mathbb R_+^*, \times) \times (\mathbb R /\tau \mathbb Z, +)$. Euler's Formula (bis) For reasons I won’t be dwelling on here, Leonhard Euler showed that it made sense to write the complex number $(\rho, \theta)$ as $\rho e^{i\theta}$. Given this writing, the multiplication of two complex numbers follows the usual laws of algebra, as $(\rho_1 e^{i\theta_1}) \times (\rho_2 e^{i\theta_2}) = (\rho_1 \rho_2) e^{i(\theta_1+\theta_2)}$.
Also, plugging in $\rho=1$ and $\theta=\pi$, we obtain the equality $e^{i\pi} = (1, \pi) = -1$, which is usually rewritten as $e^{i\pi} + 1=0$. This beautiful formula is known as Euler’s identity! But, in spite of probably angering the Swiss scholar, I’d rather have it written with $\tau$ as $e^{i\tau} = 1$. This last formula is much more insightful, as it says that $\tau$ represents a full turn. To understand why complex numbers can be written $\rho e^{i\theta}$, check my article on Euler’s identity! I guess this is a nice construction, but I don’t see the point… The ability complex numbers have to describe rotations is the reason why they are so widely used in oscillation problems in physics and engineering. Instead of involving ugly trigonometry, the function $f(t)=e^{it}$ provides an elegant description of these motions, which greatly facilitates computations! But that’s still just the tip of the iceberg. To unveil the true magic of complex numbers, we’ll need to dig deeper! Points in the Complex Plane In the 19th century, German mathematician Carl Friedrich Gauss, the Prince of mathematics, provided a powerful visualization of complex numbers. To get there, notice the incredible fact that $1 \times x=x$ when $x$ is a number. Thus, if I show you a geometrical transformation “$\times x$”, then you can easily find which $x$ I chose by looking at what point the number $1$ is sent to. Similarly, if I give you a geometrical transformation $(\rho, \theta)$, then you can find out the values of $\rho$ and $\theta$, as they’ll be the polar coordinates of the point $1$ is sent to! This point is called the image of 1. The polar coordinates? Can you give an example? Sure! Below is the combination of a homothety by a factor 2 and a rotation by an angle $2\tau/3$ (2 thirds of a turn). Follow 1 The factor of homothety $\rho$ is the distance between the image of 1 and the origin, while the angle of rotation $\theta$ is the (anti-clockwise) angle from $1$ to its image. These look like the polar coordinates! Exactly! This shows that any geometrical transformation can be translated into a point in the complex plane whose polar coordinates are given by the factor of homothety and the angle of rotation! And this is a one-to-one correspondence between geometrical transformations and points! Thus, we can identify geometrical transformations with points in the complex plane. In pure algebra terms, what we’ve unveiled here is a natural bijection between $\mathbb R_+^* \times SO(2)$ and $\mathbb R^2-\{0\}$. This bijection is a homeomorphism! Plus, it is naturally extended to a homeomorphism $(\mathbb R_+^* \times SO(2)) \cup \{0\} \rightarrow \mathbb R^2$. Thus, we can identify both sets, and we call them both $\mathbb C$. Could we translate these coordinates back into classical Cartesian ones? Yes! But before doing that, let’s first look at which point of the complex plane the geometrical transformation $i$ is associated to: i in the complex plane So, $i$ is the point right above the origin? That’s funny… I know! What’s also particularly interesting is that we can now describe Euclidean planar geometry with complex numbers! Vectors in the Complex Plane To complete the construction of complex numbers, we need to associate any point in the complex plane with a vector. A vector? What the hell is that? A vector is a motion in the complex plane. This motion is often represented by an arrow from an initial point to a final point.
But what’s important to keep in mind is that the vector corresponds to the motion, not the arrow. Two arrows may correspond to the same motion even though they don’t start at the same initial points, as displayed by the arrows of same colors in the figure on the right. So how do we associate any point in the complex plane with a vector? Given a point in the complex plane, we can draw the arrow from the origin to that point. The vector associated to this arrow will then be the vector associated to the point in the complex plane. For instance, $i$ is associated to the arrow from $0$ to $i$, which corresponds to the green arrows in the figure on the right. I think I get it… But what’s the point in mapping points to vectors? We can now define the addition of complex numbers! How do we do that? By combining the motions associated to the vectors! For instance, combining the purple and green motions is a motion of one unit to the right and two units upwards. This is equivalent to the blue motion only! This means that $purple + green = blue$. And this can be visualized geometrically by having the purple and blue arrows starting at the same point, while the green arrow is put at the end of the purple arrow. The purple, green and blue arrows must then form a triangle, as done below: Addition of Vectors And since all vectors correspond to a complex number, we can now do additions of complex numbers by adding their corresponding vectors! Now here comes the key part. All vectors can be decomposed uniquely as a combination of $1$ and $i$. For instance, the purple vector can be obtained by a combination of the vector associated to $1$, and a vector associated to $i$. Thus, $purple = 1+i$. Similarly, the blue vector is a combination of $1$ and two times $i$. Hence, $blue = 1+2i$. So all vectors are a certain number of times $1$ plus a certain number of times $i$? Exactly! And since all vectors correspond to a complex number, all complex numbers can thus be written $a+bi$, where $a$ and $b$ are usual numbers. This decomposition enables simple computations of additions of complex numbers! Indeed, if you consider any two complex numbers $z_1$ and $z_2$, then we know by now that each can be decomposed as $z_1 = a_1+b_1i$ and $z_2 = a_2 + b_2 i$. The sum of $z_1$ and $z_2$ is then given by $z_1 + z_2 = (a_1+b_1i) + (a_2+b_2i) = (a_1+a_2) + (b_1+b_2)i$. I should also mention that each complex number $z$ is also associated with an operation “$+z$” on the points of the complex plane. Geometrically, this operation consists in a translation by the vector that $z$ corresponds to. In fact, this mapping of $z$ to “$+z$” is an isomorphism of Euclidean vector spaces between the space of complex numbers and the set of translations of the complex plane. This Cartesian description of complex numbers would have been quite useful if it hadn’t been for the more general approach of linear algebra to define vectors. In pure algebra terms, what we’ve done here is unveiling a natural isomorphism of Euclidean vector spaces between $\mathbb C$ and $\mathbb R^2$. In particular, this gives the space $(\mathbb C, +)$ the structure of a commutative group. This isomorphism is trivial for the classical construction of complex numbers, but it’s quite impressive in the construction of this article! Recall that we introduced $\mathbb C$ as combinations of homotheties and rotations! The Field of Complex Numbers Let’s sum up what we’ve discussed so far.
The Field of Complex Numbers

Let’s sum up what we’ve discussed so far. The awesomeness of complex numbers is that they can be identified with several different mathematical objects. They can be seen as combinations of homotheties and rotations of the complex plane, as points in the complex plane and as vectors in the complex plane. The first understanding of complex numbers describes the multiplication, while the third describes the addition. Thinking about each of the meanings of complex numbers separately is already quite mesmerizing, but the truly mind-blowing property of complex numbers occurs when we mix them!

What do you mean?

Let’s see what happens when we both have a multiplication and an addition! In particular, let’s focus on the simplest possible case, namely $(x+y) \times z$, where $x$, $y$ and $z$ are all complex numbers.

Humm… I don’t know where to start!

Well, the expression starts with the addition of $x$ and $y$…

OK… So to do the addition, we need to think of these complex numbers as vectors, right?

Exactly! Let’s draw $x$, $y$, and their sum $x+y$. But then, we need to multiply these terms by $z$. How do we do that?

I know! We need to think of $z$ as “$\times z$”, which is a combination of a rotation and a homothety!

Very good! This means that $xz$, $yz$ and $(x+y)z$ will be the images under the geometrical transformation “$\times z$” of $x$, $y$ and $x+y$. This is what’s drawn below:

Now, the magic occurs when we notice that any geometrical transformation $\times z$ preserves the shapes of triangles. As a result, the triangle $x$, $y$, $x+y$ gets transformed into the triangle $xz$, $yz$, $(x+y)z$. And this means that the sum of the sides $xz$ and $yz$ equals the last side $(x+y)z$! In other terms, $xz+yz=(x+y)z$. This is the essential distributivity property which binds the two operations we have defined on complex numbers! It says that the structure of complex numbers is much richer than the structures of geometrical operations and vectors alone.

I’m not sure I see what’s so great about that…

What’s awesome about that is that all the algebraic manipulations you could do with usual numbers still hold for complex numbers. In particular, identities like $(x+y)^2 = x^2+2xy+y^2$ are still valid for complex numbers! This strong resemblance of manipulations is what led mathematicians to call complex numbers… numbers. In pure algebra terms, we say that complex numbers form a field.

To recapitulate, a complex number is a rich mathematical object, which can be seen from diverse angles. Mainly, it can be seen as a combination of a homothety and a rotation, as a point in the complex plane or as a vector of dimension 2. These three interpretations are displayed below.

Complex Numbers

But what makes complex numbers so special isn’t the different angles through which they can be seen, but the combination of them all, in sort of the same way that quantum objects aren’t simply classical waves nor classical particles. Namely, the full nature of complex numbers is unveiled as they are considered as a field. In particular, it is to that field that the fundamental theorem of algebra applies.

What’s that theorem?

This theorem, which was first proven by Carl Friedrich Gauss, states that every nonconstant polynomial equation with complex coefficients has a complex solution. It’s as simple as that. This property is also known as the fact that the complex numbers form the algebraic closure of the real numbers.
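To see the theorem in action, here is a small sketch using NumPy (the sample polynomials are arbitrary choices): numpy.roots happily returns complex roots even when no real root exists.

```python
import numpy as np

# x^2 + 1 = 0 has no real solution, but two complex ones: i and -i.
print(np.roots([1, 0, 1]))

# A degree-5 polynomial always has exactly 5 complex roots,
# counted with multiplicity.
print(np.roots([1, -2, 0, 3, -1, 7]))
```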
In fact, as mentioned briefly in my article on the construction of numbers, the most insightful and natural (but also terribly abstract) way to construct complex numbers is precisely by defining them as the algebraic closure of real numbers.

In pure algebra terms, let’s denote $\mathbb R[X]$ the ring of polynomials with real coefficients. Then, it can be shown that $(X^2+1)\mathbb R[X]$ is a maximal ideal. The space of complex numbers is then defined as the quotient $\mathbb C = \mathbb R[X]/(X^2+1)\mathbb R[X]$, and it is thus a field. This is so beautiful that I nearly cried when I first saw it! (A small computational sketch of this construction is given at the end of this article.)

What’s the point of this theorem?

Many more equations can now be solved! And I’m not only talking about polynomial equations. More importantly, natural and simple solutions appear in differential equations, electromagnetism, eigenvalue problems, Fourier transforms and number theory, among many other fields. In particular, complex numbers have turned out to be the right structure to describe particle physics! Check my article on the dynamics of the wave function in quantum mechanics to see the complex numbers in action!
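And here is the promised sketch of the quotient construction (a bare-bones illustration: the pair (a, b) stands for the class of the polynomial a + bX):

```python
# Arithmetic in R[X]/(X^2+1): represent the class of a + bX by (a, b).
def mul_mod(p, q):
    a, b = p   # p = a + bX
    c, d = q   # q = c + dX
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = -1 here:
    return (a * c - b * d, a * d + b * c)

print(mul_mod((0, 1), (0, 1)))   # (-1, 0): the class of X squares to -1
print(mul_mod((1, 2), (3, 4)))   # (-5, 10)
print((1 + 2j) * (3 + 4j))       # (-5+10j): the very same multiplication law
```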
How Einstein and Schrödinger Conspired to Kill a Cat

The rise of fascism shaped Schrödinger’s cat fable.

By David Kaiser

Of all the bizarre facets of quantum theory, few seem stranger than those captured by Erwin Schrödinger’s famous fable about the cat that is neither alive nor dead. It describes a cat locked inside a windowless box, along with some radioactive material. If the radioactive material happens to decay, then a device releases a hammer, which smashes a vial of poison, which kills the cat. If no radioactivity is detected, the cat lives. Schrödinger dreamt up this gruesome scenario to mock what he considered a ludicrous feature of quantum theory. According to proponents of the theory, before anyone opened the box to check on the cat, the cat was neither alive nor dead; it existed in a strange, quintessentially quantum state of alive-and-dead.

Today, in our LOLcats-saturated world, Schrödinger’s strange little tale is often played for laughs, with a tone more zany than somber.1 It has also become the standard bearer for a host of quandaries in philosophy and physics. In Schrödinger’s own time, Niels Bohr and Werner Heisenberg proclaimed that hybrid states like the one the cat was supposed to be in were a fundamental feature of nature. Others, like Einstein, insisted that nature must choose: alive or dead, but not both.

Although Schrödinger’s cat flourishes as a meme to this day, discussions tend to overlook one key dimension of the fable: the environment in which Schrödinger conceived it in the first place. It’s no coincidence that, in the face of a looming World War, genocide, and the dismantling of German intellectual life, Schrödinger’s thoughts turned to poison, death, and destruction. Schrödinger’s cat, then, should remind us of more than the beguiling strangeness of quantum mechanics. It also reminds us that scientists are, like the rest of us, humans who feel—and fear.

the death of knowledge: The disturbing and violent events taking place in Europe in Nazi Germany in the 1930s, including book burnings like this one, impacted all levels of life at the time—right down to what sorts of metaphors scientists used to describe their work. U.S. National Archives

Schrödinger crafted his cat scenario during the summer of 1935, in close dialogue with Albert Einstein. The two had solidified their friendship in the late 1920s, when they were both living in Berlin. By that time, Einstein’s theory of relativity had catapulted him to worldwide fame. His schedule became punctuated with earthly concerns—League of Nations committee meetings, stumping for Zionist causes—alongside his scientific pursuits. Schrödinger, a dapper Austrian, had been elevated to a professorship at the University of Berlin in 1927, just one year after introducing his wave equation for quantum mechanics (now known simply as the Schrödinger equation). Together they enjoyed raucous Viennese sausage parties—the Wiener Würstelabende bashes that Schrödinger hosted at his house—and sailing on the lake near Einstein’s summer home.

Too soon, their good-natured gatherings came to a halt. Hitler assumed the chancellorship of Germany in January 1933. At the time, Einstein was visiting colleagues in Pasadena, California. While he was away, Nazis raided his Berlin apartment and summer house and froze his bank account.
Einstein resigned from the Prussian Academy of Sciences and quickly made arrangements to settle in Princeton, New Jersey, as one of the first members of the brand-new Institute for Advanced Study.

Meanwhile, Schrödinger—who was not Jewish and had kept a lower profile, politically, than Einstein—watched in horror that spring as the Nazis staged massive book-burning rallies and extended race-based restrictions to university instructors. Schrödinger accepted a fellowship at the University of Oxford and left Berlin that summer. (He later settled in Dublin.) In August, he wrote to Einstein from the road, “Unfortunately (like most of us) I have not had enough nervous peace in recent months to work seriously at anything.”2 Before too long their exchanges picked up again, their once-leisurely strolls now replaced by trans-Atlantic post.

Prior to the dramatic disruptions of 1933, both physicists had made enormous contributions to quantum theory; indeed, both earned their Nobel Prizes for their work on the subject. Yet both had grown disillusioned with their colleagues’ efforts to make sense of the equations. Armed with paper and postage stamps, they dove back into their intense discussions.3, 4

In May 1935, Einstein published a paper with two younger colleagues at the Institute for Advanced Study, Boris Podolsky and Nathan Rosen, charging that quantum mechanics was incomplete. There existed “elements of reality,” they wrote—definite quantities or properties of physical objects—for which quantum theory provided only probabilities.5 In early June Schrödinger wrote to congratulate his friend on the latest paper, lauding Einstein for having “publicly called the dogmatic quantum mechanics to account over those things that we used to discuss so much in Berlin.” Ten days later Einstein responded, venting to Schrödinger that “the epistemology-soaked orgy ought to come to an end”—an “orgy” they each associated with Niels Bohr and his younger acolytes like Werner Heisenberg, who argued that quantum mechanics completely described a nature that was, itself, probabilistic.6

This produced the first stirrings of the soon-to-be-born cat. In a follow-up letter to Schrödinger, Einstein asked his friend to imagine a ball that had been placed in one of two identical, closed boxes. Prior to opening either box, the probability of finding the ball in the first box would be 50 percent. Einstein doubted that this was a complete description, and believed that a proper theory of the atomic domain should be able to calculate a definite value. Calculating only probabilities, to Einstein, meant stopping short.

Encouraged by Schrödinger’s enthusiastic reply, Einstein pushed his ball-in-box analogy even further. What if the small-scale processes that physicists were used to talking about were amplified to human sizes? Writing to Schrödinger in early August, Einstein laid out a new scenario: Imagine a charge of gunpowder that was intrinsically unstable, as likely as not to explode over the course of a year. “In principle this can quite easily be represented quantum-mechanically,” he wrote.
Whereas solutions to Schrödinger’s own equation might look sensible at early times, “after the course of a year this is no longer the case at all. Rather, the ψ-function”—the wavefunction that Schrödinger had introduced into quantum theory back in 1926—“then describes a sort of blend of not-yet and of already-exploded systems.” Not even Bohr, Einstein crowed in his letter, should accept such nonsense, for “in reality there is just no intermediary between exploded and not-exploded.”7 Nature must choose between such alternatives, Einstein insisted, and so, therefore, should the physicist.

Einstein could have reached for many different examples of large-scale effects with which to criticize a quantum-probabilistic description. His particular choice—the unmistakable damage caused by exploding caches of gunpowder—likely reflected the worsening situation in Europe. As early as April 1933, he had written to another colleague to describe his view of how “pathological demagogues” like Hitler had come to power, pausing to note that “I am sure you know how firmly convinced I am of the causality of all events”—quantum and political alike. Later that year he lectured to a packed auditorium in London about “the stark lightning flashes of these tempestuous times.” To a different colleague he observed with horror that “the Germans are secretly rearming on a large scale. Factories are running day and night (airplanes, light bombs, tanks, and heavy ordnance)”—so many explosive charges ready to explode. In 1935, he publicly renounced his own prior commitment to pacifism.8

Perhaps inspired by their latest exchange, Schrödinger began writing a long essay of his own, on “The present situation in quantum mechanics.” A week and a half after receiving Einstein’s letter about the exploding gunpowder, Schrödinger replied with a novel twist. In place of gunpowder, there was now a cat. “Confined in a steel chamber is a Geiger counter prepared with a tiny amount of uranium,” Schrödinger wrote to his friend, “so small that in the next hour it is just as probable to expect one atomic decay as none. An amplified relay provides that the first atomic decay shatters a small bottle of prussic acid. This and—cruelly—a cat is also trapped in the steel chamber.” Just as in Einstein’s example, Schrödinger imagined the appointed time elapsing. Then, according to quantum mechanics, “the living and dead cat are smeared out in equal measure.”

Einstein was delighted. “Your cat shows that we are in complete agreement,” he wrote in early September. “A ψ-function that contains the living as well as the dead cat just cannot be taken as a description of the real state of affairs.”9

A few months after Einstein’s September letter, Schrödinger’s now-famous cat example appeared, with nearly identical wording, in the magazine Die Naturwissenschaften.10 But it almost didn’t make it into print. Days after he submitted his draft to the magazine, the founding editor—a Jewish physicist named Arnold Berliner—was fired. Schrödinger thought about retracting the essay in protest, but relented only after Berliner himself interceded.11 Schrödinger’s thoughts that summer were preoccupied with more than just concerns about Berliner’s mistreatment.
Schrödinger had made no secret of his distaste for the Nazi regime, and had become downright fatalistic when forced to flee Berlin, musing in his diary, “might it not be the case that I have already learnt enough of this world. And that I am prepared …” Months after arriving in Oxford, a visiting friend noted how unhappy he was, the pressures of displacement compounding the dismal, daily news. In May 1935—just as the Einstein, Podolsky, Rosen paper appeared in print—Schrödinger delivered a 20-minute lecture on BBC radio on “Equality and Relativity of Freedom,” recalling the many times throughout history in which “gallows and stake, sword and cannons have served to free respectable people” from political repression.12

Against the drumbeat of advancing fascism, little wonder that talk of balls in boxes morphed so quickly into explosions, poisons, and morbid calculations of life and death.

While his essay was in press, Schrödinger wrote to Bohr, trying again to discern how Bohr and the others could make peace with the bizarre features of quantum mechanics. As with Einstein, Schrödinger longed to discuss such matters with Bohr in person, “but the times are now little suited for pleasure trips.” Larger questions loomed. Schrödinger wrote of his “wish once again to be somewhere permanently, that is, to know with considerable probability what one is to do for the next 5 or 10 years.”13 Living only with probabilities had taken its toll.

Yet still Europe sank deeper into darkness. Just a few years after Schrödinger introduced his fable about the quantum cat and prussic acid, Nazi engineers began using the self-same poison—under the trademarked name, “Zyklon B”—in their brutally efficient gas chambers. In March 1942, just before his scheduled deportation to a concentration camp, Schrödinger’s former editor from Die Naturwissenschaften, Arnold Berliner, killed himself—choosing, in the end, a terrible certainty.14

In time, the challenge that Schrödinger thought would undercut quantum mechanics became, instead, one of the most familiar tropes for teaching students about the theory. A central tenet of quantum mechanics is that particles can exist in “superposition” states, partaking of two opposite properties simultaneously. Whereas we often face “either-or” decisions in our everyday lives, nature—at least as described by quantum theory—can adopt “both-and.” Over the decades, physicists have managed to create all manner of Schrödinger-cat states in the laboratory, coaxing microscopic bits of matter into “both-and” superpositions and probing their properties. Despite Schrödinger’s reservations, every single test has been consistent with the predictions from quantum mechanics. In one recent example, colleagues and I demonstrated that neutrinos—subatomic particles that interact very weakly with ordinary matter—can travel hundreds of miles in such cat-like states.15

There is a double irony, then, to Schrödinger’s tale of his twice-fated cat. First, although Schrödinger’s cat remains well known within (and beyond) physics classrooms, few recall that Schrödinger introduced his fable to criticize quantum mechanics rather than elucidate it. Second, and even more telling: Schrödinger’s cat served, in its day, as synecdoche for a broader world that had become too strange—and, at times, too threatening—to understand.

David Kaiser is Germeshausen Professor in MIT’s Program in Science, Technology, and Society, and also a professor in MIT’s Department of Physics.
1. Crease, R.P. & Goldhaber, A. The Quantum Moment. W.W. Norton & Company, New York, NY (2014).
2. Moore, W. Schrödinger: Life and Thought. Cambridge University Press, New York, NY (1989).
3. Fine, A. The Shaky Game: Einstein, Realism, and the Quantum Theory. University of Chicago Press, Chicago (1986).
4. Kaiser, D. Bringing the human actors back onstage: The personal context of the Einstein-Bohr debate. British Journal for the History of Science 27, 129-152 (1994).
5. Einstein, A., Podolsky, B., & Rosen, N. Can quantum-mechanical description of physical reality be considered complete? Physical Review 47, 777-780 (1935).
6. Fine, A. The Shaky Game: Einstein, Realism, and the Quantum Theory. University of Chicago Press, Chicago (1986). Letters from Schrödinger to Einstein, dated 7 June 1935, and from Einstein to Schrödinger, dated 17 June 1935.
7. Fine, A. The Shaky Game: Einstein, Realism, and the Quantum Theory. University of Chicago Press, Chicago (1986). Letter from Einstein to Schrödinger, dated 8 August 1935.
8. Rowe, D.E. & Schulmann, R. Einstein on Politics. Princeton University Press, Princeton, NJ (2007).
9. Fine, A. The Shaky Game: Einstein, Realism, and the Quantum Theory. University of Chicago Press, Chicago (1986). Letters from Schrödinger to Einstein, dated 19 August 1935, and from Einstein to Schrödinger, dated 4 September 1935.
10. Schrödinger, E. Die gegenwärtige Situation in der Quantenmechanik [The present situation in quantum mechanics]. Die Naturwissenschaften 23, 807-849 (1935).
11. Dr. Arnold Berliner and Die Naturwissenschaften. Nature 136, 506 (1935).
12. Moore, W. Schrödinger: Life and Thought. Cambridge University Press, New York, NY (1989). Quotations from Schrödinger’s diary and 1935 BBC address.
13. Moore, W. Schrödinger: Life and Thought. Cambridge University Press, New York, NY (1989). Letter from Schrödinger to Bohr, dated 13 October 1935.
14. Ewald, P.P. & Born, M. Dr. Arnold Berliner (obituary). Nature 150, 284 (1942).
15. Formaggio, J.A., Kaiser, D.I., Murskyj, M.M., & Weiss, T.E. Violation of the Leggett-Garg Inequality in neutrino oscillations. Physical Review Letters 117, 050402 (2016).
Legendre polynomials

The six first Legendre polynomials.

In physical science and mathematics, Legendre polynomials (named after Adrien-Marie Legendre, who discovered them in 1782) are a system of complete and orthogonal polynomials, with a vast number of mathematically beautiful properties, and innumerable applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications.

Closely related to the Legendre polynomials are the associated Legendre polynomials, the Legendre functions of the second kind $Q_n(x)$, discussed below, and the associated Legendre functions. For each of these see the separate Wikipedia articles.

Definition by Construction as an Orthogonal System

In this approach, the polynomials are defined as an orthogonal system with respect to the weight function $w(x) = 1$ over the interval $[-1, 1]$, i.e., $P_n(x)$ is a polynomial of degree $n$, such that

$$\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = 0 \quad \text{if } n \ne m.$$

This determines the polynomials completely up to an overall scale factor, which is fixed by the standardization $P_n(1) = 1$. That this is a constructive definition is seen thus: $P_0(x) = 1$ is the only correctly standardized polynomial of degree 0. $P_1(x)$ must be orthogonal to $P_0$, leading to $P_1(x) = x$; $P_2(x)$ is determined by demanding orthogonality to $P_0$ and $P_1$, and so on. $P_n$ is fixed by demanding orthogonality to all $P_m$ with $m < n$. This gives $n$ conditions, which, along with the standardization $P_n(1) = 1$, fixes all $n+1$ coefficients in $P_n(x)$. With work, all the coefficients of every polynomial can be systematically determined, leading to the explicit representation in powers of $x$ given below.

This definition of the $P_n$'s is the simplest one. It does not appeal to the theory of differential equations. Second, the completeness of the polynomials follows immediately from the completeness of the powers $1, x, x^2, x^3, \ldots$ Finally, by defining them via orthogonality with respect to the most obvious weight function on a finite interval, it sets up the Legendre polynomials as one of the three classical orthogonal polynomial systems. The other two are the Laguerre polynomials, which are orthogonal over the half line $[0, \infty)$, and the Hermite polynomials, orthogonal over the full line $(-\infty, \infty)$, with weight functions that are the most natural analytic functions that ensure convergence of all integrals.

Definition via Generating Function

The Legendre polynomials can also be defined as the coefficients in a formal expansion in powers of $t$ of the generating function[1]

$$\frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} P_n(x)\, t^n. \qquad (2)$$

The coefficient of $t^n$ is a polynomial in $x$ of degree $n$. Expanding up to $t^1$ gives

$$P_0(x) = 1, \qquad P_1(x) = x.$$

Expansion to higher orders gets increasingly cumbersome, but is possible to do systematically, and again leads to one of the explicit forms given below.

It is possible to obtain the higher $P_n$'s without resorting to direct expansion of the Taylor series, however. Eq. 2 is differentiated with respect to $t$ on both sides and rearranged to obtain

$$\frac{x - t}{\sqrt{1 - 2xt + t^2}} = \left(1 - 2xt + t^2\right) \sum_{n=1}^{\infty} n\, P_n(x)\, t^{n-1}.$$

Replacing the quotient of the square root with its definition in Eq. 2, and equating the coefficients of powers of $t$ in the resulting expansion, gives Bonnet’s recursion formula

$$(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x).$$

This relation, along with the first two polynomials $P_0$ and $P_1$, allows all the rest to be generated recursively. The generating function approach is directly connected to the multipole expansion in electrostatics, as explained below, and is how the polynomials were first defined by Legendre in 1782.
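Bonnet’s recursion translates directly into code. Here is a short sketch in Python/NumPy (an illustration, not an optimized library routine):

```python
import numpy as np

def legendre(n, x):
    """Evaluate P_n(x) from Bonnet's recursion:
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, with P_0 = 1, P_1 = x."""
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x = np.linspace(-1.0, 1.0, 5)
print(legendre(3, x))              # matches P_3(x) = (5x^3 - 3x)/2
print((5 * x**3 - 3 * x) / 2)
print(legendre(6, np.array(1.0)))  # standardization: P_n(1) = 1
```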
Definition via Differential Equation

A third definition is in terms of solutions to Legendre's differential equation

$$\frac{d}{dx}\left[\left(1 - x^2\right) \frac{d}{dx} P_n(x)\right] + n(n+1)\, P_n(x) = 0. \qquad (1)$$

This differential equation has regular singular points at $x = \pm 1$, so if a solution is sought using the standard Frobenius or power series method, a series about the origin will only converge for $|x| < 1$ in general. When $n$ is an integer, the solution $P_n(x)$ that is regular at $x = 1$ is also regular at $x = -1$, and the series for this solution terminates (i.e. it is a polynomial).

The orthogonality and completeness of these solutions is best seen from the viewpoint of Sturm–Liouville theory. We rewrite the differential equation as an eigenvalue problem,

$$\frac{d}{dx}\left[\left(1 - x^2\right) \frac{d}{dx}\right] P(x) = -\lambda\, P(x),$$

with the eigenvalue $\lambda$ in lieu of $n(n+1)$. If we demand that the solution be regular at $x = \pm 1$, the differential operator on the left is Hermitean. The eigenvalues are found to be of the form $n(n+1)$, with $n = 0, 1, 2, \ldots$, and the eigenfunctions are the $P_n(x)$. The orthogonality and completeness of this set of solutions follows at once from the larger framework of Sturm–Liouville theory.

The differential equation admits another, non-polynomial solution, the Legendre functions of the second kind $Q_n$, discussed below. A two-parameter generalization of (Eq. 1) is called Legendre's general differential equation, solved by the associated Legendre polynomials. Legendre functions are solutions of Legendre's differential equation (generalized or not) with non-integer parameters.

In physical settings, Legendre's differential equation arises naturally whenever one solves Laplace's equation (and related partial differential equations) by separation of variables in spherical coordinates. From this standpoint, the eigenfunctions of the angular part of the Laplacian operator are the spherical harmonics, of which the Legendre polynomials are (up to a multiplicative constant) the subset that is left invariant by rotations about the polar axis. The polynomials appear as $P_n(\cos\theta)$, where $\theta$ is the polar angle. This approach to the Legendre polynomials provides a deep connection to rotational symmetry. Many of their properties which are found laboriously through the methods of analysis—for example the addition theorem—are more easily found using the methods of symmetry and group theory, and acquire profound physical and geometrical meaning.

Orthonormality and Completeness

The standardization $P_n(1) = 1$ fixes the normalization of the Legendre polynomials (with respect to the $L^2$ norm on the interval $-1 \le x \le 1$). Since they are also orthogonal with respect to the same norm, the two statements can be combined into the single equation

$$\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = \frac{2}{2n+1}\, \delta_{mn}$$

(where $\delta_{mn}$ denotes the Kronecker delta, equal to 1 if $m = n$ and to 0 otherwise). This normalization is most readily found by employing Rodrigues' formula, given below.

That the polynomials are complete means the following. Given any piecewise continuous function $f(x)$ with finitely many discontinuities in the interval $[-1, 1]$, the sequence of sums

$$f_n(x) = \sum_{\ell=0}^{n} a_\ell\, P_\ell(x)$$

converges in the mean to $f(x)$ as $n \to \infty$, provided we take

$$a_\ell = \frac{2\ell + 1}{2} \int_{-1}^{1} f(x)\, P_\ell(x)\, dx.$$

This completeness property underlies all the expansions discussed in this article, and is often stated in the form

$$\sum_{\ell=0}^{\infty} \frac{2\ell + 1}{2}\, P_\ell(x)\, P_\ell(y) = \delta(x - y),$$

with $-1 \le x \le 1$ and $-1 \le y \le 1$.

Rodrigues' Formula and Other Explicit Formulas

An especially compact expression for the Legendre polynomials is given by Rodrigues' formula:

$$P_n(x) = \frac{1}{2^n\, n!}\, \frac{d^n}{dx^n}\left(x^2 - 1\right)^n.$$

This formula enables derivation of a large number of properties of the $P_n$'s.
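As a quick numerical cross-check of the orthonormality relation above (a sketch; the 20-point quadrature is an arbitrary but sufficient choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = L.leggauss(20)   # 20-point Gauss-Legendre rule: exact for degree <= 39

def P(n, x):
    return L.legval(x, [0] * n + [1])   # coefficient vector selecting P_n

for m, n in [(3, 3), (4, 4), (3, 4), (2, 5)]:
    integral = np.sum(w * P(m, x) * P(n, x))
    expected = 2.0 / (2 * n + 1) if m == n else 0.0
    print(m, n, round(integral, 12), expected)   # 2/(2n+1) on the diagonal
```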
Among the properties that follow from Rodrigues' formula are explicit representations such as

$$P_n(x) = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k}^{2} (x-1)^{n-k} (x+1)^{k},$$

$$P_n(x) = \sum_{k=0}^{n} \binom{n}{k} \binom{n+k}{k} \left(\frac{x-1}{2}\right)^{k},$$

$$P_n(x) = \frac{1}{2^n} \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k \binom{n}{k} \binom{2n-2k}{n}\, x^{n-2k},$$

where the last, which is also immediate from the recursion formula, expresses the Legendre polynomials by simple monomials and involves the multiplicative formula of the binomial coefficient.

The first few Legendre polynomials are:

$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}\left(3x^2 - 1\right), \quad P_3(x) = \tfrac{1}{2}\left(5x^3 - 3x\right),$$
$$P_4(x) = \tfrac{1}{8}\left(35x^4 - 30x^2 + 3\right), \quad P_5(x) = \tfrac{1}{8}\left(63x^5 - 70x^3 + 15x\right).$$

Applications of Legendre Polynomials

Expanding a 1/r Potential

The Legendre polynomials were first introduced in 1782 by Adrien-Marie Legendre[2] as the coefficients in the expansion of the Newtonian potential

$$\frac{1}{\left|\mathbf{x} - \mathbf{x}'\right|} = \frac{1}{\sqrt{r^2 + r'^2 - 2 r r' \cos\gamma}} = \sum_{\ell=0}^{\infty} \frac{r'^{\ell}}{r^{\ell+1}}\, P_\ell(\cos\gamma),$$

where $r$ and $r'$ are the lengths of the vectors $\mathbf{x}$ and $\mathbf{x}'$ respectively and $\gamma$ is the angle between those two vectors. The series converges when $r > r'$. The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge. The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution.

Legendre polynomials occur in the solution of Laplace's equation of the static potential, $\nabla^2 \Phi(\mathbf{x}) = 0$, in a charge-free region of space, using the method of separation of variables, where the boundary conditions have axial symmetry (no dependence on an azimuthal angle). Where $\hat{\mathbf{z}}$ is the axis of symmetry and $\theta$ is the angle between the position of the observer and the $\hat{\mathbf{z}}$ axis (the zenith angle), the solution for the potential will be

$$\Phi(r, \theta) = \sum_{\ell=0}^{\infty} \left[A_\ell\, r^{\ell} + B_\ell\, r^{-(\ell+1)}\right] P_\ell(\cos\theta).$$

$A_\ell$ and $B_\ell$ are to be determined according to the boundary condition of each problem.[3] They also appear when solving the Schrödinger equation in three dimensions for a central force.

Legendre Polynomials in Multipole Expansions

As an example, the electric potential $\Phi(r, \theta)$ (in spherical coordinates) due to a point charge located on the z-axis at $z = a$ varies as

$$\Phi(r, \theta) \propto \frac{1}{\sqrt{r^2 + a^2 - 2 a r \cos\theta}} = \sum_{k=0}^{\infty} \frac{a^{k}}{r^{k+1}}\, P_k(\cos\theta) \qquad (r > a).$$

Legendre Polynomials in Trigonometry

The trigonometric functions $\cos n\theta$, also denoted as the Chebyshev polynomials $T_n(\cos\theta) \equiv \cos n\theta$, can also be multipole expanded by the Legendre polynomials $P_n(\cos\theta)$. There is a similar Legendre expansion for $\sin\left((n+1)\theta\right)$.

Additional Properties of Legendre Polynomials

Legendre polynomials have definite parity. That is, they are symmetric or antisymmetric,[4] according to

$$P_n(-x) = (-1)^n\, P_n(x).$$

Another useful property is

$$\int_{-1}^{1} P_n(x)\, dx = 0 \quad \text{for } n \ge 1,$$

which follows from considering the orthogonality relation with $P_0(x) = 1$. It is convenient when a Legendre series $\sum_i a_i P_i$ is used to approximate a function or experimental data: the average of the series over the interval $[-1, 1]$ is simply given by the leading expansion coefficient $a_0$.

Since the differential equation and the orthogonality property are independent of scaling, the Legendre polynomials' definitions are "standardized" (sometimes called "normalization", but note that the actual norm is not 1) by being scaled so that

$$P_n(1) = 1.$$

The derivative at the end point is given by

$$P_n'(1) = \frac{n(n+1)}{2}.$$

The Askey–Gasper inequality for Legendre polynomials reads

$$\sum_{j=0}^{n} P_j(x) \ge 0 \quad \text{for } x \ge -1.$$

The Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using

$$P_\ell\left(\hat{\mathbf{r}} \cdot \hat{\mathbf{r}}'\right) = \frac{4\pi}{2\ell + 1} \sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta, \varphi)\, Y_{\ell m}^{*}(\theta', \varphi'),$$

where the unit vectors $\hat{\mathbf{r}}$ and $\hat{\mathbf{r}}'$ have spherical coordinates $(\theta, \varphi)$ and $(\theta', \varphi')$, respectively.

Recursion Relations

As discussed above, the Legendre polynomials obey the three-term recurrence relation known as Bonnet’s recursion formula

$$(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x)$$

and

$$\frac{x^2 - 1}{n}\, \frac{d}{dx} P_n(x) = x\, P_n(x) - P_{n-1}(x).$$

Useful for the integration of Legendre polynomials is

$$(2n+1)\, P_n(x) = \frac{d}{dx}\left[P_{n+1}(x) - P_{n-1}(x)\right].$$

From the above one can see also that

$$\frac{d}{dx} P_{n+1}(x) = (2n+1)\, P_n(x) + \left(2(n-2)+1\right) P_{n-2}(x) + \left(2(n-4)+1\right) P_{n-4}(x) + \cdots,$$

or equivalently

$$\frac{d}{dx} P_{n+1}(x) = \frac{2\, P_n(x)}{\|P_n\|^2} + \frac{2\, P_{n-2}(x)}{\|P_{n-2}\|^2} + \cdots,$$

where $\|P_n\|$ is the norm over the interval $-1 \le x \le 1$,

$$\|P_n\| = \sqrt{\int_{-1}^{1} P_n(x)^2\, dx} = \sqrt{\frac{2}{2n+1}}.$$

Asymptotically, for $\ell \to \infty$, the Legendre polynomials behave like Bessel functions:[5] in the Mehler–Heine limit,

$$\lim_{\ell \to \infty} P_\ell\!\left(\cos\frac{z}{\ell}\right) = J_0(z),$$

and for arguments of magnitude greater than 1,

$$\lim_{\ell \to \infty} P_\ell\!\left(\cosh\frac{z}{\ell}\right) = I_0(z),$$

where $J_0$ and $I_0$ are Bessel functions.

All $n$ zeros of $P_n(x)$ are real, distinct from each other, and lie in the interval $(-1, 1)$.
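These statements about the zeros are easy to verify numerically (a small sketch; the degrees 5 and 6 are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

z5 = np.sort(Legendre.basis(5).roots())   # the 5 zeros of P_5
z6 = np.sort(Legendre.basis(6).roots())   # the 6 zeros of P_6

print(z5)                          # all real, all inside (-1, 1)
print(np.all(np.abs(z5) < 1))      # True
print(np.allclose(z5, -z5[::-1]))  # parity: zeros come in +/- pairs
# Interlacing: consecutive zeros of P_6 bracket exactly one zero of P_5.
print(np.all((z6[:-1] < z5) & (z5 < z6[1:])))   # True
```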
Further, if we regard the zeros of $P_n(x)$ as dividing the interval $[-1, 1]$ into $n+1$ subintervals, each subinterval will contain exactly one zero of $P_{n+1}$. This is known as the interlacing property. Because of the parity property it is evident that if $x_k$ is a zero of $P_n(x)$, so is $-x_k$.

These zeros play an important role in numerical integration based on Gaussian quadrature. The specific quadrature based on the $P_n$'s is known as Gauss–Legendre quadrature.

Legendre Polynomials with Transformed Argument

Shifted Legendre Polynomials

The shifted Legendre polynomials are defined as

$$\tilde{P}_n(x) = P_n(2x - 1).$$

Here the "shifting" function $x \mapsto 2x - 1$ is an affine transformation that bijectively maps the interval $[0, 1]$ to the interval $[-1, 1]$, implying that the polynomials $\tilde{P}_n(x)$ are orthogonal on $[0, 1]$:

$$\int_{0}^{1} \tilde{P}_m(x)\, \tilde{P}_n(x)\, dx = \frac{1}{2n+1}\, \delta_{mn}.$$

An explicit expression for the shifted Legendre polynomials is given by

$$\tilde{P}_n(x) = (-1)^n \sum_{k=0}^{n} \binom{n}{k} \binom{n+k}{k} (-x)^{k}.$$

The analogue of Rodrigues' formula for the shifted Legendre polynomials is

$$\tilde{P}_n(x) = \frac{1}{n!}\, \frac{d^n}{dx^n}\left(x^2 - x\right)^n.$$

The first few shifted Legendre polynomials are:

$$\tilde{P}_0(x) = 1, \quad \tilde{P}_1(x) = 2x - 1, \quad \tilde{P}_2(x) = 6x^2 - 6x + 1, \quad \tilde{P}_3(x) = 20x^3 - 30x^2 + 12x - 1.$$

Legendre Rational Functions

The Legendre rational functions are a sequence of orthogonal functions on $[0, \infty)$. They are obtained by composing the Cayley transform with Legendre polynomials. A rational Legendre function of degree $n$ is defined as

$$R_n(x) = \frac{\sqrt{2}}{x+1}\, P_n\!\left(\frac{x-1}{x+1}\right).$$

They are eigenfunctions of a singular Sturm–Liouville problem, with eigenvalues $\lambda_n = n(n+1)$.

Legendre Functions of the Second Kind ($Q_n$)

As well as polynomial solutions, the Legendre equation has non-polynomial solutions represented by infinite series. These are the Legendre functions of the second kind, denoted by $Q_n(x)$. The differential equation has the general solution

$$y(x) = A\, P_n(x) + B\, Q_n(x),$$

where $A$ and $B$ are constants.

References

1. Arfken & Weber 2005, p. 743.
2. Legendre, A.-M. (1785) [1782]. "Recherches sur l'attraction des sphéroïdes homogènes" [Researches on the attraction of homogeneous spheroids]. Mémoires de Mathématiques et de Physique, présentés à l'Académie Royale des Sciences, par divers savans, et lus dans ses Assemblées (in French). X. Paris. pp. 411–435.
3. Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). Wiley & Sons. p. 103. ISBN 978-0-471-30932-1.
4. Arfken & Weber 2005, p. 753.
5. Szegő, G. (1975). Orthogonal Polynomials (4th ed.). Providence: American Mathematical Society. p. 194 (Theorem 8.21.2). ISBN 0821810235. OCLC 1683237.
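As a brief addendum (a sketch; NumPy provides the nodes and weights directly), the Gauss–Legendre quadrature discussed above uses the zeros of $P_n$ as integration nodes:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

x, w = leggauss(5)   # nodes = the 5 zeros of P_5, plus matching weights

# Exact for polynomials of degree <= 2*5 - 1 = 9 on [-1, 1]:
print(np.sum(w * x**8), 2 / 9)              # both equal 2/9

# Non-polynomial integrands are still approximated very well:
print(np.sum(w * np.cos(x)), 2 * np.sin(1))
```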
Dark state

In atomic physics, a dark state refers to a state of an atom or molecule that cannot absorb (or emit) photons. All atoms and molecules are described by quantum states; different states can have different energies, and a system can make a transition from one energy level to another by emitting or absorbing one or more photons. However, not all transitions between arbitrary states are allowed. A state that cannot absorb an incident photon is called a dark state. This can occur in experiments using laser light to induce transitions between energy levels, when atoms can spontaneously decay into a state that is not coupled to any other level by the laser light, preventing the atom from absorbing or emitting light from that state.

A dark state can also be the result of quantum interference in a three-level system, when an atom is in a coherent superposition of two states, both of which are coupled by lasers at the right frequency to a third state. With the system in a particular superposition of the two states, the system can be made dark to both lasers as the probability of absorbing a photon goes to 0.

Two-level systems

In practice

Experiments in atomic physics are often done with a laser of a specific frequency (meaning the photons have a specific energy), so they only couple one set of states with a particular energy $E_1$ to another set of states with an energy $E_2$. However, the atom can still decay spontaneously into a third state by emitting a photon of a different frequency. The new state of the atom, with energy $E_3$, no longer interacts with the laser simply because no photons of the right frequency are present to induce a transition to a different level. In practice, the term dark state is often used for a state that is not accessible by the specific laser in use even though transitions from this state are in principle allowed.

In theory

Whether or not we say a transition between a state $|1\rangle$ and a state $|2\rangle$ is allowed often depends on how detailed the model is that we use for the atom-light interaction. From a particular model follows a set of selection rules that determine which transitions are allowed and which are not. Often these selection rules can be boiled down to conservation of angular momentum (the photon has angular momentum). In most cases we only consider an atom interacting with the electric dipole field of the photon. Then some transitions are not allowed at all; others are only allowed for photons of a certain polarization.

Consider for example the hydrogen atom. A transition between two states that both have $m_j = -1/2$ (so that $\Delta m_j = 0$) is only allowed for light with polarization along the z axis (the quantization axis) of the atom. The state with $m_j = -1/2$ therefore appears dark for light of other polarizations. Transitions from the 2S level to the 1S level are not allowed at all: the 2S state cannot decay to the ground state by emitting a single photon. It can only decay by collisions with other atoms or by emitting multiple photons. Since these events are rare, the atom can remain in this excited state for a very long time; such an excited state is called a metastable state.

Three-level systems

A three-state Λ-type system

We start with a three-state Λ-type system, where the transitions $|1\rangle \leftrightarrow |3\rangle$ and $|2\rangle \leftrightarrow |3\rangle$ are dipole-allowed and $|1\rangle \leftrightarrow |2\rangle$ is forbidden.
In the rotating wave approximation, the semi-classical Hamiltonian is given by

$$H = H_0 + H_1,$$

with

$$H_0 = \hbar\omega_1\, |1\rangle\langle 1| + \hbar\omega_2\, |2\rangle\langle 2| + \hbar\omega_3\, |3\rangle\langle 3|,$$

$$H_1 = -\frac{\hbar}{2}\left(\Omega_p\, e^{-i\omega_p t}\, |3\rangle\langle 1| + \Omega_c\, e^{-i\omega_c t}\, |3\rangle\langle 2|\right) + \text{H.c.},$$

where $\Omega_p$ and $\Omega_c$ are the Rabi frequencies of the probe field (of frequency $\omega_p$) and the coupling field (of frequency $\omega_c$) in resonance with the transition frequencies $\omega_3 - \omega_1$ and $\omega_3 - \omega_2$, respectively, and H.c. stands for the Hermitian conjugate of the entire expression. We will write the atomic wave function as

$$|\psi(t)\rangle = c_1(t)\, e^{-i\omega_1 t}\, |1\rangle + c_2(t)\, e^{-i\omega_2 t}\, |2\rangle + c_3(t)\, e^{-i\omega_3 t}\, |3\rangle.$$

Solving the Schrödinger equation $i\hbar\, \partial_t |\psi\rangle = H\, |\psi\rangle$, we obtain the equations of motion for the amplitudes,

$$\dot{c}_1 = \frac{i}{2}\, \Omega_p\, c_3, \qquad \dot{c}_2 = \frac{i}{2}\, \Omega_c\, c_3, \qquad \dot{c}_3 = \frac{i}{2}\left(\Omega_p\, c_1 + \Omega_c\, c_2\right).$$

Using the initial condition $c_3(0) = 0$, we can solve these equations to obtain

$$c_3(t) = i\, \frac{\Omega_p\, c_1(0) + \Omega_c\, c_2(0)}{\Omega}\, \sin\frac{\Omega t}{2},$$

with $\Omega = \sqrt{\Omega_p^2 + \Omega_c^2}$, while the combination $\Omega_c\, c_1 - \Omega_p\, c_2$ remains constant in time. We observe that we can choose the initial conditions

$$c_1(0) = \frac{\Omega_c}{\Omega}, \qquad c_2(0) = -\frac{\Omega_p}{\Omega}, \qquad c_3(0) = 0,$$

which gives a time-independent solution to these equations with no probability of the system being in state $|3\rangle$.[1] This state can also be expressed in terms of a mixing angle $\theta$, with $\tan\theta = \Omega_p / \Omega_c$, as

$$|D\rangle = \cos\theta\, |1\rangle - \sin\theta\, |2\rangle.$$

This means that when the atoms are in this state, they will stay in this state indefinitely. This is a dark state, because it cannot absorb or emit any photons from the applied fields. It is, therefore, effectively transparent to the probe laser, even when the laser is exactly resonant with the transition. Spontaneous emission from $|3\rangle$ can result in an atom being in this dark state or another coherent state, known as a bright state. Therefore, in a collection of atoms, over time, decay into the dark state will inevitably result in the system being "trapped" coherently in that state, a phenomenon known as coherent population trapping.

1. P. Lambropoulos & D. Petrosyan (2007). Fundamentals of Quantum Optics and Quantum Information. Berlin; New York: Springer.
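As a closing illustration (a sketch with arbitrary Rabi frequencies, using a simple fixed-step Runge–Kutta integrator), one can integrate the amplitude equations above and watch the dark state stay put while an orthogonal "bright" superposition does not:

```python
import numpy as np

Wp, Wc = 1.0, 2.0      # probe and coupling Rabi frequencies (arbitrary)
W = np.hypot(Wp, Wc)   # effective Rabi frequency

def deriv(c):
    # c1' = (i/2) Wp c3,  c2' = (i/2) Wc c3,  c3' = (i/2)(Wp c1 + Wc c2)
    c1, c2, c3 = c
    return np.array([0.5j * Wp * c3,
                     0.5j * Wc * c3,
                     0.5j * (Wp * c1 + Wc * c2)])

def evolve(c, t_final=20.0, dt=0.001):
    for _ in range(int(t_final / dt)):   # classic RK4 steps
        k1 = deriv(c)
        k2 = deriv(c + 0.5 * dt * k1)
        k3 = deriv(c + 0.5 * dt * k2)
        k4 = deriv(c + dt * k3)
        c = c + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return c

dark = np.array([Wc / W, -Wp / W, 0], dtype=complex)
print(np.abs(evolve(dark)) ** 2)    # populations unchanged; |3> stays empty

bright = np.array([Wp / W, Wc / W, 0], dtype=complex)
print(np.abs(evolve(bright)) ** 2)  # population cycles through state |3>
```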
Dirac: From Quantum Field Theory to Antimatter

Paul Adrien Maurice Dirac (1902 – 1984) was given the moniker of “the strangest man” by Niels Bohr while he was reminiscing about the many great scientists with whom he had worked over the years [1]. It is a moniker that resonates with the innumerable “Dirac stories” that abound in the mythology of the hallways of physics departments around the world. Dirac was awkward, shy, a loner, rarely said anything, was completely literal, had not the slightest comprehension of art or poetry, nor any clear understanding of human interpersonal interaction.

Dirac was also brilliant, providing the theoretical foundation for the central paradigm of modern physics—quantum field theory. The discovery of the Higgs boson in 2012, a human achievement that capped nearly a century of scientific endeavor, rests solidly on the theory of quantum fields that permeate space.
The Higgs particle, when it pops into existence at the Large Hadron Collider in Geneva, is a singular quantum excitation of the Higgs field, a field that usually resides in a vacuum state, frothing with quantum fluctuations that imbue all particles—and you and me—with mass. The Higgs field is Dirac’s legacy.

Copenhagen and Bohr

Although Dirac as a young scientist was initially enthralled with relativity theory, he was working under Ralph Fowler (1889 – 1944) in the physics department at Cambridge in 1923 when he had the chance to read advanced proofs of Heisenberg’s matrix mechanics paper. This chance event launched him on his own trajectory in quantum theory. After Dirac was awarded his doctorate from Cambridge in 1926, he received a stipend that sent him to work with Niels Bohr (1885 – 1962) in Copenhagen—ground zero of the new physics.

During his time there, Dirac became famous for taking long walks across Copenhagen as he played about with things in his mind, performing mental juggling of abstract symbols, envisioning how they would permute and act. His attention was focused on the electromagnetic field and how it interacted with the quantized states of atoms. Although the electromagnetic field was the classical field of light, it was also the quantum field of Einstein’s photon, and he wondered how the quantized harmonic oscillators of the electromagnetic field could be generated by quantum wavefunctions acting as operators. But acting on what? He decided that, to generate a photon, the wavefunction must operate on a state that had no photons—the ground state of the electromagnetic field known as the vacuum state.

In late 1926, nearing the end of his stay in Copenhagen with Bohr, Dirac put these thoughts into their appropriate mathematical form and began work on two successive manuscripts. The first manuscript contained the theoretical details of the non-commuting electromagnetic field operators. He called the process of generating photons out of the vacuum “second quantization”. This phrase is a bit of a misnomer, because there is no specific “first quantization” per se, although he was probably thinking of the quantized energy levels of Schrödinger and Heisenberg. In second quantization, the classical field of electromagnetism is converted to an operator that generates quanta of the associated quantum field out of the vacuum (and also annihilates photons back into the vacuum). The creation operators can be applied again and again to build up an N-photon state containing N photons that obey Bose-Einstein statistics, as they must, as required by their integer spin, agreeing with Planck’s blackbody radiation.

Dirac then went further to show how an interaction of the quantized electromagnetic field with quantized energy levels involved the annihilation and creation of photons as they promoted electrons to higher atomic energy levels, or demoted them through stimulated emission. Very significantly, Dirac’s new theory explained the spontaneous emission of light from an excited electron level as a direct physical process that creates a photon carrying away the energy as the electron falls to a lower energy level. Spontaneous emission had been explained first by Einstein more than ten years earlier when he derived the famous A and B coefficients, but Einstein’s arguments were based on the principle of detailed balance, which is a thermodynamic argument.
It is impressive that Einstein’s deep understanding of thermodynamics and statistical mechanics could allow him to derive the necessity of both spontaneous and stimulated emission, but the physical mechanism for these processes was inferred rather than derived. Dirac, in late 1926, had produced the first direct theory of photon exchange with matter. This was the birth of quantum electrodynamics, known as QED, and the birth of quantum field theory [2].

Fig. 1 Paul Dirac in his early days.

Göttingen and Born

Dirac’s next stop on his postdoctoral fellowship was in Göttingen to work with Max Born (1882 – 1970) and the large group of theoreticians and mathematicians who were like electrons in a cloud orbiting around the nucleus represented by the new quantum theory. Göttingen was second only to Copenhagen as the Mecca for quantum theorists. Hilbert was there and von Neumann too, as well as the brash American J. Robert Oppenheimer (1904 – 1967) who was finishing his PhD with Born. Dirac and Oppenheimer struck up an awkward friendship. Oppenheimer was considered arrogant by many others in the group, but he was in awe of Dirac who arrived with his manuscript on quantum electrodynamics ready for submission. Oppenheimer struggled at first to understand Dirac’s new approach to quantizing fields, but he quickly grasped the importance, as did Pascual Jordan (1902 – 1980), who was also in Göttingen.

Jordan had already worked on ideas very close to Dirac’s on the quantization of fields. He and Dirac seemed to be going down the same path, independently arriving at very similar conclusions around the same time. In fact, Jordan was often a step ahead of Dirac, tending to publish just before Dirac, as with non-commuting matrices, transformation theory and the relationship of canonical transformations to second quantization. However, Dirac’s paper on quantum electrodynamics was a masterpiece in clarity and comprehensiveness, launching a new field in a way that Jordan had not yet achieved with his own work. But because of the closeness of Jordan’s thinking to Dirac’s, he was able to see immediately how to extend Dirac’s approach. Within the year, he published a series of papers that established the formalism of quantum electrodynamics as well as quantum field theory. With Pauli, he systematized the operators for creation and annihilation of photons [3]. With Wigner, he developed second quantization for de Broglie matter waves, defining creation and annihilation operators that obeyed the Pauli exclusion principle of electrons [4]. Jordan was on a roll, forging ahead of Dirac on extensions of quantum electrodynamics and field theory, but Dirac was about to eclipse Jordan once and for all.

St. John’s at Cambridge

At the end of the Spring semester in 1927, Dirac was offered a position as a fellow of St. John’s College at Cambridge, which he accepted, returning to England to begin his life as a college professor. During the summer and into the Fall, Dirac returned to his first passion in physics, relativity, which had yet to be successfully incorporated into quantum physics. Oskar Klein and Walter Gordon had made initial attempts at formulating relativistic quantum theory, but they could not correctly incorporate the spin properties of the electron, and their wave equation had the bad habit of producing negative probabilities. Probabilities went negative because the Klein-Gordon equation had two time derivatives instead of one.
The reason it had two (while the non-relativistic Schrödinger equation has only one) is because space-time symmetry required the double space derivative of the Schrödinger equation to be paired with a double time derivative. Dirac, with creative insight, realized that the problem could be flipped by requiring the single time derivative to be paired with a single space derivative. The problem was that a single space derivative did not seem to make any sense [5].

St. John’s College at Cambridge

As Dirac puzzled how to get an equation with only single derivatives, he was playing around with Pauli spin matrices and hit on a simple identity that related the spin matrices to the electron momentum. At first he could not get the identity to apply to four-dimensional relativistic momenta using the usual 2×2 spin matrices. Then he realized that four-dimensional space-time could be captured if he expanded Pauli’s 2×2 spin matrices to 4×4 spin matrices, and all of a sudden he had a new equation with four-dimensional space-time symmetry with single derivatives on space and time. As a test of his new equation, he calculated fine details of the experimentally-measured hydrogen spectrum, known as the fine structure, which had resisted theoretical explanation, and he derived answers in close agreement with experiment. He also showed that the electron had spin-1/2, and he calculated its magnetic moment. He finished his manuscript at the end of the Fall semester in 1927, and the paper was published in early 1928 [6]. His relativistic quantum wave equation was an instant sensation, becoming known for all time as “the Dirac Equation”. He had succeeded at finding a correct and long-sought relativistic quantum theory where many before had failed. It was a crowning achievement, placing Dirac firmly in the firmament of the quantum theorists.

Fig. 2 The relativistic Dirac equation. The wavefunction is a four-component spinor. The gamma-del product is a 4×4 matrix operator. The time and space derivatives are both first-order operators.

In the process of ridding the Klein-Gordon equation of negative probability, which Dirac found abhorrent, his new equation created an infinite number of negative energy states, which he did not find abhorrent. It is perhaps a matter of taste what one theorist is willing to accept over another, and for Dirac, negative energies were better than negative probabilities. Even so, one needed to deal with an infinite number of negative energy states in quantum theory, because they are available to quantum transitions.

In 1929 and 1930, as Dirac was writing his famous textbook on quantum theory, he became intrigued by the similarity between the positive and negative electron states of the vacuum and the energy levels of valence electrons on atoms. An electron in a state outside a filled electron shell behaves very much like a single-electron atom, like sodium and lithium with their single valence electrons. Conversely, an atomic shell that has one electron less than a full complement can be described as having a “hole” that behaves “as if” it were a positive particle. It is like a bubble in water. As water sinks, the bubble rises to the top of the water level. For electrons, if all the electrons go one way in an electric field, then the hole goes the opposite direction, like a positive charge.
Dirac took this analogy of nearly-filled atomic shells and applied it to the vacuum states of the electron, viewing the filled negative energy states like the filled electron shells of atoms. If there is a missing electron, a hole in this infinite sea, then it would behave as if it had positive charge. Initially, Dirac speculated that the “hole” was the proton, and he even wrote a paper on that possibility. But Oppenheimer pointed out that the idea was inconsistent with observations, especially the inability of the electron and proton to annihilate, and that the ground state of the infinite electron sea must be completely filled. Hermann Weyl further pointed out that the electron-proton theory did not have the correct symmetry, and Dirac had to rethink.

In early 1931 he hit on an audacious solution to the puzzle. What if the hole in the infinite negative energy sea did not just behave like a positive particle, but actually was a positive particle, a new particle that Dirac dubbed the “anti-electron”? The anti-electron would have the same mass as the electron, but would have positive charge. He suggested that such particles might be generated in high-energy collisions in vacuum, and he finished his paper with the suggestion that there also could be an anti-proton with the mass of the proton but with negative charge. In this singular paper, titled “Quantised Singularities in the Electromagnetic Field” and published in 1931, Dirac predicted the existence of antimatter. A year later the positron was discovered by Carl David Anderson at Caltech. Anderson had originally called the particle the positive electron, but a journal editor of the Physical Review changed it to positron, and the new name stuck.

Fig. 3 An electron-positron pair is created by the absorption of a photon (gamma ray). The positron can be viewed as a hole left behind in the otherwise filled sea of negative-energy electron states. (Momentum conservation is satisfied if a near-by heavy particle takes up the recoil momentum.)

The prediction and subsequent experimental validation of antimatter stand out in the history of physics in the 20th Century. In previous centuries, theory was performed mainly in the service of experiment, explaining interesting new observed phenomena either as consequences of known physics, or creating new physics to explain the observations. Quantum theory, revolutionary as a way of understanding nature, was developed to explain spectroscopic observations of atoms and molecules and gases. Similarly, the precession of the perihelion of Mercury was a well-known phenomenon when Einstein used his newly developed general relativity to explain it. As a counter example, Einstein’s prediction of the deflection of light by the Sun was something new that emerged from theory. This is one reason why Einstein became so famous after Eddington’s expedition to observe the deflection of apparent star locations during the total eclipse. Einstein had predicted something that had never been seen before. Dirac’s prediction of the existence of antimatter similarly is a triumph of rational thought, following the mathematical representation of reality to an inevitable conclusion that cannot be ignored, no matter how wild and initially unimaginable it is. Dirac went on to receive the Nobel prize in Physics in 1933, sharing the prize that year with Schrödinger (Heisenberg won it the previous year, in 1932).

[1] Farmelo, G., “The Strangest Man: The Hidden Life of Paul Dirac” (Basic Books, 2011)
“The quantum theory of the emission and absorption of radiation.” Proceedings of the Royal Society of London, Series A 114(767): 243-265; Dirac, P. A. M. (1927). “The quantum theory of dispersion.” Proceedings of the Royal Society of London, Series A 114(769): 710-728.

[3] Jordan, P. and W. Pauli, Jr. (1928). “Zur Quantenelektrodynamik ladungsfreier Felder” [On the quantum electrodynamics of charge-free fields]. Zeitschrift für Physik 47(3-4): 151-173.

[4] Jordan, P. and E. Wigner (1928). “Über das Paulische Äquivalenzverbot” [On the Pauli exclusion principle]. Zeitschrift für Physik 47(9-10): 631-651.

[5] This is because two space derivatives measure the curvature of the wavefunction, which is related to the kinetic energy of the electron.

[6] Dirac, P. A. M. (1928). “The quantum theory of the electron.” Proceedings of the Royal Society of London, Series A 117(778): 610-624; Dirac, P. A. M. (1928). “The quantum theory of the electron – Part II.” Proceedings of the Royal Society of London, Series A 118(779): 351-361.
A Single 3N-Dimensional Universe: Splitting vs. Decoherence

A common way of viewing Everettian quantum mechanics is to say that in an act of measurement, the universe splits into two. There is a world in which the electron has x-spin up, the pointer points to “x-spin up,” and we believe the electron has x-spin up. There is another world in which the electron has x-spin down, the pointer points to “x-spin down,” and we believe the electron has x-spin down. This is why Everettian quantum mechanics is often called “the many worlds interpretation.” Because the contrary pointer readings exist in different universes, no one notices that both are read. This way of interpreting Everettian quantum mechanics raises many metaphysical difficulties. Does the pointer itself split in two? Or are there two numerically distinct pointers? If the whole universe splits into two, doesn’t this wildly violate conservation laws? There is now twice as much energy and momentum in the universe as there was just before the measurement. How plausible is it to say that the entire universe splits? Although this “splitting universes” reading of Everett is popular (Deutsch 1985 speaks this way in describing Everett’s view, a reading originally due to Bryce DeWitt), fortunately, a less puzzling interpretation has been developed. This idea is to read Everett’s theory as he originally intended. Fundamentally, there is no splitting, only the evolution of the wave function according to the Schrödinger dynamics. To make this consistent with experience, it must be the case that there are in the quantum state branches corresponding to what we observe. However, as, for example, David Wallace has argued (2003, 2010), we need not view these branches (indeed, the branching process itself) as fundamental. Rather, these many branches or many worlds are patterns in the one universal quantum state that emerge as the result of its evolution. Wallace, building on work by Simon Saunders (1993), argues that there is a kind of dynamical process, known technically as “decoherence,” that can ground the emergence of quasi-classical branches within the quantum state. Decoherence is a process that involves an interaction between two systems (one of which may be regarded as a system and the other its environment) in which distinct components of the quantum state come to evolve independently of one another. That this occurs is a result of the wave function’s Hamiltonian, the kind of system it is. A wave function that (due to the kind of state it started out in and the Schrödinger dynamics) exhibits decoherence will enter into states capable of representation as a sum of noninteracting terms in a particular basis (e.g., a position basis). When this happens, the system’s dynamics will appear classical from the perspective of the individual branches. Note that the facts about the quantum state decohering are not built into the fundamental laws. Rather, this is an accidental fact depending on the kind of state our universe started out in. The existence of these quasi-classical states is not a fundamental fact either, but something that emerges from the complex behavior of the fundamental state. The sense in which there are many worlds in this way of understanding Everettian quantum mechanics is therefore not the same as it is on the more naive approach already described. Fundamentally there is just one universe evolving according to the Schrödinger equation (or whatever is its relativistically appropriate analog).
However, because of the special way this one world evolves, and in particular because parts of this world do not interfere with each other and can each on their own ground the existence of quasi-classical macro-objects that look like individual universes, it is correct in this sense to say (nonfundamentally) there are many worlds. As metaphysicians, we are interested in the question of what the world is fundamentally like according to quantum mechanics. Some have argued that the answer these accounts give us (setting aside Bohmian mechanics for the moment) is that fundamentally all one needs to believe in is the wave function. What is the wave function? It is something that, as we have already stated, may be described as a field on configuration space, a space where each point can be taken to correspond to a configuration of particles, a space that has 3N dimensions where N is the number of particles. So, fundamentally, according to these versions of quantum mechanics (orthodox quantum mechanics, Everettian quantum mechanics, spontaneous collapse theories), all there is fundamentally is a wave function, a field in a high-dimensional configuration space. The view that the wave function is a fundamental object and a real, physical field on configuration space is today referred to as “wave function realism.” The view that such a wave function is everything there is fundamentally is wave function monism. To understand wave function monism, it will be helpful to see how it represents the space on which the wave function is spread. We call this space “configuration space,” as is the norm. However, note that on the view just described, this is not an apt name, because what is supposed to be fundamental on this view is the wave function, not particles. So, although the points in this space might correspond in a sense to particle configurations, what this space is fundamentally is not a space of particle configurations. Likewise, although we’ve represented the number of dimensions configuration space has as depending on the number N of particles in a system, this space’s dimensionality should not really be construed as dependent on the number of particles in a system. Nevertheless, the wave function monist need not be an eliminativist about particles. As we have seen, for example, in the Everettian approach, wave function monists can allow that there are particles, derivative entities that emerge out of the decoherent behavior of the wave function over time. Wave function monists favoring other solutions to the measurement problem can also allow that there are particles in this derivative sense. But the reason the configuration space on which the wave function is spread has the number of dimensions it does is not, in the final analysis, that there are particles. This is rather a brute fact about the wave function, and this in turn is what grounds the number of particles there are.

The Wave Function: Essays on the Metaphysics of Quantum Mechanics. Edited by Alyssa Ney and David Z Albert (pp. 33-34, 36-37).
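The decoherence invoked above can be made concrete with a toy calculation. The sketch below is not from the excerpt; it assumes the textbook toy model in which a system qubit in an equal superposition couples to N independent environment qubits, and the per-qubit distinguishability angle theta is an arbitrary illustrative choice:

import numpy as np

# Conditional on the system branch (|0> or |1>), each environment qubit
# ends up in one of two slightly different states e0, e1.
theta = 0.15                                   # illustrative angle
e0 = np.array([1.0, 0.0])                      # env state given system |0>
e1 = np.array([np.cos(theta), np.sin(theta)])  # env state given system |1>
overlap = np.dot(e0, e1)                       # <e0|e1> = cos(theta)

# For the branched state (|0>|E0> + |1>|E1>)/sqrt(2) with product
# environment states, the system's reduced density matrix is
#   rho = 0.5 * [[1, c], [c, 1]],  with c = <E0|E1> = cos(theta)**N,
# so the interference (off-diagonal) term decays exponentially in N.
for N in [0, 1, 5, 20, 100]:
    c = overlap ** N
    print(f"N = {N:3d} environment qubits: branch overlap c = {c:.6f}")

The branches never disappear in this model; they simply stop interfering, which is the sense in which each can ground a quasi-classical world of its own.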
Classical conformal blocks and isomonodromic deformations

Jörg Teschner

Department of Mathematics, University of Hamburg, Bundesstrasse 55, 20146 Hamburg, Germany; DESY Theory, Notkestrasse 85, 22607 Hamburg, Germany

1 Introduction

The classical limit of conformal field theories is of interest for various reasons. It gives the link between a Lagrangian description of a CFT and the abstract representation-theoretic definition of its correlation functions provided by the bootstrap approach. It is also crucial for understanding several aspects of the geometry encoded in the correlation functions of conformal field theory. This is relevant in particular for various models of holographic correspondences between two-dimensional CFT and three-dimensional quantum gravity investigated in the context of the AdS$_3$/CFT$_2$-correspondence. We are here going to demonstrate that conformal field theory is related to the isomonodromic deformation problem in the classical limit $c \to \infty$. The Schlesinger system describes monodromy preserving deformations of first order matrix differential equations on the Riemann sphere with regular singularities. It has an alternative description, known as the Garnier system, describing the isomonodromic deformations of the second order ODE naturally associated to the first order matrix differential equation. We refer to [IKSY] for a review and further references. It will be shown that one may describe the leading asymptotics of Virasoro conformal blocks with a suitable number of insertions of degenerate representations in terms of the generating function for a change of coordinates between two natural sets of Darboux coordinates for the Garnier system. One set of coordinates is natural for the Hamiltonian formulation of the Garnier system [IKSY]; the other coordinates will be called complex Fenchel-Nielsen coordinates, parameterising the space of monodromy data of the differential equation on the Riemann sphere. The results of this paper characterise the leading classical asymptotics of Virasoro conformal blocks completely, and clarify in which sense conformal field theory represents a quantisation of the isomonodromic deformation problem.

2 The Garnier system

2.1 Basic definitions

The Garnier system describes monodromy preserving deformations of second order differential equations with potential $t(y)$ of the form (2.1). The differential equation has regular singular points at the punctures $z_1,\dots,z_n$ and at the points $u_1,\dots,u_d$. The parameters $\delta_r$ will be fixed once and for all. The singular points at $u_1,\dots,u_d$ are special. They are called apparent singularities if the parameters are not independent but related through the constraints (2.2). These constraints imply that the monodromy around each $u_k$ is $-1$. Indeed, having monodromy $-1$ is easily seen to be equivalent to the fact that there exists a solution of the differential equation with the local form of $(y-u_k)^{-1/2}$ times a power series, whose logarithmic derivative satisfies the Riccati equation. The Riccati equation determines the coefficients recursively in terms of the expansion coefficients of $t(y)$. At second order one finds the relation (2.2) as necessary and sufficient condition for the existence of a solution to the recursion relations following from the Riccati equation. In order to define the Garnier system we will choose the number of apparent singularities to be $d = n-3$. More general values of $d$ will be discussed later. We shall furthermore assume that the differential equation is regular at $y = \infty$, which implies three further conditions (2.3) on the parameters. The constraints (2.3) determine three of the accessory parameters $H_r$ in terms of the remaining ones. Equations (2.2) can then be solved, allowing us to express the remaining $H_r$ as functions of the $u_k$, $v_k$ and $z_r$. The "potential" $t(y)$ thereby gets determined as a function of these variables.
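In one standard set of conventions (assumed here; signs and normalisations differ across the literature), the differential equation and its potential (2.1) read

\[
\left(\partial_y^2 + t(y)\right)\chi(y) = 0,
\qquad
t(y) = \sum_{r=1}^{n}\left[\frac{\delta_r}{(y-z_r)^2} + \frac{H_r}{y-z_r}\right]
 + \sum_{k=1}^{d}\left[-\frac{3}{4\,(y-u_k)^2} + \frac{v_k}{y-u_k}\right].
\]

Expanding $t(y) = -\tfrac{3}{4}(y-u_k)^{-2} + v_k\,(y-u_k)^{-1} + t_{k,0} + O(y-u_k)$ near $u_k$ and inserting the series $\chi(y) = (y-u_k)^{-1/2}\bigl(1 + c_1\,(y-u_k) + c_2\,(y-u_k)^2 + \dots\bigr)$ gives $c_1 = v_k$ at first order and, at second order, the apparent-singularity constraint (2.2) in the form

\[
v_k^2 + t_{k,0} = 0 .
\]

The exponents at $u_k$ are $-\tfrac12$ and $\tfrac32$; with (2.2) satisfied no logarithms appear, so the monodromy around $u_k$ is indeed $-1$, and the $2(n-3)$ variables $(u_k, v_k)$ serve as the Darboux coordinates referred to below.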
It can be shown [Ok, IKSY] that the Hamiltonian equations of motion ensure that the monodromy of the differential equation stays constant under variations of the parameters satisfying (2.4). The coordinates $(u_k, v_k)$ are Darboux coordinates for the natural symplectic structure of the Garnier system.

2.2 Relation to the Schlesinger system

More widely known than the Garnier system may be the Schlesinger system, describing isomonodromic deformations of holomorphic $\mathfrak{sl}_2$-connections of the form (2.5) on the Riemann sphere, with matrix-valued residues at the punctures $z_r$. We will assume that the residues satisfy a fixed normalisation at infinity. Allowing the residues to depend on the parameters $z_r$ in a suitable way, one may ensure that the monodromy of the connection does not depend on the $z_r$. The Schlesinger system is the system of nonlinear partial differential equations describing how to cancel variations of the $z_r$ by corresponding variations of the residues. The Garnier system is nothing but the Schlesinger system for $\mathfrak{sl}_2$ in disguise. The relation between these two dynamical systems is found by representing the holomorphic connection containing the dynamical variables of the Schlesinger system in a gauge-transformed form. It is straightforward to find a matrix function accomplishing this, provided one allows it to have square-root branch points at the zeros of one matrix element of the connection. It is furthermore straightforward to show that the function representing the only non-constant matrix element of the transformed connection will have a singularity of the form (2.6) near each such simple zero. The coefficients appearing in the Laurent expansion (2.6) must satisfy a relation which is the necessary and sufficient condition for the solutions to have monodromy proportional to the identity around such a zero. Denoting the zeros generically by $u_k$ and the corresponding residues by $v_k$, one recovers exactly the form of the function $t(y)$ considered in the theory of the Garnier system. The gauge transformation described above defines a map from the Schlesinger system to the Garnier system. It is known that this map relates the natural symplectic structures [DM].

2.3 Complex Fenchel-Nielsen coordinates for moduli spaces of flat connections

The monodromy data represent the conserved quantities which remain constant in the Hamiltonian flows of the Garnier system, by definition. The goal of this subsection is to introduce useful coordinates for the space of monodromy data. The holonomy map and the Riemann-Hilbert correspondence between flat connections and representations of the fundamental group relate the moduli space of flat $SL(2,\mathbb{C})$-connections on the punctured sphere to the so-called character variety. One useful set of coordinates is given by the trace functions associated to simple closed curves. Minimal sets of trace functions that can be used to parameterise the character variety can be identified using pants decompositions. A pants decomposition is defined by cutting along simple non-intersecting closed curves. Each curve separates two pairs of pants, the union of which will be a four-holed sphere. It therefore suffices to introduce two coordinates for the flat connections on each four-holed sphere. Let us therefore restrict attention to the case $n = 4$ in the following. Conjugacy classes of irreducible representations of the fundamental group of the four-punctured sphere are uniquely specified by seven invariants generating the algebra of invariant polynomial functions. The monodromies are associated to the curves depicted in Figure 1.

Figure 1: Basis of loops of the four-punctured sphere and its decomposition into pairs of pants.

These trace functions satisfy the quartic equation (2.8). The affine algebraic variety defined by (2.8) is a concrete representation for the character variety of the four-punctured sphere.
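For the four-punctured sphere the quartic relation (2.8) is the classical Fricke identity. Writing $p_i = \mathrm{tr}\,M_i$ for the traces of the monodromies around the four punctures and $x_s = \mathrm{tr}(M_1M_2)$, $x_t = \mathrm{tr}(M_2M_3)$, $x_u = \mathrm{tr}(M_1M_3)$ for the traces around pairs of punctures (which pair each curve of Figure 1 encircles is a convention assumed here), it reads

\[
x_s x_t x_u + x_s^2 + x_t^2 + x_u^2
- (p_1 p_2 + p_3 p_4)\,x_s - (p_2 p_3 + p_1 p_4)\,x_t - (p_1 p_3 + p_2 p_4)\,x_u
+ p_1^2 + p_2^2 + p_3^2 + p_4^2 + p_1 p_2 p_3 p_4 - 4 = 0 .
\]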
For fixed choices of the $p_i$ in (2.7a), equation (2.8) describes the character variety as a cubic surface in the variables $(x_s, x_t, x_u)$. This surface admits a parameterisation in terms of coordinates of the form (2.9), together with similar formulae for the remaining trace functions. Using pants decompositions as described above one may define such trace coordinates for each four-holed sphere defined above. In this way one may define a pair of coordinates associated to each cutting curve. Taken together, these tuples form a system of coordinates for the character variety. It is known that these coordinates are a set of Darboux coordinates for the moduli space of flat $SL(2,\mathbb{C})$-connections [NRS], bringing the natural symplectic structure on this space to a simple canonical form. We note that the trace functions are globally well-defined, and that the Hamiltonian flows they generate are linear in the conjugate variables. One may therefore view these coordinates as action-angle variables making the integrable structure of the character variety manifest.

3 Classical limit of Virasoro conformal blocks

We are now going to explain how the Garnier system arises in the classical limit of certain Virasoro conformal blocks with degenerate field insertions.

3.1 Conformal blocks with degenerate fields

Conformal blocks, the holomorphic building blocks of physical correlation functions in conformal field theories, can be defined as solutions to the conformal Ward identities [BPZ]. Our discussion will be brief, referring for the relevant background on conformal field theory to reviews such as [T17a]. Conformal blocks can be defined for all punctured Riemann surfaces, with representations of the Virasoro algebra assigned to each puncture. We will consider highest weight representations of the Virasoro algebra with central charge $c$ and highest weight vectors satisfying the usual highest-weight conditions. It will be convenient to represent the highest weights through the parameterisation recalled below. We will consider Virasoro conformal blocks on the punctured sphere with generic representations assigned to the punctures at $z_1,\dots,z_n$, and we will assume suitable genericity of their weights to simplify some formulae. Degenerate representations are associated to the punctures at $u_1,\dots,u_d$, and degenerate representations are associated to the punctures at $y$ and at a base-point $y_0$, respectively. The corresponding chiral partition functions satisfy the differential equations (3.12), using tuple notations for the collections of variables. Equation (3.12a) reflects the decoupling of the null-vector in the Verma module associated to the representation assigned to the point $y$, while (3.12b) is equivalent to the decoupling of the null-vectors in the representations associated to the punctures $u_1,\dots,u_d$. Equation (3.12c) simply reflects the global $SL(2)$-invariance on the sphere. It turns out that the insertion of the degenerate representations modifies the conformal blocks only mildly in the classical limit. We will use the representation at $y$ as a "probe", exploiting the information provided by the associated differential equation, and the analytic continuation of its solutions. The representation at $y_0$ will only serve the task of defining a convenient "base-point".

3.2 Gluing construction of conformal blocks

Useful bases for the spaces of conformal blocks can be constructed by means of the gluing construction. This construction allows one to construct conformal blocks on arbitrary Riemann surfaces from the conformal blocks associated to the three-punctured spheres appearing in a pants decomposition. For each of the simple closed curves used to define a given pants decomposition one has to specify a representation of the Virasoro algebra.
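The representation-theoretic data entering these constructions are standard; in the conventions most common in this context (assumed here) they are

\[
c = 1 + 6Q^2, \qquad Q = b + b^{-1}, \qquad
L_0\,v_\Delta = \Delta\,v_\Delta, \quad L_n\,v_\Delta = 0 \ \ (n > 0), \qquad
\Delta = \alpha\,(Q - \alpha),
\]

with degenerate representations of types $(2,1)$ and $(1,2)$ having highest weights $\Delta_{2,1} = -\tfrac12 - \tfrac34\,b^2$ and $\Delta_{1,2} = -\tfrac12 - \tfrac34\,b^{-2}$. In the limit $b \to 0$ one has $b^2\Delta_{1,2} \to -\tfrac34$, matching the double poles at the apparent singularities in (2.1), while $\Delta_{2,1}$ stays finite, which is what makes such an insertion a light "probe".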
The gluing of the conformal blocks on the pairs of pants to a conformal block on the glued surface is performed by summing over bases of the representations assigned to the cutting curves. To be specific, let us start from a sphere with a fixed pants decomposition defined by cutting along non-intersecting simple closed curves. Out of this surface with the given pants decomposition let us construct a punctured sphere with additional punctures by first cutting sufficiently small non-intersecting discs out of the pairs of pants appearing in the given pants decomposition. Then glue twice-punctured discs back in such a way that the resulting surface has punctures at the required points. In the pair of pants containing the probe let us finally replace a disc by a three times punctured disc containing $y$ and $y_0$, with two of its punctures contained in a smaller disc inside it. The result of this construction is a sphere with punctures and a fixed pants decomposition. The conformal blocks resulting from the gluing construction are determined uniquely up to normalisation by the assignment of representations to the cutting curves: generic representations to the curves of the original pants decomposition, degenerate representations to the curves surrounding the new punctures, and distinguished representations to the two remaining curves introduced in the last step. The chiral partition functions of the conformal blocks defined in this way depend holomorphically on all of their variables.

3.3 Classical limit of null vector decoupling equations

We will consider the limit $b \to 0$ with the rescaled conformal weights kept fixed. One may notice that the differential equations (3.12) can be solved in the form (3.13). This ansatz will solve equation (3.12a) provided the second factor is built from two solutions of the differential equation with $t(y)$ of the form (2.1), where the parameters are obtained from the data of the conformal blocks. The equations (3.12b) furthermore imply that the parameters are not independent but related through the constraints (2.2). Equations (3.12c) finally reproduce (2.3). If $d = n-3$, one has just as many equations as one needs to determine the accessory parameters as functions of the remaining variables, as was done in the definition of the Garnier system. This is how the kinematics of the Garnier system is recovered from the classical limit of Virasoro conformal blocks in this case, as first observed in [T10]. Similar observations have been exploited in [LLNZ]. The cases with $d < n-3$ will be discussed later.

3.4 Verlinde loop operators

Useful additional information is provided by the action of the Verlinde loop operators studied in [AGGTV, DGOT] on spaces of conformal blocks; see [T17a, Section 2.7] for a brief review. The basic idea behind the definition of the Verlinde loop operators is as follows. Given a conformal block on a surface, there is a canonical way to define a conformal block on a surface having an extra puncture with vacuum representation assigned to it, and vice versa. A conformal block similarly defines a conformal block on a surface obtained by replacing a disc by a disc containing two punctures with degenerate representations assigned to both punctures, and vacuum representation assigned to the boundary of the disc. If the conformal block is defined by the gluing construction one may use the null vector decoupling equations (3.12a) to compute its analytic continuation along contours starting and ending at the base-point. The contribution which has vacuum representation assigned to the boundary of the disc may be canonically identified with a conformal block on the original surface.
This defines an operator on the space of conformal blocks associated to the original surface. The algebra generated by these operators is a non-commutative deformation of the Poisson algebra of trace functions on the character variety [DGOT, TV]. We had previously associated trace functions to each of the curves defining the pants decomposition; the Verlinde loop operators associated to the corresponding contours will be denoted accordingly. We will need the results of [DGOT, AGGTV] for the Verlinde loop operators. In the case of the operators associated to the cutting curves a simple diagonal result was found. (Comparing to [DGOT], we changed the definition slightly to absorb a constant factor.) The expressions for the other operators are more complicated. They take the form (3.15). Explicit formulae for the coefficients can be found in [AGGTV, DGOT].

3.5 Classical limit of Verlinde loop operators

Using the results of the explicit calculations of these operators from [AGGTV, DGOT] we will now identify the relevant variables with complex Fenchel-Nielsen coordinates on the character variety in the limit considered above. In this limit one may compute the Verlinde loop operators in two different ways. One may, on the one hand, use the factorisation (3.13) in order to show that the classical limit of the Verlinde loop operators can be identified with traces of the monodromies of the differential operator. This can be compared to the classical limit of the explicit formulae (3.15) for the Verlinde loop operators, which turn out to be identical to the expressions for the trace functions given in Section 2.3 when the conformal blocks are normalised appropriately. (Changing the normalisation of the three-point conformal blocks will change the form of the coefficients in (3.15b); there exists a choice of normalisation reproducing the corresponding coefficients in (2.9).) In this way it may be shown that the classical limit of the loop operator eigenvalues coincides with the value of the coordinate defined via (2.9).

3.6 Classical limit of conformal blocks as generating function

Recall that the coordinates are a set of Darboux coordinates for the moduli space of flat $SL(2,\mathbb{C})$-connections. Given the function characterising the classical limit of the conformal blocks, one may invert the relations (3.16) to define one half of the new coordinates, and then define the other half using (3.14). This is just the standard procedure to define a canonical transformation in Hamiltonian mechanics in terms of generating functions. The coordinates defined in this way will therefore be another set of Darboux coordinates for the natural symplectic structure, which is related to the natural symplectic structure of the Garnier system as in (3.17). The Riemann-Hilbert correspondence defines a $z$-dependent change of variables between the two sets of Darboux coordinates. Fixing the monodromy data by imposing the condition of constant monodromy defines commuting flows of the remaining variables. The Hamiltonian form (2.4) of the differential equations governing these flows can be found as follows. We are considering a canonical transformation in a non-autonomous Hamiltonian system generated by the classical conformal block. Having dynamics in one set of variables described in Hamiltonian form (2.4) is equivalent to having dynamics in the other set generated by the transformed Hamiltonian. For describing isomonodromic deformations we choose the transformed Hamiltonians to vanish, implying that the functions related to the classical conformal block via (3.14) are the Hamiltonians to be used when representing the isomonodromic flows in the Hamiltonian form (2.4). The fact that the functions defined in (3.14) must coincide with the Hamiltonians of the Garnier system follows from the observation that both are uniquely determined by the system of linear equations (2.2).
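Schematically, and with signs that are a convention assumed here, the structure described in this subsection is that of a classical generating function $\mathcal{W} = \mathcal{W}(u;k;z)$, the leading term of the classical conformal blocks, with

\[
v_k = \frac{\partial \mathcal{W}}{\partial u_k}, \qquad
w_r = -\,\frac{\partial \mathcal{W}}{\partial k_r}, \qquad
H_i = -\,\frac{\partial \mathcal{W}}{\partial z_i},
\qquad\text{i.e.}\qquad
d\mathcal{W} = \sum_k v_k\,du_k - \sum_r w_r\,dk_r - \sum_i H_i\,dz_i ,
\]

where $(k_r, w_r)$ denote the complex Fenchel-Nielsen coordinates. The first two relations exhibit $\mathcal{W}$ as generating the change of Darboux coordinates $(u,v) \to (k,w)$; the last identifies its $z$-derivatives with the isomonodromic Hamiltonians.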
We recover, in an independent way, the fact that the coordinates $(u_k, v_k)$ are Darboux coordinates for the natural Poisson structure of the Garnier system. In this way we have fully reproduced the Hamiltonian representation of the Garnier system describing the isomonodromic deformations of the differential equation with $t(y)$ of the form (2.1) from conformal field theory.

4 Comparison with similar results

We would here like to compare our results to some known results of a similar nature.

4.1 Genus zero analog of Kawai's theorem

To begin with, let us discuss the cases where the number of degenerate fields is less than $n-3$. The classical limit can be analysed in the same way as before, the only change being a lower number of apparent singularities in the differential equation. In this case we can only determine a subset of the accessory parameters using the constraints (2.2), determining some of them as functions of the remaining parameters. The total number of independent variables in the differential equation is correspondingly larger. In the extreme case $d = 0$ we do not have any apparent singularities; the only parameters left are the independent accessory parameters. These parameters can be identified as coordinates on the cotangent fibres of the moduli space of $n$-punctured spheres having conical singularities at the punctures, with deficit angles determined by the parameters in (2.1) [TZ]. Together with the positions one gets a system of Darboux coordinates for the total space of the cotangent bundle. The holonomy of the corresponding connection defines a map from this cotangent bundle to the character variety. Parameterising points of the character variety by the complex Fenchel-Nielsen coordinates introduced in Section 2.3 one obtains a change of coordinates between the two spaces. In the case presently under consideration we will still find a semiclassical asymptotics of the form (3.13). However, the function characterising the leading term will now be independent of the variables associated to apparent singularities. This function will satisfy the relations (3.16) and the second relation in (3.14), as before. These relations identify it as generating function for the change of coordinates defined by the holonomy map in the sense of symplectic geometry. The existence of such a generating function shows that the change of coordinates defined by the holonomy map preserves the natural symplectic structures. A similar result was obtained in [Ka] for the case of higher genus surfaces. Relations of this function with conformal field theory have been discussed in [T10] and [LLNZ]. It has been proposed in [LLNZ] to compute it from the limit of the isomonodromic deformation flows in which the apparent singularities approach the punctures. The function can furthermore be used to characterise the spectrum of the $\mathfrak{sl}_2$-Gaudin model [T10, T17b]. In the context of the Nekrasov-Shatashvili program relating supersymmetric gauge theories to integrable models similar relations have been proposed in [NRS].

4.2 Relation to Liouville theory

The observations made in Section 4.1 are related to the results obtained in [TZ] describing the Weil-Petersson symplectic form on the Teichmüller spaces in terms of the Liouville action functional. The metric of constant negative curvature on the punctured sphere defines a function of the form (2.1) via the classical stress-energy tensor $t(y) = \partial_y^2\varphi - (\partial_y\varphi)^2$ of the Liouville field $\varphi$. The particular values of the residues found in this way will be denoted accordingly. The Liouville action is the functional of $\varphi$ having an extremum when the metric has constant negative curvature. Evaluating the Liouville action at this extremum defines a function of the positions of the punctures. The residues are related to its derivatives as shown in [TZ]. One may note, on the other hand, that the holonomy of the corresponding differential equation defines trace functions as above.
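The [TZ] relation referred to here is, in one common normalisation (assumed), the statement that the accessory parameters $c_k$ of the uniformising differential equation are gradients of the on-shell Liouville action $S_{\mathrm{cl}}$:

\[
c_k = -\,\frac{1}{2\pi}\,\frac{\partial S_{\mathrm{cl}}}{\partial z_k}\,,
\]

so that $S_{\mathrm{cl}}$ is a potential for the accessory parameters, which is exactly the role played by the classical conformal block in the relations above.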
Based on the semiclassical limit of conformal field theory we will argue that the function defined by restriction of the classical conformal block to this situation coincides with the Liouville action, as stated in (4.19). Indeed, the correlation functions of Liouville theory can be decomposed into conformal blocks as in (4.20) [ZZ, T01]. In the semiclassical limit one may use (3.13). The integral over intermediate representations in (4.20) will be dominated by a saddle point determined by a stationarity condition. This condition ensures that the function defined in (4.18) satisfies (4.19). It follows that the Liouville action coincides with the restricted classical conformal block up to a constant. This observation is related to the characterisation of the spectrum of the $\mathfrak{sl}_2$-Gaudin model in terms of this function [T10, T17b].

5 Conclusions: CFT as quantisation of the Garnier system

The observations above relate conformal field theory to the quantisation of the isomonodromic deformation problem. In order to quantise the Garnier system one may start by observing that both sets of variables represent Darboux coordinates for the moduli spaces of flat connections, (3.17). One may therefore consider two quantisation schemes in which the position-type variables get represented as multiplication operators, whereas the operators associated to their conjugates act as differentiation operators, acting on suitable spaces of functions. The next step will be to quantise the Hamiltonians. We will define the quantum counterparts of the Hamiltonians as solutions to the following set of constraints. As in the classical case one may solve these constraints to define second order differential operators in the variables $u_k$ if $d = n-3$. There are additional terms of lower order which ensure consistency of the construction in all cases. It is then natural to require that the quantum Hamiltonians generate the evolution with respect to the "time" variables $z_r$. These equations are easily seen to be equivalent to the null vector decoupling equations satisfied by the chiral partition functions of the conformal blocks obtained from the conformal blocks introduced in Section 3.1 by removing the punctures at $y$ and $y_0$, as was first pointed out in [T10]. Generalising [GT, Section 5.2] it is possible to show that these equations uniquely define the series expansions associated to suitable gluing patterns, with the exponents of the leading terms in the expansions specified in terms of the monodromy variables. It had previously been observed [Re] that the Knizhnik-Zamolodchikov equations appearing in CFTs with affine Lie algebra symmetry can be interpreted as the time-dependent Schrödinger equations that would appear in the quantisation of the isomonodromic deformation problem. Our observations in Section 3.3 are close analogs of the results in [Re] for the Virasoro case. Both are related to each other through a variant of Sklyanin's Separation of Variables method [T10]. The consideration of the classical limit of the Verlinde loop operators in Section 3.5 adds the crucial other side of the coin needed to get a precise characterisation of the classical conformal blocks as generating functions. The quantisation of this classical integrable system yields equations characterising the conformal blocks of the Virasoro algebra completely. Describing the conformal blocks in this way suggests reinterpreting the chiral partition functions as the wave-functions intertwining the two quantisation schemes for the quantised Garnier system introduced above. We may furthermore note that conformal field theory is related to the isomonodromic deformation problem in two limits, the limit $c \to \infty$ discussed here and the limit $c = 1$ considered in [ILTe]; see [T17a] for a review.
In the case $c = 1$ one may identify the isomonodromic tau-function with a Fourier-transformation of Virasoro conformal blocks. This is remarkable, and deserves to be better understood.

Acknowledgements. The author would like to thank N. Reshetikhin for interest in this work, and for the suggestion to include a comparison with similar results in the paper. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Centre SFB 676 "Particles, Strings and the Early Universe", project A10.
(-2, 3, 7) pretzel knot (2,3,7) triangle group (B, N) pair (a,b,0) class of distributions (p,q) shuffle (ε, δ)-definition of limit *-algebra *-autonomous category -yllion /dev/zero 0 (number) 0,1-simple lattice 0.999... 1 (number) 1 + 1 + 1 + 1 + · · · 1 + 2 + 3 + 4 + · · · 1 + 2 + 4 + 8 + · · · 1 − 2 + 3 − 4 + · · · 1 − 2 + 4 − 8 + · · · 1-center problem 1-factorable 1.96 1/2 + 1/4 + 1/8 + 1/16 + · · · 1/2 − 1/4 + 1/8 − 1/16 + · · · 1/4 + 1/16 + 1/64 + 1/256 + · · · 10 (number) 10-cube 10-demicube 10-polytope 10-simplex 100 (number) 1000 (number) 10000 (number) 100000 (number) 10000000 (number) 100000000 (number) 1000000000 (number) 1001 (number) 101 (number) 102 (number) 1024 (number) 103 (number) 104 (number) 105 (number) 106 (number) 107 (number) 108 (number) 1089 (number) 109 (number) 11 (number) 11-cell 110 (number) 111 (number) 112 (number) 113 (number) 1138 (number) 114 (number) 115 (number) 116 (number) 117 (number) 118 (number) 119 (number) 11th dimension 12 (number) 120 (number) 120-cell 121 (number) 122 (number) 123 (number) 124 (number) 124000 (number) 125 (number) 126 (number) 127 (number) 128 (number) 1289 (number) 129 (number) 13 (number) 130 (number) 131 (number) 132 (number) 133 (number) 134 (number) 135 (number) 136 (number) 137 (number) 138 (number) 139 (number) 13th root 14 (number) 140 (number) 141 (number) 142 (number) 142857 (number) 143 (number) 144 (number) 144000 (number) 145 (number) 1458 (number) 146 (number) 147 (number) 148 (number) 149 (number) 15 (number) 15 and 290 theorems 150 (number) 151 (number) 152 (number) 153 (number) 154 (number) 155 (number) 156 (number) 157 (number) 158 (number) 159 (number) 16 (number) 16-cell 160 (number) 161 (number) 162 (number) 163 (number) 164 (number) 165 (number) 166 (number) 167 (number) 168 (number) 169 (number) 17 (number) 170 (number) 1701 (number) 171 (number) 172 (number) 1728 (number) 1729 (number) 173 (number) 174 (number) 175 (number) 176 (number) 177 (number) 178 (number) 179 (number) 18 (number) 180 (number) 181 (number) 182 (number) 183 (number) 184 (number) 185 (number) 186 (number) 187 (number) 188 (number) 189 (number) 19 (number) 190 (number) 1909 (number) 191 (number) 192 (number) 193 (number) 194 (number) 195 (number) 196 (number) 197 (number) 198 (number) 1987 (number) 199 (number) 2 (number) 2 + 2 = 5 2-bridge knot 2-category 2-opt 2-satisfiability 2-sided 2-valued morphism 20 (number) 200 (number) 2000 (number) 20000 (number) 201 (number) 202 (number) 203 (number) 204 (number) 205 (number) 206 (number) 207 (number) 208 (number) 209 (number) 21 (number) 210 (number) 211 (number) 212 (number) 213 (number) 216 (number) 22 (number) 220 (number) 221 (number) 222 (number) 223 (number) 227 (number) 229 (number) 23 (number) 230 (number) 233 (number) 235 (number) 239 (number) 24 (number) 24 Game 24-cell 240 (number) 241 (number) 242 (number) 243 (number) 245 (number) 25 (number) 250 (number) 251 (number) 255 (number) 256 (number) 257 (number) 26 (number) 260 (number) 263 (number) 269 (number) 27 (number) 270 (number) 273 (number) 277 (number) 28 (number) 280 (number) 284 (number) 29 (number) 290 (number) 2D geometric model 2π theorem 3 (number) 3-jm symbol 3-manifold 3-opt 3-sphere 30 (number) 300 (number) 3000 (number) 30000 (number) 31 (number) 311 (number) 313 (number) 318 (number) 32 (number) 33 (number) 34 (number) 35 (number) 353 (number) 359 (number) 36 (number) 360 (number) 363 (number) 365 (number) 37 (number) 38 (number) 384 (number) 39 (number) 3D Life 3D projection 3SLS 3SUM 4 (number) 4-manifold 
40 (number) 400 (number) 4000 (number) 40000 (number) 40585 (number) 41 (number) 4104 (number) 418 (number) 42 (number) 420 (number) 43 (number) 44 (number) 440 (number) 444 (number) 45 (number) 46 (number) 46664 (number) 47 (number) 48 (number) 49 (number) 495 (number) 496 (number) 5 (number) 5-manifold 5-polytope 50 (number) 500 (number) 5000 (number) 50000 (number) 501 (number) 5040 (number) 51 (number) 52 (number) 53 (number) 54 (number) 55 (number) 555 (number) 56 (number) 57 (number) 57-cell 58 (number) 59 (number) 593 (number) 6 (number) 6-j symbol 6-polytope 6-sphere coordinates 60 (number) 600 (number) 600-cell 6000 (number) 60000 (number) 61 (number) 616 (number) 6174 (number) 619 (number) 62 (number) 6236 (number) 63 (number) 6346 (number) 64 (number) 64079 (number) 65 (number) 65535 (number) 65536 (number) 65537 (number) 66 (number) 666 (number) 67 (number) 68 (number) 68-95-99.7 rule 69 (number) 69105 (number) 7 (number) 7-polytope 70 (number) 700 (number) 7000 (number) 70000 (number) 702 (number) 71 (number) 715 (number) 72 (number) 720 (number) 73 (number) 74 (number) 743 (number) 75 (number) 76 (number) 77 (number) 7744 (number) 78 (number) 786 (number) 79 (number) 790 (number) 8 (number) 8-polytope 80 (number) 800 (number) 8000 (number) 80000 (number) 81 (number) 8128 (number) 82 (number) 83 (number) 836 (number) 84 (number) 85 (number) 86 (number) 87 (number) 88 (number) 880 (number) 881 (number) 883 (number) 89 (number) 9 (number) 9-j symbol 9-polytope 90 (number) 900 (number) 9000 (number) 90000 (number) 91 (number) 911 (number) 92 (number) 93 (number) 94 (number) 95 (number) 96 (number) 97 (number) 98 (number) 9814072356 (number) 99 (number) 999 (number) 9999 (number) A Beautiful Mind A Beautiful Mind (book) A Beautiful Mind (film) A Brief History of Time (film) A Course of Pure Mathematics A History of Vector Analysis A History of π A Madman Dreams of Turing Machines A Mathematical Theory of Communication A Mathematician's Apology A Million Random Digits with 100,000 Normal Deviates A New Era of Thought A Requiem for Homo Sapiens A Symbolic Analysis of Relay and Switching Circuits A derivation of the discrete Fourier transform A posteriori probability A priori (statistics) A priori probability A* search algorithm A-equivalence A-group A-paracompact space AA postulate ABC@Home ABS methods ABX test AD+ ADE classification ADHM construction AF+BG theorem AF-heap AKS primality test ANOVA-simultaneous component analysis AP Calculus AP Statistics APMonitor ARGUS distribution ASSA AIDS model ATS theorem AUSM AUTODYN AWM/MAA Falconer Lecturer Abacus Abacus logic Abacus system Abc conjecture Abel Prize Abel equation Abel polynomials Abel transform Abel's binomial theorem Abel's curve theorem Abel's identity Abel's inequality Abel's test Abel's theorem Abel's uniform convergence test Abelian Abelian and tauberian theorems Abelian category Abelian extension Abelian group Abelian integral Abelian root group Abelian variety Abelian variety of CM-type Abelian von Neumann algebra Abel–Jacobi map Abel–Ruffini theorem Aberth method Abhyankar's conjecture Abhyankar's lemma Abhyankar–Moh theorem Abjad numerals Abnormal subgroup Abouabdillah's theorem Abramowitz and Stegun Absolute Galois group Absolute Infinite Absolute continuity Absolute convergence Absolute deviation Absolute geometry Absolute presentation of a group Absolute value Absolute value (algebra) Absolute zero Absolutely convex set Absolutely irreducible Absolutely simple group Absoluteness (mathematical logic) Absorbing 
element Absorbing set Absorbing set (random dynamical systems) Absorption law Abstract Wiener space Abstract additive Schwarz method Abstract algebra Abstract algebraic logic Abstract algebraic variety Abstract analytic number theory Abstract family of acceptors Abstract family of languages Abstract index group Abstract index notation Abstract machine Abstract nonsense Abstract polytope Abstract semantic graph Abstract simplicial complex Abstract structure Abstract variety Abstraction (mathematics) Abstraction model checking Abu-Mahmud al-Khujandi Abundant number Abuse of notation Academic Games Acceleration Acceptable quality limit Accessible category Accidental sampling Accuracy and precision Accuracy paradox Achilles number Ackermann function Ackermann ordinal Ackermann set theory Ackermann–Teubner Memorial Award Acnode Acoustic analogy Acoustic source localization Acquiescence bias Acta Arithmetica Acta Mathematica Action (physics) Action algebra Active and passive transformation Active set Actor model theory Actual infinity Actuarial notation Actuarial present value Actuarial science Actuary Acyclic coloring Acyclic models theorem Acyclic object Acyclic space Aczel's anti-foundation axiom Adams Prize Adams filtration Adams hemisphere-in-a-square projection Adams operation Adams spectral sequence Adapted process Adaptive Simpson's method Adaptive mesh refinement Adaptive quadrature Adaptive simulated annealing Adaptive stepsize Addition Addition chain Addition of natural numbers Addition of natural numbers/Proofs Addition theorem Addition-chain exponentiation Addition-subtraction chain Additive Schwarz method Additive category Additive function Additive group Additive identity Additive inverse Additive number theory Additive polynomial Additive synthesis Additively indecomposable ordinal Adele ring Adelic algebraic group Adequate pointclass Adherent point Adiabatic theorem Adjacency list Adjacency matrix Adjacent Adjacent angle Adjoint Adjoint bundle Adjoint endomorphism Adjoint functors Adjoint representation Adjugate matrix Adjunction (field theory) Adjunction formula (algebraic geometry) Adjunction space Adleman–Pomerance–Rumely primality test Admissible ordinal Admissible representation Ado's theorem Adomian decomposition method Adrien Pouliot Award Advanced Numerical Research and Analysis Group Advanced Z-transform Advanced level mathematics Advances in Applied Clifford Algebras Advection Affiliated operator Affine Affine Grassmannian Affine Grassmannian (manifold) Affine Hecke algebra Affine Lie algebra Affine action Affine arithmetic Affine combination Affine connection Affine curvature Affine differential geometry Affine focal set Affine geometry Affine geometry of curves Affine group Affine hull Affine involution Affine logic Affine manifold Affine quantum group Affine representation Affine space Affine transformation African Institute for Mathematical Sciences African Mathematical Union Afrika Matematica Age (model theory) Age Standardized Mortality Rates Ageometresia Agmon's inequality Agoh–Giuga conjecture Ahlswede–Daykin inequality Airport problem Airy function Aisenstadt Prize Aitken's delta-squared process Akaike information criterion Akamai Foundation Akbulut cork Akhmim wooden tablet Akira Haraguchi Akra-Bazzi method Al-Marrakushi Alan Turing Building Albanese variety Albert algebra Alcubierre drive Alcuin Aleksandrov–Clark measure Aleph number Alexander duality Alexander horned sphere Alexander matrix Alexander polynomial Alexander's Star Alexander's trick 
Alexander-Spanier cohomology Alexandroff extension Alexandrov topology Algebra Algebra & Number Theory Algebra (disambiguation) Algebra (ring theory) Algebra Project Algebra Universalis Algebra bundle Algebra homomorphism Algebra i Logika Algebra of physical space Algebra of random variables Algebra of sets Algebra over a field Algebra representation Algebra tile Algebraic Geometry (book) Algebraic K-theory Algebraic Riccati equation Algebraic analysis Algebraic character Algebraic closure Algebraic combinatorics Algebraic connectivity Algebraic curve Algebraic cycle Algebraic differential equation Algebraic element Algebraic enumeration Algebraic equation Algebraic extension Algebraic function Algebraic geometry Algebraic geometry and analytic geometry Algebraic graph theory Algebraic group Algebraic independence Algebraic integer Algebraic link Algebraic logic Algebraic manifold Algebraic modeling language Algebraic normal form Algebraic notation Algebraic number Algebraic number field Algebraic number theory Algebraic set Algebraic signal processing Algebraic solution Algebraic space Algebraic stack Algebraic statistics Algebraic structure Algebraic surface Algebraic topology Algebraic topology (object) Algebraic torus Algebraic variety Algebraic vector bundle Algebraic-group factorisation algorithm Algebraically closed field Algebraically closed group Algebraically compact group Algebraically compact module Algebroid Algorism Algorithm Algorithm characterizations Algorithm design Algorithm examples Algorithmic Number Theory Symposium Algorithmic inference Algorithms for calculating variance Alignments of random points Aliquot Aliquot sequence Aliter All Students Take Calculus All horses are the same color All one polynomial All pairs shortest path All-pairs testing All-pay auction Allais paradox Allan variance Allegory (category theory) Alligation Almagest Almost Almost Mathieu operator Almost Ramsey cardinal Almost all Almost complex manifold Almost convergent sequence Almost disjoint sets Almost everywhere Almost flat manifold Almost ineffable cardinal Almost integer Almost perfect number Almost periodic function Almost prime Almost surely Almost symplectic manifold Alperin-Brauer-Gorenstein theorem Alpha max plus beta min algorithm Alpha recursion theory Alpha-beta pruning Altern base Alternant code Alternant matrix Alternated hypercubic honeycomb Alternating direction implicit method Alternating factorial Alternating group Alternating knot Alternating permutation Alternating polynomial Alternating series Alternating series test Alternating sign matrix Alternation (geometry) Alternative algebra Alternative hypothesis Alternative set theory Alternativity Alternatization Altitude (triangle) Amalgamation property Ambient construction Ambient isotopy Ambient space Amenable Banach algebra Amenable group Amenable number American Institute of Mathematical Sciences American Institute of Mathematics American Invitational Mathematics Examination American Journal of Mathematics American Mathematical Association of Two-Year Colleges American Mathematical Monthly American Mathematical Society American Mathematics Competitions American Regions Mathematics League American Statistical Association Amicable number Ammann–Beenker tiling Amnestic functor Amoeba (mathematics) Amortized analysis Ampersand curve Ample line bundle An Algebra for Theoretical Genetics An Exceptionally Simple Theory of Everything An inequality on location and scale parameters Analysis Situs (book) Analysis of categorical data 
Analysis of covariance Analysis of molecular variance Analysis of rhythmic variance Analysis of variance Analysis on fractals Analytic Fredholm theorem Analytic and enumerative statistical studies Analytic capacity Analytic combinatorics Analytic continuation Analytic element method Analytic function Analytic geometry Analytic manifold Analytic number theory Analytic proof Analytic semigroup Analytic set Analytic space Analytic subgroup Analytic torsion Analytic variety Analytical Society Analytical engine Analytical expression Analytical hierarchy Analytical mechanics Analytics Analytization trick Anamorphism Ancestral graph Ancestral relation Ancient Egyptian multiplication Ancient Mesopotamian units of measurement Ancillary statistic Anderson's theorem Anderson-Darling test Andreotti–Frankel theorem Andrews–Curtis conjecture Andrica's conjecture Andronov-Pontryagin criterion Angel problem Anger function Angle Angle bisector theorem Angle chasing Angle condition Angle excess Angle of parallelism Angle trisection Angular acceleration Angular diameter Angular distance Angular eccentricity Angular frequency Angular mil Angular momentum operator Angular velocity Angular velocity tensor Anicius Manlius Severinus Boethius Anisohedral tiling Ankeny-Artin-Chowla congruence Annales Academiæ Scientiarum Fennicæ Annales Henri Poincaré Annales de l'Institut Fourier Annales de la Faculté des Sciences de Toulouse Annals of Mathematics Annihilating space Annihilator (ring theory) Annihilator method Annuity function Annulus (mathematics) Anomalous cancellation Anomalous scaling dimension Anomaly time series Anonymous function Anosov diffeomorphism Ansatz Anscombe transform Anscombe's quartet Ant colony optimization Ant on a rubber rope Anthropomorphic polygon Anti de Sitter space Anti-diagonal matrix Anti-martingale Anti-racist mathematics Antiautomorphism Anticausal system Antichain Anticommutativity Antiderivative Antiderivative (complex analysis) Antifundamental representation Antiholomorphic function Antihomomorphism Antiisomorphism Antilinear map Antimagic square Antimatroid Antiparallel (mathematics) Antiparallelogram Antipodal point Antiprism Antiquarian science book Antisymmetric Antisymmetric relation Antisymmetric tensor Antisymmetrizer Antiunitary Antoine's necklace Anyon Anyonic Lie algebra Apartness relation Apeirogon Apeirogonal antiprism Apeirogonal prism Apeirohedron Aperiodic finite state automaton Aperiodic graph Aperiodic monoid Aperiodic tiling Apex (geometry) Apodization Apollonian circles Apollonian gasket Apollonian sphere packing Apollonius' theorem Apothem Apotome Appell sequence Appell series Application of tensor theory in engineering Application of tensor theory in physics Applied Econometrics and International Development Applied Probability Trust Applied mathematics Applied mechanics Applied probability Apply Approach space Approximate Bayesian computation Approximate identity Approximately finite dimensional C*-algebra Approximately finite-dimensional Approximation Approximation error Approximation in algebraic groups Approximation property Approximation theory Apéry's constant Apéry's theorem Arabic numerals Arakelov theory Arbelos Arbitrarily large Arbitrary-precision arithmetic Arborescence (graph theory) Arboricity Arc (geometry) Arc (projective geometry) Arc elasticity Arc length Arc routing Arc-transitive graph Arcadia (play) Archard equation Archimedean circle Archimedean field Archimedean group Archimedean property Archimedean solid Archimedean spiral Archimedes 
Palimpsest Archimedes number Archimedes' cattle problem Archimedes' quadruplets Archimedes' twin circles Archimedes-lab.org Archive for Rational Mechanics and Analysis Area Area compatibility factor Area of a disk Area theorem (conformal mapping) Areal velocity Areas of mathematics Arens-Fort space Arf invariant Arf invariant (knot) Arg (mathematics) Arg max Argument principle Ariadne's thread (logic) Aristarchus On the Sizes and Distances Aristotle's wheel paradox Arithmetic Arithmetic and geometric Frobenius Arithmetic coding Arithmetic combinatorics Arithmetic complexity of the discrete Fourier transform Arithmetic derivative Arithmetic dynamics Arithmetic function Arithmetic genus Arithmetic group Arithmetic hyperbolic 3-manifold Arithmetic mean Arithmetic of abelian varieties Arithmetic precision Arithmetic progression Arithmetic rope Arithmetic-geometric mean Arithmetica Arithmetica Universalis Arithmetical hierarchy Arithmetical set Arithmetization of analysis Arithmeum Arity Arm solution Armenian numerals Armstrong's axioms Arnold conjecture Arnold's cat map Arnoldi iteration Aronszajn line Aronszajn tree Arrangement of hyperplanes Arrangement of lines Arrow notation Arrow's impossibility theorem Arrowhead matrix Ars Combinatoria (journal) Ars Conjectandi Ars Magna (Gerolamo Cardano) Ars Mathematica Contemporanea Art gallery problem Art gallery theorem Art of Problem Solving Art of Problem Solving Foundation Arthur Besse Arthur conjectures Arthur–Selberg trace formula Artificial Bee Colony Algorithm Artificial immune system Artificial neural network Artin L-function Artin approximation theorem Artin billiard Artin conjecture Artin conjecture (L-functions) Artin group Artin reciprocity law Artin's conjecture on primitive roots Artin-Mazur zeta function Artin-Rees lemma Artin-Schreier theory Artin-Zorn theorem Artinian Artinian module Artinian ring Artin–Hasse exponential Artin–Wedderburn theorem Artstein's theorem Aryabhata algorithm Aryabhata equation Aryabhatiya Arzelà–Ascoli theorem Asaṃkhyeya Ascendant subgroup Ascending chain condition Ascertainment bias Ascii85 Asian Pacific Mathematics Olympiad Askey–Gasper inequality Askey–Wilson polynomials Asperity Aspherical space Assignment problem Associahedron Associated Legendre function Associated bundle Associated prime Association (statistics) Association for Symbolic Logic Association for Women in Mathematics Association of Christians in the Mathematical Sciences Association of Mathematics Teachers of India Association of Teachers of Mathematics Association scheme Associative algebra Associative magic square Associativity Associator Assortative mixing Astroid Astronomical coordinate systems Astronomical year numbering Asymmetric norm Asymmetric relation Asymptote Asymptotic analysis Asymptotic curve Asymptotic distribution Asymptotic equipartition property Asymptotic expansion Asymptotic stability Asymptotic theory Asymptotically flat spacetime Asymptotology Atan2 Atiyah–Bott fixed-point theorem Atiyah–Hirzebruch spectral sequence Atiyah–Segal completion theorem Atiyah–Singer index theorem Atkinson's theorem Atkin–Lehner theory Atlas (topology) Atlas of Lie groups and representations Atom (measure theory) Atom (order theory) Atomic formula Atomic model Atomic sentence Atoroidal Atriphtaloid Attacking Faulty Reasoning Attic numerals Attractor Attributional calculus Aubin-Lions lemma Auction algorithm Auction theory Augmentation ideal Augmented Dickey-Fuller test Augmented dodecahedron Augmented hexagonal prism Augmented matrix 
labeling Edge-matching puzzle Edge-of-the-wedge theorem Edge-transitive graph Edgeworth conjecture Edgeworth series Edholm's law Edinburgh Mathematical Society Edmonds matrix Edmonds's algorithm Edmonds-Karp algorithm Edmund F. Robertson Educational Studies in Mathematics Edyth May Sliffe Award Effect size Effective action Effective descriptive set theory Effective dimension Effective results in number theory Effectively separable Efficiency (statistics) Egorov's theorem Egyptian Mathematical Leather Roll Egyptian fraction Egyptian mathematics Egyptian numerals Ehlers group Ehrenfeucht–Fraïssé game Ehresmann connection Ehresmann's theorem Ehrhart polynomial Ehrling's lemma Eigendecomposition of a matrix Eigenface Eigenfunction Eigenplane Eigenpoll Eigenvalue algorithm Eigenvalue perturbation Eigenvalue, eigenvector and eigenspace Eight queens puzzle Eikonal approximation Eikonal equation Eilenberg's inequality Eilenberg-Ganea conjecture Eilenberg-MacLane space Eilenberg-Moore spectral sequence Eilenberg-Steenrod axioms Eilenberg–Mazur swindle Eilenberg–Zilber theorem Einstein field equations Einstein manifold Einstein notation Einstein tensor Eisenstein ideal Eisenstein integer Eisenstein prime Eisenstein series Eisenstein's criterion Eisenstein's theorem Either–or topology Elasticity of a function Electric field gradient Electric field integral equation Electrical resistivity tomography Electrogravitic tensor Electromagnetic stress-energy tensor Electromagnetic tensor Electromagnetic wave equation Element (category theory) Element (mathematics) Elementarily equivalent Elementary abelian group Elementary algebra Elementary amenable group Elementary arithmetic Elementary class Elementary divisors Elementary embedding Elementary event Elementary function Elementary group theory Elementary mathematics Elementary matrix Elementary proof Elementary reflector Elementary substructure Elementary symmetric mean Elementary symmetric polynomial Elements of Algebra Eleusis (game) Elevation (view) Elevator paradox Elias delta coding Elias gamma coding Elias omega coding Elimination theory Elkies trinomial curves Elliott–Halberstam conjecture Ellipse Ellipse/Proofs Ellipsis Ellipsoid Ellipsoid method Ellipsoidal coordinates Elliptic Curve DSA Elliptic boundary value problem Elliptic catenary Elliptic cohomology Elliptic complex Elliptic coordinates Elliptic curve Elliptic curve cryptography Elliptic curve primality proving Elliptic cylindrical coordinates Elliptic function Elliptic gamma function Elliptic geometry Elliptic hypergeometric series Elliptic integral Elliptic modulus Elliptic operator Elliptic point Elliptic rational functions Elliptic surface Elliptic unit Ellis–Nakamura lemma Ellsberg paradox Elo rating system Elongated alternated cubic honeycomb Elongated dipyramid Elongated hexagonal dipyramid Elongated pentagonal cupola Elongated pentagonal dipyramid Elongated pentagonal gyrobicupola Elongated pentagonal gyrobirotunda Elongated pentagonal gyrocupolarotunda Elongated pentagonal orthobicupola Elongated pentagonal orthobirotunda Elongated pentagonal orthocupolarotunda Elongated pentagonal pyramid Elongated pentagonal rotunda Elongated square cupola Elongated square dipyramid Elongated square gyrobicupola Elongated square pyramid Elongated triangular cupola Elongated triangular dipyramid Elongated triangular gyrobicupola Elongated triangular orthobicupola Elongated triangular prismatic honeycomb Elongated triangular pyramid Elongated triangular tiling Embedded Zerotrees of Wavelet 
transforms Embedding Embedding problem Embree-Trefethen constant Emirp Empirical Bayes method Empirical distribution function Empirical measure Empirical modelling Empirical orthogonal functions Empirical probability Empirical process Empirical statistical laws Empty domain Empty function Empty product Empty set Empty string Empty sum En (Lie algebra) Encryption Encyclopaedia of Mathematics Encyclopedia of Statistical Sciences Encyclopedia of Triangle Centers End (category theory) End (topology) Endofunction Endomorphism Endomorphism ring Endoscopic group Energetic space Energy flux Energy minimization Energy principles in structural mechanics Energy statistics Engel expansion Engel group Engel theorem Engineering cybernetics Engineering notation Engineering statistics Engineering tolerance Engineering treatment of the finite element method English numerals Enneacross Enneadecagon Enneagram Enneazetton Enneper surface Enneract Enriched category Enriques surface Enriques-Kodaira classification Enrolled Actuary Ensemble Kalman filter Entailment Entire function Entitative graph Entrainment (physics) Entropic vector Entropy Entropy (ecology) Entropy (information theory) Entropy estimation Entropy maximization Entropy power inequality Entscheidungsproblem Enumeration Enumerative combinatorics Enumerative geometry Enumerator polynomial Envelope (mathematics) Envelope theorem Enveloping von Neumann algebra Envy-free Epicycloid Epigraph (mathematics) Epimenides paradox Epimorphism Epispiral Epistemic logic Epitrochoid Epps effect Epsilon calculus Epsilon conjecture Epsilon-induction Epsilon-neighborhood Epstein zeta function EqWorld Equable shapes Equal incircles theorem Equaliser (mathematics) Equality (mathematics) Equally spaced polynomial Equals sign Equant Equating coefficients Equation Equation of State Calculations by Fast Computing Machines Equation of motion Equation solving Equations defining abelian varieties Equatorial coordinate system Equiangular lines Equiangular polygon Equianharmonic Equiareal Equiconsistency Equicontinuity Equidigital number Equidimensional Equidistributed sequence Equidistribution theorem Equilateral pentagon Equilateral polygon Equilateral triangle Equilibrium point Equinumerosity Equipossible Equipotential Equipotential surface Equiprobable Equitorium Equivalence (measure theory) Equivalence class Equivalence of categories Equivalence of metrics Equivalence relation Equivalence relations on algebraic cycles Equivalent rectangular bandwidth Equivalization Equivariant L-function Equivariant cohomology Equivariant map Erasure (logic) Erasure code Erdős cardinal Erdős conjecture Erdős conjecture on arithmetic progressions Erdős number Erdős-Diophantine graph Erdős-Nagy theorem Erdős–Anning theorem Erdős–Bacon number Erdős–Borwein constant Erdős–Burr conjecture Erdős–Faber–Lovász conjecture Erdős–Fuchs theorem Erdős–Graham conjecture Erdős–Gyárfás conjecture Erdős–Kac theorem Erdős–Ko–Rado theorem Erdős–Mordell inequality Erdős–Rényi model Erdős–Stone theorem Erdős–Straus conjecture Erdős–Szekeres theorem Erdős–Woods number Ergebnisse der Mathematik und ihrer Grenzgebiete Ergodic (adjective) Ergodic Ramsey theory Ergodic hypothesis Ergodic measure Ergodic process Ergodic sequence Ergodic theory Erlang distribution Erlangen program Ernst equation Erosion (morphology) Error analysis Error bar Error detection and correction Error function Error-correcting codes with feedback Errors and residuals in statistics Errors-in-variables model Esquisse d'un Programme Essential 
dimension Essential extension Essential manifold Essential range Essential singularity Essential spectrum Essential subgroup Essential supremum and essential infimum Essentially surjective functor Essentially unique Estimation Estimation lemma Estimation of covariance matrices Estimation of signal parameters via rotational invariance techniques Estimation theory Estimator Estrin's scheme Eta function Etemadi's inequality Etendue Eternity puzzle Ethnocomputing Ethnomathematics Etruscan numerals Euclid and his Modern Rivals Euclid number Euclid's Elements Euclid's lemma Euclid's orchard Euclid's theorem Euclid-Mullin sequence Euclidean Euclidean algorithm Euclidean distance Euclidean distance matrix Euclidean domain Euclidean field Euclidean geometry Euclidean group Euclidean minimum spanning tree Euclidean plane isometry Euclidean relation Euclidean shortest path Euclidean space Euclidean subspace Euclidean vector Eudemus of Rhodes Euler Medal Euler angles Euler brick Euler characteristic Euler class Euler diagram Euler equations Euler filter Euler function Euler hypergeometric integral Euler integral Euler line Euler method Euler number Euler on infinite series Euler operator Euler pole Euler product Euler pseudoprime Euler summation Euler system Euler's conjecture Euler's continued fraction formula Euler's criterion Euler's disk Euler's equation of degree four Euler's equations Euler's factorization method Euler's formula Euler's four-square identity Euler's identity Euler's rotation theorem Euler's rule Euler's spiral Euler's sum of powers conjecture Euler's theorem Euler's theorem (differential geometry) Euler's theorem in geometry Euler's totient function Euler-Jacobi pseudoprime Euler-Maruyama method Euler-Tricomi equation Eulerian number Eulerian path Euler–Lagrange equation Euler–Maclaurin formula Euler–Mascheroni constant Euler–Poisson–Darboux equation Euler–Rodrigues parameters Eureka (magazine) European Congress of Mathematics European Journal of Combinatorics European Mathematical Society European numerals Evacuation process simulation Evaluating sums Even and odd functions Even and odd ordinals Even code Even-hole-free graph Even-odd rule Evenness of zero Event (probability theory) Event calculus Event generator Eventually (mathematics) Everyday Mathematics Evidence under Bayes theorem Evolute Evolution and the Theory of Games Evolution strategy Evolutionarily stable set Evolutionarily stable state Evolutionary algorithm Evolutionary game theory Evolutionary graph theory Evolutionary programming Ewald summation Ewens's sampling formula Exact category Exact coloring Exact differential Exact differential equation Exact functor Exact sequence Exact solution Exact test Exact trigonometric constants Exactly solvable model Example of a commutative non-associative magma Example of a non-associative algebra Examples of Markov chains Examples of boundary value problems Examples of differential equations Examples of generating functions Examples of groups Examples of vector spaces Excellent ring Exceptional divisor Exceptional inverse image functor Exceptional object Excess risk Excess-3 Exchange matrix Exchange paradox Exchangeable random variables Excision theorem Excitable medium Excluded point topology Exclusive Exclusive or Exhaustion by compact sets Existence theorem Existential graph Existential quantification Existentially closed model Exner equation Exotic R4 Exotic probability Exotic sphere Expander graph Expander mixing lemma Expander walk sampling Expansion (geometry) 
Expansive Expectation-maximization algorithm Expected gain Expected return Expected utility hypothesis Expected value Expected value of perfect information Expenditure minimization problem Experiment (probability theory) Experimental Mathematics (journal) Experimental mathematics Experimentwise error rate Explained sum of squares Explained variance Explained variation Explicit and implicit methods Explicit formula Exploratory data analysis Exponential Exponential decay Exponential dichotomy Exponential dispersion model Exponential distribution Exponential error Exponential factorial Exponential family Exponential formula Exponential function Exponential growth Exponential integral Exponential map Exponential object Exponential power distribution Exponential sheaf sequence Exponential smoothing Exponential sum Exponential time Exponential-Golomb coding Exponentially equivalent measures Exponentiation Exponentiation by squaring Expression (mathematics) Exsecant Ext functor Extended Euclidean algorithm Extended Kalman filter Extended Newsvendor models Extended finite element method Extended negative binomial distribution Extended real number line Extendible cardinal Extensible automorphism Extension (mathematics) Extension (predicate logic) Extension and contraction of ideals Extension by definitions Extension of scalars Extension topology Extensionality Extensions of symmetric operators Exterior (topology) Exterior algebra Exterior angle theorem Exterior bundle Exterior covariant derivative Exterior derivative External (mathematics) External ray Extouch triangle Extra special group Extractor Extraneous and missing solutions Extrapolation Extravagant number Extremal combinatorics Extremal graph theory Extremal length Extremal optimization Extremally disconnected space Extreme physical information Extreme point Extreme value Extreme value theorem Extreme value theory Eye of Horus Eötvös effect E₆ E₇ E₈ E₈ lattice E₈ manifold E∞-operad F-algebra F-coalgebra F-distribution F-divergence F-space F-test F-theory F. and M. 
Riesz theorem F1 Score FBI transform FC-group FCVO FETI FETI-DP FISH (cipher) FK-AK space FK-space FKG inequality FLAME clustering FNN algorithm FOIL rule FSU Young Scholars Program FTCS scheme FWL theorem Face (geometry) Face configuration Face diagonal Facet (mathematics) Facetting Fachinformationszentrum Karlsruhe Facility location Factor analysis Factor base Factor graph Factor of automorphy Factor theorem Factor-critical graph Factoradic Factorial Factorial code Factorial experiment Factorial moment Factorial moment generating function Factorial prime Factorion Factorization Factorization lemma Factorization system Faculty of Mathematics, University of Cambridge Fair coin Fair division Faithful representation Fake 4-ball Falconer's formula False (Unix) False discovery rate False position method False positive paradox False precision Faltings' theorem Family (mathematics) Family automorphism Family of curves Family of sets Familywise error rate Fano factor Fano plane Fano resonance Fano variety Fano's inequality Farey sequence Farkas' lemma Faro shuffle Farrell–Jones conjecture Fary-Milnor theorem Fast Fourier Transform Telescope Fast Fourier transform Fast Kalman filter Fast Library for Number Theory Fast Walsh–Hadamard transform Fast marching method Fast multipole method Fast probability integration Fast wavelet transform FastICA Fat tail Fatou's lemma Fatou's theorem Fatou-Bieberbach domain Fatou–Lebesgue theorem Faugère F4 algorithm Faulhaber's formula Faustmann's formula Favard constant Favard operator Faà di Bruno's formula Feasible generalized least squares Feature vector Feebly compact space Feedback arc set Feedback linearization Feedback vertex set Feferman–Schütte ordinal Feigenbaum constants Feigenbaum function Feit-Thompson conjecture Feit–Thompson theorem Fejér kernel Fejér's theorem Fekete polynomial Feld-Tai lemma Felix Klein Protocols Feller process Feller's coin-tossing constants Feller-continuous process Fermat Prize Fermat cubic Fermat curve Fermat number Fermat point Fermat polygonal number theorem Fermat primality test Fermat's Enigma Fermat's Last Theorem Fermat's Last Theorem in fiction Fermat's factorization method Fermat's little theorem Fermat's principle Fermat's spiral Fermat's theorem Fermat's theorem (stationary points) Fermat's theorem on sums of two squares Fermat–Apollonius circle Fermi coordinates Fermi's golden rule Fermi-Walker differentiation Fermi–Dirac statistics Fermi–Pasta–Ulam problem Fermi–Ulam model Fernique's theorem Feynman graph Feynman parametrization Feynman point Feynman-Kac formula Fiber (mathematics) Fiber bundle Fiber bundle construction theorem Fiber derivative Fibered knot Fibered manifold Fibonacci Quarterly Fibonacci coding Fibonacci family Fibonacci fractal Fibonacci heap Fibonacci number Fibonacci numbers in popular culture Fibonacci polynomials Fibonacci prime Fibonacci word Fibonacci's identity Fibrant object Fibration Fibred category Fictionalism Fiducial inference Field (mathematics) Field arithmetic Field extension Field line Field norm Field of definition Field of fractions Field of sets Field of values Field trace Field with one element Fielden Chair of Pure Mathematics Fields Institute Fields Medal Fieller's theorem Fierz identity Fifteen puzzle Fifth dimension Figurate number Figure-eight knot (mathematics) Filled Julia set Filling area conjecture Filling radius Filter (mathematics) Filter bank Filtered algebra Filtered category Filtering problem (stochastic processes) Filtration (mathematics) Final topology Final 
value theorem Financial modeling Financial risk management Fine topology (potential theory) Finger binary Finger counting Finitary Finite Finite difference Finite difference method Finite dimensional von Neumann algebra Finite element method Finite element method in structural mechanics Finite field Finite field arithmetic Finite geometry Finite group Finite intersection property Finite mathematics Finite model theory Finite morphism Finite rank operator Finite set Finite state machine Finite strain theory Finite thickness Finite topological space Finite topology Finite type invariant Finite volume method Finite-difference time-domain method Finite-dimensional distribution Finitely generated abelian group Finitely generated algebra Finitely generated module Finitism Finitistic induction Finnish numerals Finsler manifold First Hurwitz triplet First derivative test First fundamental form First order partial differential equation First uncountable ordinal First variation First variation of area formula First-countable space First-hitting-time model First-order hold First-order logic First-order predicate Fischer group Fish curve Fishburn–Shepp inequality Fisher consistency Fisher information Fisher information metric Fisher kernel Fisher transformation Fisher's equation Fisher's exact test Fisher's inequality Fisher's method Fisher's noncentral hypergeometric distribution Fisher's z-distribution Fisher-Tippett distribution Fisher-Yates shuffle Fisher–Tippet–Gnedenko theorem Fitch-style calculus Fitness function Fitness model Fitting lemma Fitting length Fitting subgroup Fitting's theorem FitzHugh–Nagumo model Five Equations That Changed the World: The Power and Poetry of Mathematics Five circles theorem Five color theorem Five lemma Five-number summary Five-point stencil Five-pointed star Five-term exact sequence Fixation index Fixed effects estimation Fixed point Fixed point (mathematics) Fixed point combinator Fixed point index Fixed point iteration Fixed point property Fixed point space Fixed point theorem Fixed point theorems in infinite-dimensional spaces Fixed points of isometry groups in Euclidean space Fixed-point lemma for normal functions Flag (geometry) Flag (linear algebra) Flat (geometry) Flat manifold Flat map Flat module Flat morphism Flat topology Flatland Flatness Flatterland Fleiss' kappa Fleming-Viot process Flexagon Flexible polyhedron Flip (algebraic geometry) Flipped SU(5) Floating point Floer homology Flooding algorithm Floor and ceiling functions Floorplan (microelectronics) Flop (algebraic geometry) Floquet theory Floret pentagonal tiling Flow (mathematics) Flow network Flow velocity Flower of Life Floyd's triangle Floyd–Warshall algorithm Fluctuation theorem Fluent (artificial intelligence) Fluent calculus Flux Flux limiter Flype Focal surface Fock matrix Fock space Fock state Focus (geometry) Fodor's lemma Fokker periodicity blocks Fokker–Planck equation Folded normal distribution Folded spectrum method Foliation Folium of Descartes Folk mathematics Folk theorem Fondements de la Géometrie Algébrique For Want of a Nail (proverb) Forbidden graph characterization Force-based algorithms Forcing (mathematics) Forcing (recursion theory) Ford circle Ford-Fulkerson algorithm Forecast error Forecasting Forest plot Forgetful functor Fork (topology) Forking extension Form constant Formal calculation Formal concept analysis Formal derivative Formal grammar Formal group Formal language Formal moduli Formal power series Formal proof Formal scheme Formal system Formally real field 
Formation matrix Formula Formula calculator Formula for primes Formulario mathematico Fort space Fortunate number Fortunate prime Fortune's algorithm Forward chaining Forward-backward algorithm Foster graph Foster's theorem Foundations of mathematics Foundations of statistics Fountain code Four color theorem Four dimensionalism Four exponentials conjecture Four fours Four-gradient Four-vector Four-vertex theorem Fourier Fourier algebra Fourier analysis Fourier integral operator Fourier inversion theorem Fourier operator Fourier optics Fourier pair Fourier series Fourier theorem Fourier transform Fourier transform on finite groups Fourier transform spectroscopy Fourier–Bessel series Fourier–Motzkin elimination Fourth dimension Fourth power Fox derivative Fox n-coloring Fractal Fractal analysis Fractal antenna Fractal art Fractal compression Fractal cosmology Fractal dimension Fractal dimension on networks Fractal flame Fractal generating software Fractal in soil mechanics Fractal lake Fractal landscape Fractal sequence Fractal transform Fractics Fractint Fraction (mathematics) Fraction of variance unexplained Fractional Brownian motion Fractional Fourier transform Fractional calculus Fractional cascading Fractional coloring Fractional differential equation Fractional ideal Fractional order control Fractional order integrator Fractional quantum mechanics Fracton Frame bundle Frame of a vector space Framed knot Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student Frank–Wolfe algorithm Fransén-Robinson constant Frattini subgroup Frattini's argument Fraysseix–Rosenstiehl's planarity criterion Fraňková–Helly selection theorem Frederick W. Lanchester Prize Fredholm alternative Fredholm determinant Fredholm integral equation Fredholm kernel Fredholm module Fredholm operator Fredholm theory Fredholm's theorem Free Boolean algebra Free Lie algebra Free abelian group Free algebra Free fraction Free functor Free group Free ideal ring Free lattice Free loop Free module Free monoid Free object Free probability Free product Free regular set Free variables and bound variables Free-by-cyclic group Freedman-Diaconis rule Freeform surface modelling Frege's propositional calculus Frege's theorem Freidlin-Wentzell theorem Freiheitssatz Freiling's axiom of symmetry Freiman's theorem Freivald's algorithm French curve French mathematical seminars Frenet–Serret formulas Frequency (statistics) Frequency distribution Frequency domain Frequency partition Frequency probability Frequency spectrum Freshman's dream Fresnel equations Fresnel integral Freudenthal magic square Freudenthal suspension theorem Friedel's law Friedlander–Iwaniec theorem Friedman number Friedman test Friedrichs extension Friedrichs' inequality Friendly number Friendly-index set Frieze group Frobenius algebra Frobenius endomorphism Frobenius group Frobenius matrix Frobenius method Frobenius normal form Frobenius pseudoprime Frobenius reciprocity theorem Frobenius solution to the hypergeometric equation Frobenius theorem Frobenius theorem (differential topology) Frobenius theorem (real division algebras) Frobenius-Schur indicator From Here to Infinity (book) Frontal solver Frostman lemma Frugal number Frustum Fréchet algebra Fréchet derivative Fréchet distance Fréchet distribution Fréchet filter Fréchet manifold Fréchet space Fréchet surface Frénicle standard form Fröhlich Prize Fröhlicher spectral sequence Frölicher space Frölicher–Nijenhuis bracket Fubini's theorem Fubini–Study metric Fuchs's theorem 
Fuchsian group Fuchsian model Fuglede's theorem Fuhrmann circle Fujiki class C Fujita conjecture Fulkerson Prize Full and faithful functors Full employment theorem Full reptend prime Full state feedback Full width at half maximum Fully characteristic subgroup Fully invariant subgroup Fully normalized subgroup Fulton–Hansen connectedness theorem Function (mathematics) Function application Function approximation Function composition Function field Function field (scheme theory) Function field of an algebraic variety Function of a real variable Function problem Function representation Function series Function space Functional (mathematics) Functional analysis Functional calculus Functional completeness Functional data analysis Functional decomposition Functional derivative Functional determinant Functional equation Functional equation (L-function) Functional integration Functional predicate Functional square root Functional-theoretic algebra Functionally graded element Functor Functor category Fundamenta Mathematicae Fundamental axiom of analysis Fundamental class Fundamental discriminant Fundamental domain Fundamental frequency Fundamental group Fundamental lemma of calculus of variations Fundamental lemma of sieve theory Fundamental pair of periods Fundamental plane (spherical coordinates) Fundamental polygon Fundamental recurrence formulas Fundamental representation Fundamental solution Fundamental theorem Fundamental theorem of Galois theory Fundamental theorem of Riemannian geometry Fundamental theorem of algebra Fundamental theorem of arithmetic Fundamental theorem of calculus Fundamental theorem of combinatorial enumeration Fundamental theorem of curves Fundamental theorem of cyclic groups Fundamental theorem of finitely generated abelian groups Fundamental theorem of linear algebra Fundamental theorem on homomorphisms Fundamental unit (number theory) Fundamental vector field Furstenberg's proof of the infinitude of primes Further Mathematics Futoshiki Fuzzy game Fuzzy logic Fuzzy matrix theory Fuzzy measure theory Fuzzy set Fuzzy set operations Fuzzy sphere Fuzzy transportation Fáry's theorem Følner sequence Fürer's algorithm Fσ set F₄ G-delta space G-networks G-structure G-test G2 manifold G2-structure GCD domain GEH GEOS circle GEUP GF(2) GRE Mathematics Test GS8 Braille Gabor atom Gabor filter Gabor transform Gabor-Wigner transform Gabow's algorithm Gabriel graph Gabriel's Horn Gain graph Gain group Galerkin method Galilean transformation Galileo's paradox Gall-Peters projection Gallery of curves Gallery of named graphs Galley division Galois cohomology Galois connection Galois extension Galois group Galois module Galois theory Galois/Counter Mode Galton's problem Galton–Watson process Gambler's conceit Gambler's fallacy Gambler's ruin Gambling and information theory Game complexity Game of chance Game semantics Game theory Game tree Gaming mathematics Gamma distribution Gamma function Gamma matrices Gamma process Gamma test (statistics) Ganea conjecture Gantt chart Gap theorem (disambiguation) Garden of Eden pattern Garside element Gauge covariant derivative Gauge function Gauge theory Gauged supergravity Gauss circle problem Gauss map Gauss sum Gauss' law for gravity Gauss's constant Gauss's law Gauss's lemma Gauss's lemma (Riemannian geometry) Gauss's lemma (number theory) Gauss's lemma (polynomial) Gauss-Codazzi equations (relativity) Gauss-Jacobi Mechanical Quadrature Gaussian binomial Gaussian curvature Gaussian elimination Gaussian free field Gaussian function Gaussian 
integer Gaussian integral Gaussian isoperimetric inequality Gaussian measure Gaussian period Gaussian polar coordinates Gaussian prime Gaussian process Gaussian quadrature Gaussian quantum Monte Carlo Gaussian random field Gaussian rational Gaussian surface Gauss–Bonnet theorem Gauss–Codazzi equations Gauss–Hermite quadrature Gauss–Jordan elimination Gauss–Kronrod quadrature formula Gauss–Kuzmin distribution Gauss–Kuzmin–Wirsing operator Gauss–Laguerre quadrature Gauss–Legendre algorithm Gauss–Lucas theorem Gauss–Manin connection Gauss–Markov Gauss–Markov process Gauss–Markov theorem Gauss–Newton algorithm Gauss–Seidel method Geary's C Gegenbauer polynomials Gelfand pair Gelfand representation Gelfand-Mazur theorem Gelfand-Naimark-Segal construction Gelfand–Naimark theorem Gelfond's constant Gelfond–Schneider constant Gelfond–Schneider theorem Gell-Mann matrices Genealogical numbering systems General Algebraic Modeling System General Matrix Multiply General covariance General frame General linear group General linear model General number field sieve General position General relativity General set theory General topology Generalised Morse sequence Generalised circle Generalised hyperbolic distribution Generalised logistic function Generalised metric Generalizability theory Generalization (logic) Generalization error Generalizations of Fibonacci numbers Generalizations of Pauli matrices Generalized Appell polynomials Generalized Dirichlet distribution Generalized Fitting subgroup Generalized Fourier series Generalized Gauss-Bonnet theorem Generalized Gaussian distribution Generalized Gauss–Newton method Generalized Helmholtz theorem Generalized Jacobian Generalized Kac–Moody algebra Generalized Korteweg-de Vries equation Generalized Ozaki cost function Generalized Pochhammer symbol Generalized Poincaré conjecture Generalized Procrustes analysis Generalized Riemann hypothesis Generalized Verma module Generalized Wiener process Generalized additive model Generalized arithmetic progression Generalized canonical correlation Generalized complex structure Generalized continued fraction Generalized coordinates Generalized eigenvector Generalized extreme value distribution Generalized f-mean Generalized flag variety Generalized function Generalized game Generalized inverse Generalized inverse Gaussian distribution Generalized linear array model Generalized linear model Generalized mean Generalized method of moments Generalized minimal residual method Generalized multidimensional scaling Generalized n-gon Generalized permutation matrix Generalized quadrangle Generalized quaternion interpolation Generalized singular value decomposition Generalized star height problem Generalized taxicab number Generating function Generating primes Generating set Generating set of a group Generating trigonometric tables Generator (category theory) Generator matrix Generic filter Generic point Generic polynomial Generic property Generic scalar transport equation Genetic algorithm Genetic algorithm in economics Genetic operator Genetic programming Genocchi number Genocchi prime Gentleman's Diary Gentzen's consistency proof Genus (mathematics) Genus field Genus of a multiplicative sequence Genus-2 surface GeoGebra Geoboard Geocentric coordinates Geodemographic segmentation Geodesic Geodesic convexity Geodesic curvature Geodesic deviation equation Geodesic dome Geodesic grid Geodesic manifold Geodesic map Geodesics as Hamiltonian flows Geombinatorics Geometriae Dedicata Geometric Brownian motion Geometric algebra Geometric 
analysis Geometric and Functional Analysis (journal) Geometric calculus Geometric combinatorics Geometric continuity Geometric data analysis Geometric distribution Geometric flow Geometric function theory Geometric genus Geometric graph theory Geometric group action Geometric group theory Geometric hashing Geometric integrator Geometric invariant theory Geometric mean Geometric measure theory Geometric median Geometric model Geometric primitive Geometric programming Geometric progression Geometric quantization Geometric series Geometric spanner Geometric standard deviation Geometric topology Geometric topology (object) Geometric-harmonic mean Geometrical optics Geometrization conjecture Geometry Geometry & Topology Geometry Expressions Geometry and topology Geometry of interaction Geometry of numbers Geometry processing Geometry template Gerbe Germ (mathematics) German Mathematical Society German tank problem Gershgorin circle theorem Gerstenhaber algebra Gesellschaft für Angewandte Mathematik und Mechanik Ghost Leg Ghosts of departed quantities Giant component Gibbard–Satterthwaite theorem Gibbs algorithm Gibbs lemma Gibbs measure Gibbs paradox Gibbs phenomenon Gibbs sampling Gibbs state Gibbs' inequality Gibbs' phase rule Gibbs-Helmholtz equation Gift wrapping algorithm Gigantic prime Gilbert-Varshamov bound Gilbert–Johnson–Keerthi distance algorithm Gilbreath's conjecture Gillespie algorithm Gimel function Gingerbreadman map Gini coefficient Girih tiles Girsanov theorem Girth (graph theory) Girvan-Newman algorithm Gittins index Giuga number Given, Required, Analysis, Solution, and Paraphrase Givens rotation Glaisher's theorem Glaisher-Kinkelin constant Gleason's theorem Glide plane Glide reflection Glivenko's theorem Glivenko-Cantelli theorem Global analytic function Global dimension Global element Global field Global optimization Global optimum Global square Global symmetry Glossary of Riemannian and metric geometry Glossary of arithmetic and Diophantine geometry Glossary of category theory Glossary of classical physics Glossary of differential geometry and topology Glossary of experimental design Glossary of field theory Glossary of game theory Glossary of graph theory Glossary of group theory Glossary of order theory Glossary of probability and statistics Glossary of ring theory Glossary of scheme theory Glossary of semisimple groups Glossary of shapes with metaphorical names Glossary of systems theory Glossary of tensor theory Glossary of topology Gluing axiom Gnomon (figure) Gnomonic projection Go and mathematics Goal node (computer science) Goal programming God Created the Integers God's algorithm Goddard–Thorn theorem Godement resolution Godunov's scheme Godunov's theorem Goertzel algorithm Going up and going down Golay code Gold code Goldbach's comet Goldbach's conjecture Goldbach's weak conjecture Goldbach–Euler theorem Goldbeter-Koshland kinetics Golden angle Golden function Golden ratio Golden ratio base Golden rectangle Golden rhombus Golden section search Golden spiral Golden triangle (mathematics) Golden–Thompson inequality Goldie's theorem Goldstine theorem Golod–Shafarevich theorem Golomb coding Golomb ruler Golomb sequence Golomb–Dickman constant Golygon Gomory's theorem Gompertz function Gompertz-Makeham law of mortality Gonality of an algebraic curve Good Will Hunting Good prime Goodman and Kruskal's lambda Goodman relation Goodman-Nguyen-van Fraassen algebra Goodness of fit Goodstein's theorem Good–Turing frequency estimation Googol Googolplex Goormaghtigh conjecture 
Goppa code Gordon-Luecke theorem Gorenstein ring Gosper curve Gosset 1 22 polytope Gosset 1 32 polytope Gosset 1 33 honeycomb Gosset 1 42 polytope Gosset 1 52 honeycomb Gosset 2 21 polytope Gosset 2 22 honeycomb Gosset 2 31 polytope Gosset 2 41 polytope Gosset 2 51 honeycomb Gosset 3 21 polytope Gosset 3 31 honeycomb Gosset 4 21 polytope Goursat's lemma Graceful labeling Grad (angle) Grad-Shafranov equation Grade (slope) Graded Lie algebra Graded algebra Graded category Graded poset Graded vector space Gradient Gradient conjecture Gradient descent Gradient theorem Gradient-related Graduate Studies in Mathematics Graduate Texts in Mathematics Graeco-Latin square Graeffe's method Graftal Graham scan Graham's number Gramian matrix Grammar-based code Gram–Schmidt process Grand 120-cell Grand 600-cell Grand Riemann hypothesis Grand antiprism Grand mean Grand stellated 120-cell Grand supercycle Grandi's series Grandi's series in education Granger causality Graph (data structure) Graph (mathematics) Graph automorphism Graph canonization Graph center Graph coloring Graph cuts in computer vision Graph drawing Graph embedding Graph enumeration Graph genus Graph homomorphism Graph isomorphism Graph labeling Graph manifold Graph of a function Graph of groups Graph operations Graph paper Graph partition Graph pebbling Graph product Graph property Graph reduction Graph rewriting Graph theory Graph toughness Graph transformation Graph traversal Graphical comparison of musical scales and mathematical progressions Graphical model Graphical projection Grassmann number Grassmannian Grassmannian (disambiguation) Grassmann–Cayley algebra Gravitational energy Gravitational instanton Gravitational singularity Gravity (social science methodology) Gravity set Gray code Gray graph Great 120-cell Great Deluge algorithm Great Internet Mersenne Prime Search Great circle Great cubicuboctahedron Great deltoidal icositetrahedron Great dirhombicosidodecahedron Great disnub dirhombidodecahedron Great ditrigonal dodecicosidodecahedron Great ditrigonal icosidodecahedron Great dodecahedron Great dodecahemicosahedron Great dodecahemidodecahedron Great dodecicosahedron Great dodecicosidodecahedron Great ellipse Great grand 120-cell Great grand stellated 120-cell Great hexacronic icositetrahedron Great icosahedral 120-cell Great icosahedron Great icosicosidodecahedron Great icosidodecahedron Great icosihemidodecahedron Great inverted snub icosidodecahedron Great retrosnub icosidodecahedron Great rhombic triacontahedron Great rhombidodecahedron Great rhombihexahedron Great rhombitriheptagonal tiling Great rhombitrihexagonal tiling Great snub dodecicosidodecahedron Great snub icosidodecahedron Great stellated 120-cell Great stellated dodecahedron Great stellated truncated dodecahedron Great truncated cuboctahedron Great truncated icosidodecahedron Great-circle distance Greatest common divisor Greatest common divisor of two polynomials Greatest element Greatest fixed point Greedoid Greedy algorithm Greedy algorithm for Egyptian fractions Greedy coloring Greedy randomized adaptive search procedure Greek letters used in mathematics, science, and engineering Greek mathematics Greek numerals Green formula Green measure Green's function Green's function for the three-variable Laplace equation Green's identities Green's matrix Green's relations Green's theorem Green-Kubo relations Green–Tao theorem Gregory number Greibach normal form Gretl Grid cell topology Grid method Griess algebra Grigorchuk group Grimm's conjecture Gromov norm 
Gromov product Gromov's compactness theorem Gromov's compactness theorem (geometry) Gromov's compactness theorem (topology) Gromov's inequality for complex projective space Gromov's systolic inequality for essential manifolds Gromov's theorem on groups of polynomial growth Gromov-Ruh theorem Gromov–Hausdorff convergence Gromov–Witten invariant Gross (unit) Grosshans subgroup Grothendieck connection Grothendieck group Grothendieck inequality Grothendieck space Grothendieck spectral sequence Grothendieck topology Grothendieck universe Grothendieck's Galois theory Grothendieck's Séminaire de géométrie algébrique Grothendieck's connectedness theorem Grothendieck's relative point of view Grothendieck–Hirzebruch–Riemann–Roch theorem Grothendieck–Katz p-curvature conjecture Ground expression Groundwater flow equation Group (mathematics) Group Hopf algebra Group action Group algebra Group code Group cohomology Group extension Group homomorphism Group isomorphism Group isomorphism problem Group method of data handling Group object Group of Lie type Group representation Group ring Group scheme Group theory Group velocity Group with operators Groupoid Grover's algorithm Growth curve Growth rate (group theory) Grubbs' test for outliers Grundy's game Grundzüge der Mengenlehre Grunwald-Letnikov differintegral Grunwald-Wang theorem Grushko theorem Grzegorczyk hierarchy Gröbner basis Grönwall's inequality Grötzsch graph Guard digit Gudermannian function Guess value Guesstimate Guide to Available Mathematical Software Guided Local Search Guillotine problem Gumbel distribution Gun (cellular automaton) Guttman scale Guy Medal Guyou hemisphere-in-a-square projection Gy's sampling theory Gyrate bidiminished rhombicosidodecahedron Gyrate rhombicosidodecahedron Gyrated tetrahedral-octahedral honeycomb Gyrated triangular prismatic honeycomb Gyration Gyration tensor Gyrobifastigium Gyroelongated alternated cubic honeycomb Gyroelongated dipyramid Gyroelongated pentagonal bicupola Gyroelongated pentagonal birotunda Gyroelongated pentagonal cupola Gyroelongated pentagonal cupolarotunda Gyroelongated pentagonal pyramid Gyroelongated pentagonal rotunda Gyroelongated square bicupola Gyroelongated square cupola Gyroelongated square dipyramid Gyroelongated square pyramid Gyroelongated triangular bicupola Gyroelongated triangular cupola Gyroelongated triangular prismatic honeycomb Gyroid Gyrovector space Gysin sequence Gâteaux derivative Gårding domain Gårding's inequality Gödel Prize Gödel number Gödel numbering for sequences Gödel's completeness theorem Gödel's incompleteness theorems Gödel, Escher, Bach Gödel–Gentzen negative translation Gömböc Gδ set G₂ H square H tree H-cobordism H-derivative H-index H-infinity methods in control theory H-space H-theorem HN group HNN extension HOMFLY polynomial Haag's theorem Haagerup property Haaland equation Haar measure Haar wavelet Haboush's theorem Hackenbush Hadamard code Hadamard conjecture Hadamard finite part integral Hadamard manifold Hadamard matrix Hadamard three-circle theorem Hadamard transform Hadamard variance Hadamard's dynamical system Hadamard's inequality Hadamard's lemma Hadjicostas's formula Hadwiger conjecture (graph theory) Hadwiger's theorem Hadwiger–Finsler inequality Hadwiger–Nelson problem Haefliger structure Hafner-Sarnak-McCurley constant Hahn decomposition theorem Hahn embedding theorem Hahn polynomials Hahn–Banach theorem Hahn–Kolmogorov theorem Hairy ball theorem Haken manifold Halbert L. 
Dunn Award Hales–Jewett theorem Half circle distribution Half range Fourier series Half-integer Half-life Half-logistic distribution Half-normal distribution Half-period ratio Half-space Halin graph Hall polynomial Hall subgroup Hall's theorem Hall's universal group Halley's method Hall–Janko graph Hall–Janko group Hall–Littlewood polynomial Halpern-Lauchli theorem Halting problem Halton sequence Ham sandwich theorem Hamburger moment problem Hamilton Mathematics Institute Hamilton institute Hamilton's principle Hamiltonian Hamiltonian (control theory) Hamiltonian (quantum mechanics) Hamiltonian group Hamiltonian matrix Hamiltonian mechanics Hamiltonian path Hamiltonian path problem Hamiltonian vector field Hamilton–Jacobi equation Hamilton–Jacobi–Bellman equation Hamming bound Hamming code Hamming distance Hamming graph Hamming space Hamming weight Hamming(7,4) Hampshire College Summer Studies in Mathematics Hanan grid Handbook of Automated Reasoning Handedness and mathematical ability Handle (mathematics) Handle decomposition Handlebody Handwaving Hankel contour Hankel matrix Hankel singular value Hankel transform Hann function Hanna Neumann conjecture Hanner's inequalities Happy Ending problem Happy number Happy prime Harada-Norton group Harald Ganzinger Harary's generalized tic-tac-toe Hard hexagon model Hardness of approximation Hardy notation Hardy space Hardy's inequality Hardy's theorem Hardy-Littlewood maximal function Hardy-Weinberg principle Hardy–Littlewood circle method Hardy–Ramanujan theorem Harish-Chandra Research Institute Harish-Chandra character Harish-Chandra class Harish-Chandra homomorphism Harmonic Harmonic (mathematics) Harmonic analysis Harmonic conjugate Harmonic division Harmonic divisor number Harmonic function Harmonic map Harmonic mean Harmonic measure Harmonic number Harmonic oscillator Harmonic polynomial Harmonic series (mathematics) Harmonic spectrum Harmonic wavelet transform Harmonices Mundi Harmonious coloring Harmonograph Harmony search Harnack's curve theorem Harnack's inequality Harnack's principle Harris chain Harshad number Hartley transform Hartley's test Hartman-Grobman theorem Hartogs number Hartogs' lemma Hartogs' theorem Hartree equation Hartree-Fock Harvard-MIT Mathematics Tournament Hasegawa-Mima equation Hasse diagram Hasse invariant of a quadratic form Hasse norm theorem Hasse principle Hasse's theorem Hasse's theorem on elliptic curves Hasse-Weil zeta function Hasse-Witt matrix Hasse–Davenport relation Hasse–Minkowski theorem Hat matrix Hat operator Hatch mark Hauksbók Hauptvermutung Hausdorff density Hausdorff dimension Hausdorff distance Hausdorff maximal principle Hausdorff measure Hausdorff moment problem Hausdorff paradox Hausdorff space Hausman specification test Hausman test Haven (graph theory) Haversine formula Hawaiian earring Hazard function Hazard ratio Heap (mathematics) Heaps' law Hearing the shape of a drum Heat equation Heath-Brown–Moroz constant Heaviside cover-up method Heaviside step function Heavy-tailed distribution Heawood conjecture Heawood graph Heawood number Hebesphenomegacorona Hebrew numerals Hecke algebra Hecke character Hecke operator Heckman correction Hectagonal number Hedetniemi's conjecture Hedgehog space Heegaard splitting Heegner number Heegner point Heesch's problem Height (ring theory) Height of a polynomial Heilbronn triangle problem Heine's identity Heine–Borel theorem Heine–Cantor theorem Heinz mean Heisenberg group Heisenberg model Heisenberg picture Hekat (volume) Held group Helicoid Helix 
Moody's Mega Math Challenge Moore curve Moore determinant Moore graph Moore method Moore neighborhood Moore plane Moore space Moore space (algebraic topology) Moore space (topology) Moore's law Moore-Penrose pseudoinverse Moral certainty Moral graph Moran's I Morava K-theory Mordell conjecture Mordell curve Mordell–Weil theorem More maths grads Moreau's necklace-counting function Moreau's theorem Morera's theorem Morgan Prize Morin surface Morita conjectures Morita equivalence Morlet wavelet Morley rank Morley's categoricity theorem Morley's theorem Morley's trisector theorem Morphism Morphological computation Morphological computation (robotics) Morrie's law Morse homology Morse theory Morse-Palais lemma Morse–Kelley set theory Mortar methods Morton number Morton number (number theory) Morton's theorem Mosco convergence Moscow Mathematical Journal Moscow Mathematical Papyrus Moscow Mathematical Society Most probable number Most-perfect magic square Mostow rigidity theorem Mostowski collapse lemma Motion vector Motions in the time-frequency distribution Motive (algebraic geometry) Motivic cohomology Motivic integration Motor variable Motzkin number Motzkin prime Moufang loop Moufang plane Moufang polygon Moulton plane Mountain climbing problem Mountain pass theorem Mouse (set theory) Movable singularity Move-to-front transform Moving average Moving frame Moving least squares Moving sofa problem Moyal product Mrs. Miniver's problem Mu Alpha Theta Mu Alpha Theta National Log 1 Contest Mu calculus Muckenhoupt weights Mueller calculus Muhyi al-Dīn al-Maghribī Muirhead's inequality Mukai-Fourier transform Multi-Criteria Decision Analysis Multi-adjoint logic programming Multi-commodity flow problem Multi-compartment model Multi-index notation Multi-valued logic Multibody system Multibrot set Multicategory Multicollinearity Multicomplex number Multidimensional analysis Multidimensional panel data Multidimensional scaling Multidisciplinary design optimization Multifactor dimensionality reduction Multifractal system Multigrade operator Multigraph Multigrid method Multilateration Multilevel model Multilevel programming Multilinear algebra Multilinear form Multilinear map Multimagic cube Multimagic square Multinomial Multinomial distribution Multinomial logit Multinomial probit Multinomial test Multinomial theorem Multiobjective optimization Multiphysics Multiple (mathematics) Multiple comparisons Multiple correlation Multiple cross products Multiple discriminant analysis Multiple edges Multiple integral Multiple of the median Multiple rule-based problems Multiple testing correction Multiple zeta function Multiple-conclusion logic Multiple-indicator kriging Multiple-try Metropolis Multiplication Multiplication ALU Multiplication algorithm Multiplication operator Multiplication sign Multiplication table Multiplication theorem Multiplicative calculus Multiplicative cascade Multiplicative digital root Multiplicative distance Multiplicative function Multiplicative group Multiplicative group of integers modulo n Multiplicative inverse Multiplicative number theory Multiplicative order Multiplicative partition Multiplicity (mathematics) Multiplier (Fourier analysis) Multiplier algebra Multipliers and centralizers (Banach spaces) Multiply perfect number Multiply-accumulate Multiply-with-carry Multipole expansion Multipole moments Multiprocessor scheduling Multiresolution analysis Multiscale decision making Multiscale mathematics Multiset Multistage sampling Multitaper Multitrait-multimethod matrix Multiunit 
auction Multivalued function Multivariable calculus Multivariate Polya distribution Multivariate Student distribution Multivariate adaptive regression splines Multivariate analysis Multivariate analysis of variance Multivariate division algorithm Multivariate gamma function Multivariate interpolation Multivariate normal distribution Multivariate random variable Multivariate statistics Multivector Multiview orthographic projection Mumford conjecture Murderous Maths Musean hypernumber Music and mathematics Musical isomorphism Mutation (knot theory) Mutual coherence (linear algebra) Mutual information Mutual recursion Mutually exclusive events My Philosophical Development Mycielskian Myers's theorem Myhill isomorphism theorem Myhill–Nerode theorem Myriad Myriagonal number Mysterious duality Mythical number Ménage problem Möbius configuration Möbius energy Möbius function Möbius inversion formula Möbius ladder Möbius strip Möbius transform Möbius transformation Möbius–Kantor graph Møller-Plesset perturbation theory Müller's method Müntz–Szász theorem N!-conjecture N-Mahlo cardinal N-body problem N-category N-category number N-connected space N-dimensional sequential move puzzles N-dimensional space N-huge cardinal N-jet N-monoid N-player game N-set N-skeleton N-sphere NAPCS NE (complexity) NFSNet NL (complexity) NP (complexity) NP-Intermediate NP-complete NP-easy NP-equivalent NP-hard NSMB (mathematics) NTIME NURMS NYC HOLD Nabla symbol Nachbin's theorem Nagata ring Nagata's conjecture on curves Nagata-Biran conjecture Nagata–Smirnov metrization theorem Nagel point Nagell–Lutz theorem Nahm equations Naimark's dilation theorem Naimark's problem Naive Bayes classifier Naive Set Theory (book) Naive set theory Nakagami distribution Nakai conjecture Nakayama lemma Names for the number 0 Names of large numbers Napier's bones Napierian logarithm Napkin folding problem Napoleon's problem Napoleon's theorem Narayana number Narcissistic number Narrow class group Nash embedding theorem Nash equilibrium Nash functions Nash's theorem Nash-Moser theorem Nasik magic hypercube Nassim Nicholas Taleb National Assessment & Testing National Board for Higher Mathematics National Conference on Mathematical and Computational Models National Council of Teachers of Mathematics National Research Institute for Mathematics and Computer Science Natural computation Natural deduction Natural density Natural exponential family Natural filtration Natural logarithm Natural neighbor Natural number Natural number object Natural process variation Natural proof Natural pseudodistance Natural topology Natural transformation Navier–Stokes equations Navier–Stokes existence and smoothness Navigrid Naylor Prize and Lectureship Near-field (mathematics) Near-miss Johnson solid Near-semiring Nearest integer function Nearest neighbor graph Nearest neighbor search Nearest neighbour algorithm Nearest-neighbor interpolation Nearring Necessary and sufficient condition Necker cube Necklace (combinatorics) Necklace problem Necklace splitting problem Necktie paradox Needleman-Wunsch algorithm NegaFibonacci coding Negacyclic convolution Negafibonacci Negamax Negation Negation normal form Negative and non-negative numbers Negative base Negative binomial distribution Negative feedback Negative pedal curve Negative probability Negative relationship Negentropy Negligible set Neighborhood semantics Neighbourhood (graph theory) Neighbourhood (mathematics) Neighbourhood statistics Neighbourhood system Nelder-Mead method Nemeth Braille Nemmers Prize in 
Mathematics Nemytskii operator Neper Nephroid Nernst equation Nerve (category theory) Nerve of a covering Nesbitt's inequality Nest algebra Nested intervals Nested radical Nested sampling algorithm Nested stack automaton Nesting algorithm Net (mathematics) Net (polyhedron) Network (mathematics) Network Probability Matrix Network analysis Network automaton Network calculus Network coding Network dynamics Network theory Network topology Networks and Spatial Economics Neugebauer equations Neumann boundary condition Neumann series Neumann-Dirichlet method Neumann-Neumann methods Neural network Neusis construction Neutral vector Nevanlinna Prize Nevanlinna theory Nevanlinna–Pick interpolation Neville's algorithm New Foundations New Math New Math (song) New York City Interscholastic Mathematics League New York Number Theory Seminar New York State Mathematics League New digraph reconstruction conjecture NewPGen Newcastle–Ottawa scale Newcomb's paradox Newman's lemma Newman-Penrose formalism Newman-Shanks-Williams prime Newmark-beta method Newsvendor Newton fractal Newton polygon Newton polynomial Newton's identities Newton's inequalities Newton's laws of motion Newton's method Newton's method in optimization Newton's notation Newtonian limit Newtonian potential Newton–Cotes formulas Newton–Pepys problem NextEinstein Neyman-Pearson lemma Nice name Nichols plot Nielsen realization problem Nielsen theory Nielsen transformation Nielsen's spiral Nielsen-Thurston classification Niemeier lattice Nijenhuis bracket Nijenhuis–Richardson bracket Nikodym set Nilmanifold Nilpotent Nilpotent Lie algebra Nilpotent cone Nilpotent group Nilpotent matrix Nilpotent operator Nilpotent orbit Nilradical Nim Nimber Nine lemma Nine-point circle Nine-point hyperbola Niven's constant No cloning theorem No free lunch in search and optimization No free lunch theorem No wandering domain theorem No-communication theorem No-go theorem No-three-in-line problem Noble polyhedron Node (autonomous system) Nodec space Nodoid Noether Lecture Noether normalization lemma Noether's second theorem Noether's theorem Noether's theorem on rationality for surfaces Noetherian Noetherian module Noetherian ring Noetherian topological space Noisy text Noisy-channel coding theorem Nome (mathematics) Nominal category Nominal number Nomogram Nomological network Non-Borel set Non-Desarguesian plane Non-Euclidean crystallographic group Non-Euclidean geometry Non-Hausdorff manifold Non-Newtonian calculus Non-abelian Non-abelian class field theory Non-adjacent form Non-analytic smooth function Non-associative algebra Non-classical analysis Non-commutative harmonic analysis Non-compact stencil Non-equilibrium thermodynamics Non-homogeneous Poisson process Non-increasing sequence Non-integer representation Non-linear least squares Non-logical symbol Non-measurable set Non-negative matrix factorization Non-parametric statistics Non-perturbative Non-positive curvature Non-smooth mechanics Non-standard analysis Non-standard arithmetic Non-standard calculus Non-standard model Non-standard positional numeral systems Non-well-founded set theory Nonabelian group Nonagon Nonagonal number Nonary Nonassociative ring Nonblocking minimal spanning switch Noncentral F-distribution Noncentral chi distribution Noncentral chi-square distribution Noncentral hypergeometric distributions Noncentral parameter Noncentral t-distribution Noncentrality parameter Noncommutative geometry Noncommutative logic Noncommutative quantum field theory Noncommutative topology Nonconvex 
uniform polyhedron Noncototient Noncrossing partition Nondeterministic algorithm Nondimensionalization Nonelementary integral Nonextensive entropy Nonholonomic system Nonhypotenuse number Nonlinear Schrödinger equation Nonlinear acoustics Nonlinear autoregressive exogenous model Nonlinear conjugate gradient method Nonlinear dimensionality reduction Nonlinear eigenproblem Nonlinear functional analysis Nonlinear programming Nonlinear regression Nonlinear system Nonmetricity tensor Nonnegative matrix Nonnegative rank (linear algebra) Nonparametric regression Nonprobability sampling Nonstandard finite difference scheme Nontotient Nontransitive dice Nontransitive game Nontrivial Nonuniform rational B-spline Norbert Wiener Center for Harmonic Analysis and Applications Norbert Wiener Prize in Applied Mathematics Nordic mathematical competition Norm (group) Norm (mathematics) Norm form Norm of an ideal Norm variety Normal (mathematics) Normal basis Normal bundle Normal convergence Normal coordinates Normal crossing divisor Normal curve equivalent Normal distribution Normal extension Normal family Normal form Normal form (term rewriting) Normal function Normal mapping Normal matrix Normal measure Normal modal logic Normal mode Normal morphism Normal number Normal operator Normal order of an arithmetic function Normal polytope Normal scheme Normal sequence Normal space Normal subgroup Normal surface Normal tube Normal variance-mean mixture Normal-gamma distribution Normal-inverse Gaussian distribution Normal-scaled inverse gamma distribution Normalisation by evaluation Normality test Normalization (statistics) Normalization property (lambda-calculus) Normalized number Normalizing constant Normally distributed and uncorrelated does not imply independent Normed division algebra Normed vector space North Suburban Mathematics League Norton's theorem Norwegian Mathematical Society Norwegian Statistical Association Notation for differentiation Notation in probability and statistics Nothing up my sleeve number Notices of the American Mathematical Society Nova fractal Novikov conjecture Novikov's condition Novum Organum Nowhere continuous function Nowhere dense set Nowhere-zero flow Nth root Nth root algorithm NuPRL Nuclear operator Nuclear space Nuisance variable Null (mathematics) Null distribution Null graph Null hypothesis Null move Null result Null set Null vector Nullcline Nullity (graph theory) Nullity theorem Numb3rs Number Number Enigmas Number Theory Foundation Number line Number names Number sentence Number system Number theory Number-theoretic transform Numbering (computability theory) Numbering scheme Numbertime Numeracy Numeral Numeral system Numerator Numerical Analysis (book) Numerical Mathematics Consortium Numerical Recipes Numerical analysis Numerical aperture Numerical approximations of π Numerical continuation Numerical differentiation Numerical diffusion Numerical digit Numerical error Numerical integration Numerical linear algebra Numerical model of solar system Numerical ordinary differential equations Numerical parameter Numerical partial differential equations Numerical polynomial Numerical prefix Numerical range Numerical resistivity Numerical sign problem Numerical stability Numerical weather prediction Numerically effective Numerology Numerov's method Nyaya Nyquist ISI criterion Nyquist plot Nyquist stability criterion Nyquist–Shannon sampling theorem Nyström method Nyāya Sūtras Néron model Néron–Severi group Néron–Tate height Nörlund-Rice integral O'Nan group O(n) O-minimal 
theory OMDoc Obelus Oberwolfach Prize Object of the mind Object theory Oblate spheroid Oblate spheroidal coordinates Oblique projection Oblique reflection Observability Observability Gramian Observable subgroup Observable variable Observational study Obstacle avoidance Obstacle problem Obstruction theory Occupancy theorem Occurrences of Grandi's series Occurs check Octacross Octacube (mathematics) Octadecagon Octadecimal Octaexon Octagon Octagonal antiprism Octagonal bipyramid Octagonal number Octagonal prism Octagonal trapezohedron Octagram Octahedral number Octahedral prism Octahedral symmetry Octahedron Octahemioctahedron Octal Octal game Octal games Octant Octeract Octeractic honeycomb Octomino Octonion Octonion algebra Odd greedy expansion Odd number theorem Odds Odds algorithm Odds ratio Odlyzko-Schönhage algorithm Of the form Official statistics Offset binary Ogden's lemma Ogive Olami-Feder-Christensen model Olbers' paradox Olimpíada Brasileira de Matemática Oloid Olympiade Mathématique Belge Omega constant Omega equation Omega-logic Omitted-variable bias Omnibus test Omnitruncated 120-cell Omnitruncated 24-cell Omnitruncated 5-cell Omnitruncated 6-simplex Omnitruncated cubic honeycomb Omnitruncated hexateron Omnitruncated tesseract Omnitruncated triangular-hexagonal prismatic honeycomb Omnitruncation On Formally Undecidable Propositions of Principia Mathematica and Related Systems On Numbers and Games On Spirals On the Number of Primes Less Than a Given Magnitude On the Sphere and Cylinder On-Line Encyclopedia of Integer Sequences One half One-dimensional symmetry group One-form One-parameter group One-seventh area triangle One-sided limit One-way ANOVA One-way function Online codes Ono's inequality Onsager reciprocal relations Ontological maximalism Open and closed maps Open book decomposition Open mapping theorem Open mapping theorem (complex analysis) Open mapping theorem (functional analysis) Open sentence Open set Open-loop model OpenMath Operad theory Operand Operation (mathematics) Operation theory Operational calculus Operations research Operator Operator (physics) Operator K-theory Operator algebra Operator associativity Operator norm Operator space Operator system Operator theory Operator topology Opial property Opinion poll Oppenheim conjecture Opposite Optical axis Opticks Optimal control Optimal design Optimal matching Optimal stopping Optimal substructure Optimality criterion Optimization (computer science) Optimization (mathematics) Optional stopping theorem Opuscula Mathematica Oracle machine Orbifold Orbifold notation Orbit (dynamics) Orbit portrait Orbital momentum vector Orbital state vectors Order (group theory) Order (information processing) Order (number theory) Order (ring theory) Order dimension Order isomorphism Order of a kernel Order of integration (calculus) Order of magnitude Order of operations Order statistic Order theory Order topology Order type Order-3 bisected heptagonal tiling Order-3 heptagonal tiling Order-3 icosahedral honeycomb Order-3 snub heptagonal tiling Order-3 truncated heptagonal tiling Order-4 dodecahedral honeycomb Order-4 pentagonal tiling Order-5 cubic honeycomb Order-5 dodecahedral honeycomb Order-5 square tiling Order-7 triangular tiling Order-7 truncated triangular tiling Order-embedding Ordered exponential Ordered field Ordered geometry Ordered graph Ordered group Ordered logit Ordered pair Ordered partition of a set Ordered probit Ordered ring Ordered semigroup Ordered set Ordered subset expectation maximization Ordered vector 
space Orders of approximation Ordinal analysis Ordinal arithmetic Ordinal collapsing function Ordinal definable set Ordinal notation Ordinal number Ordinal number (linguistics) Ordinal optimization Ordinal scale Ordinary differential equation Ordinary mathematics Ordination (statistics) Ore condition Ore extension Ore's theorem Organon Orientability Orientation (geometry) Orientation (mathematics) Orientation character Orientation entanglement Oriented coloring Oriented projective geometry Orientifold Origin (mathematics) Original proof of Gödel's completeness theorem Ornstein–Uhlenbeck operator Ornstein–Uhlenbeck process Ornstein–Zernike equation Orthant Orthocentric system Orthocompact space Orthocomplemented lattice Orthogon Orthogonal Procrustes problem Orthogonal complement Orthogonal convex hull Orthogonal coordinates Orthogonal functions Orthogonal group Orthogonal matrix Orthogonal polynomials Orthogonal trajectory Orthogonal wavelet Orthogonality Orthogonality (term rewriting) Orthogonality principle Orthogonalization Orthographic projection Orthographic projection (cartography) Orthographic projection (geometry) Orthometric height Orthonormal basis Orthonormal frame Orthonormal function system Orthonormality Orthostochastic matrix Orthotomic Ortsbogen theorem Oscillating reciprocation Oscillation Oscillation (differential equation) Oscillation (mathematics) Oscillator Toda Oscillon Osculating circle Osculating curve Osculating plane Oseledets theorem Ostomachion Ostrowski Prize Ostrowski's theorem Ostrowski–Hadamard gap theorem Oswald Veblen Prize in Geometry Out(Fn) Outcome (game theory) Outer automorphism Outer automorphism group Outer billiard Outer measure Outer product Outlier Oval Oval (projective plane) Overconvergent modular form Overdetermined system Overdispersion Overfitting Overlap (term rewriting) Overlap matrix Overlap-add method Overlap-save method Overlapping interval topology Overspill Ovoid (polar space) Ovoid (projective geometry) Oxford Set of Mathematical Instruments P = NP problem P-Laplacian P-adic analysis P-adic number P-adic order P-compact group P-group P-matrix P-rep P-value P-vector P-y method PARAFAC PCF theory PDE surface PDIFF PEPA PLS (complexity) PRISM (Portal Resources for Indiana Science and Mathematics) PRO (category theory) PROP PSL(2,7) PSPACE PSPACE-hard Pacific Journal of Mathematics Package-merge algorithm Packed storage matrix Packing dimension Packing measure Packing problem Padovan cuboid spiral Padovan polynomials Padovan sequence Padua points Padé approximant Padé table Page's trend test Paid survey Painlevé transcendents Painter's algorithm Pair of pants Pair of spaces Paired comparison analysis Pairing Pairing function Pairwise Pairwise coprime Pairwise independence Paitamaha Siddhanta Palais-Smale compactness condition Paley construction Paley graph Paley-Wiener integral Paley–Wiener theorem Paley–Zygmund inequality Palindromic number Palindromic polynomial Palindromic prime Pan-African Congress of Mathematicians Pandiagonal magic cube Pandigital number Panel data Panjer recursion Panmagic square Pantriagdiag magic cube Pantriagonal magic cube Paper bag problem Pappus chain Pappus configuration Pappus graph Pappus's centroid theorem Pappus's hexagon theorem Parabiaugmented dodecahedron Parabiaugmented hexagonal prism Parabiaugmented truncated dodecahedron Parabidiminished rhombicosidodecahedron Parabigyrate rhombicosidodecahedron Parabola Parabolic Lie algebra Parabolic constant Parabolic coordinates Parabolic cylinder function 
Parabolic cylindrical coordinates Parabolic fractal distribution Parabolic geometry Parabolic induction Parabolic partial differential equation Paraboloid Paraboloidal coordinates Paracompact space Paraconsistent logic Paraconsistent mathematics Paradiseo Paradox of enrichment Paradoxes of set theory Paradoxical set Parafree group Paragyrate diminished rhombicosidodecahedron Parallax Parallel (geometry) Parallel curve Parallel mesh generation Parallel postulate Parallel tempering Parallel transport Parallelepiped Parallelizable manifold Parallelogon Parallelogram Parallelogram law Parallelogram of force Parameter Parameter space Parametric array Parametric continuity Parametric derivative Parametric equation Parametric family Parametric model Parametric operator Parametric oscillator Parametric statistics Parametric surface Parametrix Parametrization Paranormal space Paranormal subgroup Parasitic number Parastatistics Paratopological group Paravector Pareto analysis Pareto chart Pareto distribution Pareto efficiency Pareto index Pareto interpolation Pareto principle Pareto set Pariah group Paris–Harrington theorem Parity (mathematics) Parity bit Parity game Parity of a permutation Parity problem (sieve theory) Parity-check matrix Parker spiral Parker vector Parker-Sochacki method Park–Miller random number generator Parovicenko space Parrondo's paradox Parry-Daniels map Parry-Sullivan invariant Parseval's identity Parseval's theorem Partial Element Equivalent Circuit (PEEC) Partial correlation Partial derivative Partial differential equation Partial equivalence relation Partial evaluation Partial fraction Partial fractions in complex analysis Partial fractions in integration Partial function Partial geometry Partial isometry Partial least squares regression Partial leverage Partial linear space Partial order reduction Partial regression plot Partial residual plot Partial trace Partially observable Markov decision process Partially ordered set Partially-defined operator Particle filter Particle in a box Particle in a ring Particle in a spherically symmetric potential Particle physics and representation theory Particle statistics Particle swarm optimization Particle-in-cell Particular point topology Particular values of the Gamma function Partisan game Partition (number theory) Partition function (mathematics) Partition of a set Partition of an interval Partition of unity Partition regular Partition topology Parts-per notation Party-list proportional representation Pascal matrix Pascal's calculator Pascal's pyramid Pascal's rule Pascal's simplex Pascal's theorem Pascal's triangle Pasch's axiom Pasch's theorem Patch test (finite elements) Path (graph theory) Path (topology) Path analysis (statistics) Path coefficient Path coloring Path cover Path decomposition Path graph Path integral Path integral Monte Carlo Path of least resistance Path space Pathological (mathematics) Pattern Pattern blocks Pattern formation Pattern theory Patterson function Pauli group Pauli matrices Paulisa Siddhanta Pea pattern Peak (geometry) Peakon Peano axioms Peano existence theorem Peano space Peano-Russell notation Pearson distribution Pearson product-moment correlation coefficient Pearson's chi-square test Peaucellier-Lipkin linkage Pedal curve Pedal triangle Pedoe's inequality Peetre theorem Peetre's inequality Peirce quincuncial projection Peirce's criterion Peirce's law Peixoto's theorem Pell number Pell's equation Penalty method Pencil (mathematics) Pendent Pendulum (mathematics) Penney's game Penrose diagram 
Penrose graphical notation Penrose stairs Penrose tiling Penrose transform Penrose triangle Pentachoron Pentacross Pentadecagon Pentadecimal Pentadiagonal matrix Pentagon Pentagon tiling Pentagonal antiprism Pentagonal bifrustum Pentagonal cupola Pentagonal dipyramid Pentagonal gyrobicupola Pentagonal gyrocupolarotunda Pentagonal hexecontahedron Pentagonal icositetrahedron Pentagonal number Pentagonal number theorem Pentagonal orthobicupola Pentagonal orthobirotunda Pentagonal orthocupolarotunda Pentagonal prism Pentagonal pyramid Pentagonal pyramidal number Pentagonal rotunda Pentagonal trapezohedron Pentagram Pentagrammic antiprism Pentagrammic crossed-antiprism Pentagrammic prism Pentahedron Pentakis dodecahedron Pentaprism Pentatope number Penteract Penteractic honeycomb Pentimal system Pentomino Per mil Percent sign Percentage Percentage point Percentile Percentile rank Percolation Percolation theory Percolation threshold Perfect core Perfect graph Perfect graph theorem Perfect group Perfect hash function Perfect magic cube Perfect map Perfect measure Perfect number Perfect power Perfect ruler Perfect set property Perfect space Perfect spline Perfect square Perfect totient number Perfectly matched layer Perimeter Period (number) Period mapping Period-doubling bifurcation Periodic boundary conditions Periodic continued fraction Periodic function Periodic group Periodic point Periodic points of complex quadratic mappings Periodic variation Periodogram Peripheral cycle Perko pair Perlin noise Permanent Permanent is sharp-P-complete Permutable prime Permutable subgroup Permutation Permutation (music) Permutation automaton Permutation cipher Permutation graph Permutation group Permutation matrix Permutohedron Perpendicular Perpendicular distance Perplexity Perrin number Perron number Perron's formula Perron–Frobenius theorem Persistence of a number Perspective (geometry) Perspective (graphical) Persymmetric matrix Perturbation theory Perturbation theory (quantum mechanics) Perverse sheaf Petersen graph Petersson inner product Peter–Weyl theorem Petrick's method Petrie polygon Petrov classification Petrovsky lacuna Pettis integral Pettis' theorem Pfaffian Pfaffian function Pfeffer integral Pfister form Pharmaceutical Statistics Phase (waves) Phase diagram Phase field models Phase line Phase plane Phase plane method Phase portrait Phase response Phase space Phase space method Phase velocity Phase-type distribution Phason Phasor (physics) Phelim Boyle Phi-hiding assumption Philo line Philosophical interpretation of classical physics Philosophiæ Naturalis Principia Mathematica Philosophy of Arithmetic (book) Philosophy of Mathematics Education Journal Philosophy of mathematics Philosophy of mathematics education Philosophy of probability Phragmén-Lindelöf principle Phutball Physica A Physical Intervention in Computational Optimization Physical constant Physical geodesy Physical knot theory Physics Physics (Aristotle) Physics of computation Pi Pi (film) Pi (letter) Pi Day Pi Mu Epsilon Pi culture Pi system Pi-calculus PiHex Picard group Picard horn Picard theorem Picard-Fuchs equation Picard–Lindelöf theorem Pick matrix Pick's theorem Pickover stalk Picone identity Picture function Pidgin code Pie chart Piecewise Piecewise linear Piecewise linear continuation Piecewise linear function Piecewise linear manifold Piecewise syndetic set Pierpont prime Pigeonhole principle Pignistic probability Piling-up lemma Pillai prime Pillai's conjecture Pin group Pinch point (mathematics) Pincherle derivative 
Ping-pong lemma Pinsky phenomenon Pinwheel tiling Piphilology Pisano period Pisot-Vijayaraghavan number Pitchfork bifurcation Pitteway triangulation Pivot element Pivot theorem Pivotal quantity Pixlet Plan for Establishing Uniformity in the Coinage, Weights, and Measures of the United States Plan view Planar Planar (computer graphics) Planar algebra Planar graph Planar lamina Planar projection Planar separator theorem Planar straight-line graph Planar ternary ring Planarity testing Plancherel theorem Plancherel theorem for spherical functions Plane (geometry) Plane at infinity Plane curve Plane geometry Plane partition Plane symmetry Plane wave PlanetMath Plane–sphere intersection Planimeter Planisphaerium Plasma stability Plastic number Plate notation Plate trick Plateau (mathematics) Plateau's laws Plateau's problem Plato's number Platonic hydrocarbon Platonic solid Playfair cipher Plimpton 322 Plotkin bound Plucker matrix Plugging in (algebra) Pluricanonical ring Pluriharmonic function Pluripolar set Plurisubharmonic function Plus Magazine Plus and minus signs Plus construction Plus-minus sign Plücker coordinates Plücker embedding Plücker formula Pochhammer k-symbol Pochhammer symbol Pocket Cube Pohlig–Hellman algorithm Poincaré Seminars Poincaré conjecture Poincaré disk model Poincaré duality Poincaré group Poincaré half-plane model Poincaré inequality Poincaré map Poincaré metric Poincaré model Poincaré recurrence theorem Poincaré residue Poincaré transformation Poincaré-Lindstedt method Poincaré–Bendixson theorem Poincaré–Birkhoff–Witt theorem Poincaré–Hopf theorem Poinsot's spirals Point (geometry) Point accepted mutation Point at infinity Point estimation Point finite collection Point group Point groups in three dimensions Point groups in two dimensions Point in polygon Point location Point mass Point plotting Point process Point reflection Point set triangulation Point source Point-biserial correlation coefficient Point-free Point-line-plane postulate Pointclass Pointed set Pointed space Pointless topology Pointwise Pointwise convergence Pointwise product Poisson algebra Poisson bracket Poisson distribution Poisson hidden Markov model Poisson integral formula Poisson kernel Poisson limit theorem Poisson manifold Poisson process Poisson random measure Poisson regression Poisson ring Poisson sampling Poisson summation formula Poisson superalgebra Poisson supermanifold Poisson's equation Poisson–Lie group Poker probability Poker probability (Omaha) Poker probability (Texas hold 'em) Polar action Polar coordinate system Polar curve Polar decomposition Polar distance (geometry) Polar distribution Polar homology Polar set Polar set (potential theory) Polar sine Polar space Polar topology Polarization identity Polarization of an algebraic form Pole (complex analysis) Pole and polar Policy capturing Polignac's conjecture Polish Logic Polish Mathematical Society Polish School of Mathematics Polish notation Polish space Polite number Political forecasting Pollard's lambda algorithm Pollard's p - 1 algorithm Pollard's rho algorithm Pollard's rho algorithm for logarithms Pollock octahedral numbers conjecture Poloidal toroidal decomposition Poly-Bernoulli number Polyabolo Polychoric correlation Polychoron Polyconvex function Polycube Polycyclic group Polydisc Polydivisible number Polydrafter Polyform Polygamma function Polygon Polygon triangulation Polygonal chain Polygonal number Polygonizer Polyharmonic spline Polyhedral combinatorics Polyhedral compound Polyhedral graph Polyhedral space 
Polyhedron Polyhedron model Polyhex (mathematics) Polyiamond Polylogarithm Polylogarithmic Polymatroid Polynomial Polynomial Diophantine equation Polynomial SOS Polynomial and rational function modeling Polynomial arithmetic Polynomial basis Polynomial chaos Polynomial code Polynomial expansion Polynomial expression Polynomial factorization Polynomial function theorems for zeros Polynomial interpolation Polynomial lemniscate Polynomial long division Polynomial matrix Polynomial remainder theorem Polynomial ring Polynomial sequence Polynomial time Polynomial-time reduction Polynomially reflexive space Polynormal subgroup Polyomino Polyphase matrix Polystick Polysyllogism Polytetrahedron Polytomous Rasch model Polytope Polytope graph Polytree Polytrope Pompeiu problem Pompeiu's theorem Poncelet Prize Poncelet point Poncelet's porism Poncelet–Steiner theorem Pons asinorum Pontryagin class Pontryagin duality Pontryagin's minimum principle Pooled standard deviation Pooled variance Pooling design Popular mathematics Population balance equation Population modeling Population process Population viability analysis Population-based incremental learning Porism Porous set Port-Royal Logic Portable, Extensible Toolkit for Scientific Computation Portal:Mathematics Portmanteau test Portmanteau theorem Portuguese Mathematical Society Poset topology Position circle Position vector Positional notation Positive and negative parts Positive and negative sets Positive current Positive definite Positive definite function on a group Positive definite kernel Positive definiteness Positive element Positive feedback Positive form Positive invariant set Positive linear functional Positive map Positive semidefinite Positive set theory Positive-definite function Positive-definite matrix Positively separated sets Possibility theory Possible Worlds (play) Post correspondence problem Post's inversion formula Post's lattice Post's theorem Post-Newtonian expansion Post-hoc analysis Postage stamp problem Postcondition Posterior probability Postnikov system Post–Turing machine Posynomial Potential Potential energy surface Potential flow Potential function Potential isomorphism Potential theory Potsdam Miracle Potts model Pourbaix diagram Poussin proof Power associativity Power automorphism Power center (geometry) Power closed Power function Power graph analysis Power iteration Power law Power mean Power of a point Power of graph Power of two Power series Power series method Power set Power sum symmetric polynomial Power transform Powerful number Powerful p-group Poynting vector Practical number Pramana Prametric space Pramāṇa-samuccaya Pre-Abelian category Pre-algebra Pre-measure Preadditive category Precalculus Precession Preclosure operator Preconditioned conjugate gradient method Preconditioner Predicate (mathematics) Predicate calculus Predicate functor logic Predicate logic Predicate variable Prediction interval Predictive analytics Predictive informatics Predictive modelling Predictive validity Predictor-corrector method Predual Preferential attachment Preferred number Prefix code Prefix grammar Pregeometry Pregeometry (model theory) Prehomogeneous vector space Preimage theorem Preintuitionism Premature convergence Prenex normal form Preorder Preordered class Preparata code Preparation theorem Presburger arithmetic Prescisive abstraction Prescribed Ricci curvature problem Prescribed scalar curvature problem Presentation complex Presentation of a group Presentation of a monoid Presheaf (category theory) President of the 
American Statistical Association President of the Institute of Mathematical Statistics President of the Royal Statistical Society President of the Statistical Society of Canada Pretopological space Pretzel link Prevalent and shy sets Prewellordering Prim's algorithm Primal graph Primality certificate Primality test Primary Mathematics World Contest Primary cyclic group Primary pseudoperfect number Prime (order theory) Prime Obsession Prime Pages Prime constant Prime decomposition (3-manifold) Prime factor Prime gap Prime geodesic Prime ideal Prime ideal theorem Prime k-tuple Prime knot Prime model Prime number Prime number theorem Prime power Prime quadruplet Prime reciprocal magic square Prime ring Prime signature Prime triplet Prime zeta function Prime-counting function Prime-factor FFT algorithm Prime95 PrimeGrid Primefree sequence Primes in arithmetic progression Primeval number Primitive element Primitive element (finite field) Primitive element theorem Primitive equations Primitive ideal Primitive notion Primitive permutation group Primitive polynomial Primitive recursive arithmetic Primitive recursive function Primitive ring Primitive root Primitive root modulo n Primitive semiperfect number Primon gas Primorial Primorial prime Princess and monster game Principal axis theorem Principal branch Principal bundle Principal component regression Principal components analysis Principal curvature Principal geodesic analysis Principal homogeneous space Principal ideal Principal ideal domain Principal ideal ring Principal ideal theorem Principal indecomposable module Principal part Principal series representation Principal value Principia Mathematica Principle of bivalence Principle of compositionality Principle of distributivity Principle of indifference Principle of least action Principle of maximum entropy Principles and Standards for School Mathematics Principles of Theoretical Logic Prior Analytics Prior probability Prism (geometry) Prismatic compound of antiprisms Prismatic compound of antiprisms with rotational freedom Prismatic compound of prisms Prismatic compound of prisms with rotational freedom Prismatic pentagonal tiling Prismatic surface Prismatic uniform polyhedron Prismatoid Pro-p group Pro-simplicial set Probabilistic Turing machine Probabilistic argumentation Probabilistic design Probabilistic encryption Probabilistic interpretation of Taylor series Probabilistic latent semantic analysis Probabilistic logic Probabilistic method Probabilistic metric space Probabilistic number theory Probabilistic proofs of non-probabilistic theorems Probabilistic proposition Probabilistic relational model Probability Probability and statistics Probability axioms Probability density function Probability derivations for making rank-based hands in Omaha hold 'em Probability distribution Probability distribution function Probability interpretations Probability mass function Probability matching Probability metric Probability of kill Probability of making the nut low hand in Omaha hold 'em Probability space Probability theory Probability vector Probability-generating function Probable prime Probit Probit model Problem domain Problem of Apollonius Problem of multiple generality Problem of points Problems in Latin squares Problems in loop theory and quasigroup theory Problems involving arithmetic progressions Proceedings of the American Mathematical Society Proceedings of the Steklov Institute of Mathematics Procept Process Window Index Process capability Process control Process optimization 
Procrustes analysis Procrustes transformation Product (category theory) Product (mathematics) Product category Product integral Product measure Product metric Product of group subsets Product of groups Product of rings Product order Product rule Product term Product topology Proebsting's paradox Professor Moriarty Professor of Mathematics, Glasgow Professor's Cube Profinite group Profunctor Prognostic equation Prognostics Program in Mathematics for Young Scientists Progression (mathematics) Progressive function Progressively measurable process Proizvolov's identity Proj construction Project Euler Project NExT Projected dynamical system Projection (linear algebra) Projection (mathematics) Projection (set theory) Projection method Projection pursuit Projection-slice theorem Projection-valued measure Projectionless algebra Projective Hilbert space Projective algebraic manifold Projective cone Projective connection Projective cover Projective differential geometry Projective frame Projective geometry Projective geometry axioms Projective harmonic conjugates Projective hierarchy Projective line Projective linear group Projective module Projective object Projective plane Projective representation Projective space Projective transformation Projective unitary group Projective vector field Prokhorov's theorem Prolate spheroid Prolate spheroidal coordinates Prolate spheroidal wave functions Promoting adversaries Promptuary Pronic number Pronormal subgroup Prony equation Proof (2005 film) Proof (play) Proof by exhaustion Proof by intimidation Proof by verbosity Proof calculus Proof complexity Proof mining Proof net Proof of Bertrand's postulate Proof of Bhaskara's lemma Proof of Stein's example Proof of concept Proof of impossibility Proof of the Euler product formula for the Riemann zeta function Proof of the law of large numbers Proof of weak Scholz conjecture Proof procedure Proof sketch for Gödel's first incompleteness theorem Proof that 22/7 exceeds π Proof that e is irrational Proof that holomorphic functions are analytic Proof that the sum of the reciprocals of the primes diverges Proof that π is irrational Proof theory Proof without words Proof-theoretic semantics Proofs and Refutations Proofs from THE BOOK Proofs of Fermat's little theorem Proofs of Fermat's theorem on sums of two squares Proofs of quadratic reciprocity Proofs of trigonometric identities Propagation of uncertainty Propensity probability Propensity score Propensity score matching Proper convex function Proper forcing axiom Proper linear model Proper map Proper morphism Proper transfer function Properly discontinuous action Properties of polynomial roots Property (philosophy) Property B Property P conjecture Property of Baire Proportion (architecture) Proportional control Proportional hazards models Proportionality (mathematics) Proposition (mathematics) Propositional calculus Propositional formula Propositional variable Propositiones ad Acuendos Juvenes Prosecutor's fallacy Prosthaphaeresis Proth number Proth prime Proth's theorem Proto-Indo-European numerals Prototile Protractor Prouhet-Tarry-Escott problem Prouhet–Thue–Morse constant Provability logic Provable prime Provincial Mathematical Olympiad Proximity problems Proximity space Proxy (statistics) Pruning (algorithm) Prym variety Prékopa-Leindler inequality Prüfer domain Prüfer group Prüfer rank Prüfer sequence Pseudo algebraically closed field Pseudo-Anosov map Pseudo-Euclidean space Pseudo-Hadamard transform Pseudo-Riemannian manifold Pseudo-Zernike polynomials 
Pseudo-abelian category Pseudo-arc Pseudo-differential operator Pseudo-monotone operator Pseudo-order Pseudo-spectral method Pseudo-zero set Pseudocircle Pseudocompact space Pseudoconvexity Pseudocount Pseudoelementary class Pseudoforest Pseudogroup Pseudoholomorphic curve Pseudolikelihood Pseudomathematics Pseudometric Pseudometric space Pseudonormal space Pseudoprime Pseudorandom generator Pseudorandom generator theorem Pseudorandom noise Pseudorandom number generator Pseudorandom number sequence Pseudorandomness Pseudoreplication Pseudoscalar Pseudospectrum Pseudosphere Pseudotensor Pseudotriangle Pseudovector Psi function Psychological statistics Psychologism Ptolemy's theorem Pu's inequality Public-key cryptography Publications Mathématiques de l'IHÉS Pugh's closing lemma Pui Ching Invitational Mathematics Competition Puiseux expansion Pulation square Pullback Pullback (category theory) Pullback (differential geometry) Pullback attractor Pullback bundle Pumping lemma Pumping lemma for context-free languages Pumping lemma for regular languages Puppe sequence Pure function Pure mathematics Pure subgroup Pure submodule Pure type system Purification of quantum state Push-relabel maximum flow algorithm Pushforward Pushforward (differential) Pushforward (homology) Pushforward measure Pushout (category theory) Pyramid Pyramid (geometry) Pyramidal number Pyritohedron Pythagoras tree Pythagorean addition Pythagorean comma Pythagorean expectation Pythagorean means Pythagorean prime Pythagorean quadruple Pythagorean theorem Pythagorean trigonometric identity Pythagorean triple Pythagorean tuning P²-irreducible Pépin's test Pólya Prize Pólya Prize (LMS) Pólya Prize (SIAM) Pólya conjecture Pólya enumeration theorem Pólya-Vinogradov inequality Q factor Q test Q-Pochhammer symbol Q-Q plot Q-Vandermonde identity Q-analog Q-analysis Q-derivative Q-difference polynomial Q-exponential Q-statistic Q-systems Q-theta function Q.E.D. 
theorem Type (model theory) Type I and type II errors Type inhabitation problem Type theory Type-1 Gumbel distribution Type-2 Gumbel distribution Typed lambda calculus Typical set Typographical Number Theory Typographical conventions in mathematical formulae U-duality U-quadratic distribution U-statistic UCT Mathematics Competition UV fixed point Ugly duckling theorem Ulam numbers Ulam spiral Ultraconnected space Ultrafilter Ultrafinitism Ultrahyperbolic wave equation Ultralimit Ultrametric space Ultraparallel theorem Ultraproduct Ultrastrong topology Ultraweak topology Umbilic torus Umbilical point Umbral calculus Umbrella sampling Unary coding Unary function Unary numeral system Unary operation Unbiased estimation of standard deviation Unbounded system Uncertainty Uncertainty principle Uncertainty quantification Uncle Petros and Goldbach's Conjecture Unconditional convergence Uncorrelated Uncorrelated asymmetry Uncountable set Undecidable Undecidable problem Undecimal Undergraduate Texts in Mathematics Undulating number Unduloid Unexpected hanging paradox Unfoldable cardinal Unfolding Unicode Geometric Shapes Unicode numerals Unicoherent Unicursal hexagram Unification Uniform 1 k2 polytope Uniform 2 k1 polytope Uniform Polychora Project Uniform absolute continuity Uniform absolute-convergence Uniform algebra Uniform antiprismatic prism Uniform boundedness Uniform boundedness principle Uniform coloring Uniform continuity Uniform convergence Uniform distribution (continuous) Uniform distribution (discrete) Uniform great rhombicosidodecahedron Uniform great rhombicuboctahedron Uniform integrability Uniform isomorphism Uniform norm Uniform polychoron Uniform polyhedron Uniform polyhedron compound Uniform polyteron Uniform polytope Uniform price auction Uniform property Uniform space Uniform tessellation Uniform theory of diffraction Uniform tiling Uniform tilings in hyperbolic plane Uniform tree Uniform-cost search Uniformizable space Uniformization Uniformization (set theory) Uniformization theorem Uniformly Cauchy sequence Uniformly connected space Uniformly convex space Uniformly hyperfinite algebra Uniformly most powerful test Unifying theories in mathematics Unimodal function Unimodular Unimodular form Unimodular lattice Unimodular matrix Unimodular polynomial matrix Union (set theory) Union of Czech mathematicians and physicists Union-closed sets conjecture Unipotent Unique factorization domain Unique negative dimension Unique prime Unique sink orientation Uniquely colorable graph Uniqueness quantification Uniqueness theorem Unistochastic matrix Unit (ring theory) Unit circle Unit cube Unit disk Unit disk graph Unit distance graph Unit fraction Unit function Unit interval Unit measure Unit propagation Unit ring Unit root Unit root test Unit sphere Unit square Unit tangent bundle Unit vector Unit-weighted regression Unital Unitarian trick Unitary divisor Unitary equivalence Unitary group Unitary matrix Unitary method Unitary operator Unitary perfect number Unitary representation Unitary transformation United Kingdom Mathematics Trust United States of America Mathematical Olympiad United States of America Mathematical Talent Search Unitized risk Units conversion by factor-label Unity amplitude Univalent function Univariate Univariate distribution Universal C*-algebra Universal algebra Universal algebraic geometry Universal approximation theorem Universal bundle Universal coefficient theorem Universal composability Universal enveloping algebra Universal graph Universal instantiation 
Universal property Universal quantification Universal set Universality (dynamical systems) Universally Baire set Universally measurable set Universe (mathematics) University of Chicago School Mathematics Project University of Copenhagen Institute for Mathematical Sciences University of Houston College of Natural Sciences and Mathematics University of Minnesota Talented Youth Mathematics Program University of Waterloo Faculty of Mathematics Unknot Unknotting problem Unknown unknown Unlink Unreasonable ineffectiveness of mathematics Unrooted tree path lengths Unsolved Problems in Number Theory Unsolved problems in computer science Unsolved problems in mathematics Unsolved problems in statistics Unstructured grid Untouchable number Unusual number Up to Uplift modelling Upper and lower bounds Upper and lower probabilities Upper convected time derivative Upper half-plane Upper set Upper topology Upwind scheme Urelement Urn problem Urnfield culture numerals Ursell function Urysohn universal space Urysohn's lemma Usage analysis Uses of trigonometry Utility maximization problem Utilization Utilization distribution Utm theorem Utpala VC dimension VEGAS algorithm Vacuous truth Vague topology Valentin Vornicu Validity (statistics) Valuation (algebra) Valuation (logic) Valuation (mathematics) Valuation (measure theory) Valuation of options Valuation ring Valuative criterion Value (mathematics) Value at risk Value distribution theory of holomorphic functions Vampire number Van Aubel's theorem Van Deemter's equation Van Hiele levels Van Kampen diagram Van Wijngaarden transformation Van der Corput sequence Van der Pol oscillator Van der Waerden notation Van der Waerden number Van der Waerden test Van der Waerden's theorem Vandermonde matrix Vandermonde polynomial Vandermonde's identity Vandiver's conjecture Vanish at infinity Vanishing cycle Vantieghems theorem Varadhan's lemma Variable Variable rules analysis Variable structure system Variable-length code Variable-order Bayesian network Variable-order Markov model Variance Variance inflation factor Variance reduction Variance-gamma distribution Variance-to-mean ratio Variation (game tree) Variation ratio Variational Bayesian methods Variational Monte Carlo Variational inequality Variational integrator Variational method (quantum mechanics) Variational perturbation theory Variational principle Variational vector field Variety (universal algebra) Varifold Varignon's theorem Variogram Vasishtha Siddhanta Vaughan's identity Vaught conjecture Vaught's test Veblen function Veblen ordinal Vector Analysis Vector Laplacian Vector Laplacian/Proofs Vector area Vector autoregression Vector bundle Vector bundles on algebraic curves Vector calculus Vector calculus identities Vector field Vector field reconstruction Vector fields in cylindrical and spherical coordinates Vector fields on spheres Vector flow Vector measure Vector notation Vector operator Vector potential Vector projection Vector quadruple product Vector quantization Vector soliton Vector space Vector spaces without fields Vector spherical harmonics Vector-valued differential form Vector-valued function Vectorial Mechanics Vectorization (mathematics) Vectors in three-dimensional space Vedic Mathematics (book) Vedic square Velocity Venn diagram Verbal arithmetic Verdier duality Verhoeff algorithm Verifiable random function Verlet integration Verma module Veronese surface Versine Versor Vertex (curve) Vertex (geometry) Vertex (graph theory) Vertex angle Vertex arrangement Vertex configuration Vertex cover 
problem Vertex cycle cover Vertex enumeration problem Vertex figure Vertex operator algebra Vertex separator Vertex-transitive graph Vertical (angles) Vertical bundle Vertical direction Vertical exaggeration Vertical tangent Vertical translation Vesica piscis Vibrating string Vibrations of a circular drum Vickrey auction Vicsek fractal Vieth-Muller circle Vietoris–Begle mapping theorem Vietoris–Rips complex Vigenère cipher Vigesimal Villarceau circles Vinculum (symbol) Vinogradov's theorem Virasoro algebra Virial theorem Virtual displacement Virtual knot Virtual manipulatives for mathematics Virtual work Virtually Virtually Haken conjecture Virtually fibered conjecture Viscosity solution Visibility (geometry) Visibility graph Visibility graph analysis Visibility polygon Visual Calculus Visual hull Viswanath's constant Vitale's random Brunn-Minkowski inequality Vitali convergence theorem Vitali covering lemma Vitali set Vitali–Hahn–Saks theorem Viterbi algorithm Viviani's curve Viviani's theorem Vizing's conjecture Vizing's planar graph conjecture Viète's formula Viète's formulas Vlaamse Wiskunde Olympiade Voigt notation Voigt profile Voltage graph Volterra integral equation Volterra operator Volterra series Volterra space Volterra's function Volume Volume and surface elements in different co-ordinate systems Volume entropy Volume form Volume integral Volume mesh Volumetric flux Von Mangoldt function Von Mises distribution Von Mises–Fisher distribution Von Neumann algebra Von Neumann bicommutant theorem Von Neumann cardinal assignment Von Neumann conjecture Von Neumann neighborhood Von Neumann paradox Von Neumann regular ring Von Neumann stability analysis Von Neumann universe Von Neumann's inequality Von Neumann's theorem Von Neumann–Bernays–Gödel set theory Von Staudt–Clausen theorem Vopěnka's principle Vorlesungen über Zahlentheorie Voronoi diagram Voronoi tessellation Vortex Vortex dynamics Vortical Vorticity equation Voting paradox Voxel Vuong's closeness test Vysochanskiï-Petunin inequality WKB approximation Wadge hierarchy Wagstaff prime Wald test Wald's equation Wald-Wolfowitz runs test Waldhausen category Wall-Sun-Sun prime Wallenius' noncentral hypergeometric distribution Wallis product Wallman compactification Wallpaper group Walrasian auction Walsh code Walsh function Walsh matrix Wandering set Wang B-machine Wang and Landau algorithm Wang tile Wannier function Ware Tetralogy Waring's prime number conjecture Waring's problem Warnsdorff's algorithm Warped geometry Warsaw School of Mathematics Wasserstein metric Watchman route problem Water, gas, and electricity Waterfall chart Waterman polyhedron Watt's curve Watts and Strogatz model Wave Wave equation Wave front set Wave vector Wavelet Wavelet compression Wavelet modulation Wavelet packet decomposition Wavelet series Wavelet transform Wave–particle duality Weaire-Phelan structure Weak Hausdorff space Weak coloring Weak convergence Weak convergence (Hilbert space) Weak derivative Weak equivalence Weak formulation Weak generative capacity Weak interpretability Weak n-category Weak operator topology Weak order of permutations Weak solution Weak topology Weak topology (polar topology) Weakly NP-complete Weakly additive Weakly compact Weakly compact cardinal Weakly contractible Weakly harmonic function Weakly hyper-Woodin cardinal Weakly measurable function Weakly normal subgroup Weakly o-minimal structure Weakly symmetric space Weber function Weber's theorem Wedderburn's little theorem Wedderburn-Etherington number Wedge (geometry) 
Wedge sum Weeks manifold Weibull distribution Weierstrass M-test Weierstrass factorization theorem Weierstrass function Weierstrass functions Weierstrass p Weierstrass point Weierstrass preparation theorem Weierstrass product inequality Weierstrass ring Weierstrass theorem Weierstrass transform Weierstrass's elliptic functions Weierstrass–Casorati theorem Weierstrass–Enneper parameterization Weighing matrix Weight (representation theory) Weight (strings) Weight function Weighted context-free grammar Weighted geometric mean Weighted matroid Weighted mean Weighted random Weighted space Weil cohomology theory Weil conjecture Weil conjecture on Tamagawa numbers Weil conjectures Weil pairing Weil reciprocity law Weil restriction Weil's criterion Weil–Châtelet group Weinberg-Witten theorem Weingarten equations Weinstein conjecture Weird number Weitzenböck identity Weitzenböck's inequality Welch's t test Welch-Satterthwaite equation Well-behaved Well-defined Well-formed formula Well-founded relation Well-order Well-ordering principle Well-ordering theorem Well-pointed category Well-posed problem Well-quasi-ordering Welsh mathematicians Welsh numerals Wess-Zumino-Witten model Weyl algebra Weyl character formula Weyl differintegral Weyl group Weyl quantization Weyl tensor Weyl transformation Weyl's criterion Weyl's inequality Weyl's lemma (Laplace equation) Weyl's postulate Weyl's theorem Weyl-Berry conjecture Weyl-Brauer matrices Weyl–Schouten theorem What Is Mathematics? Wheat and chessboard problem Wheel factorization Wheel graph Wheel theory Where Mathematics Comes From Whewell equation Whipple formulae Whipple's index White Light (novel) White noise White test Whitehead Prize Whitehead conjecture Whitehead group Whitehead link Whitehead manifold Whitehead problem Whitehead product Whitehead theorem Whitehead torsion Whitehead's lemma Whitehead's point-free geometry Whitney conditions Whitney covering lemma Whitney disk Whitney embedding theorem Whitney extension theorem Whitney immersion theorem Whitney umbrella Whittaker and Watson Whittaker model Whittaker–Shannon interpolation formula Whole number Wick product Wick rotation Wiedersehen pair Wieferich at Home Wieferich pair Wieferich prime Wieferich's theorem Wiener deconvolution Wiener equation Wiener filter Wiener index Wiener process Wiener sausage Wiener's tauberian theorem Wiener-Ikehara theorem Wiener–Hopf method Wiener–Khinchin theorem Wightman axioms Wigner D-matrix Wigner distribution function Wigner quasi-probability distribution Wigner semicircle distribution Wigner's classification Wigner's theorem Wigner-Eckart theorem Wigner-d'Espagnat inequality Wijsman convergence Wike's law of low odd primes Wilcoxon signed-rank test Wild knot Wildfire modeling Wilf–Zeilberger pair Wilkinson's polynomial Wilks Memorial Award Wilks' lambda distribution Will Rogers phenomenon William Floyd (mathematician) William Lowell Putnam Mathematical Competition Williams' p + 1 algorithm Willmore conjecture Willmore energy Wilson polynomials Wilson prime Wilson quotient Wilson's theorem Winding number Window function Wine/water mixing problem Wing shape optimization Winning Ways for your Mathematical Plays Winsorising Winsorized mean Wireworld Wirtinger inequality (2-forms) Wirtinger's inequality Wirtinger's inequality for functions Wishart distribution Witch of Agnesi Without loss of generality Witt algebra Witt group Witt vector Witt's theorem Wittgenstein's rod Wold decomposition Wold's theorem Wolf Prize Wolf Prize in Mathematics Wolf summation Wolfe 
conditions Wolfram Demonstrations Project Wolfram's 2-state 3-symbol Turing machine Wolstenholme's theorem Womersley number Woo circles Woodall number Woodbury matrix identity Woodin cardinal Worcester County Mathematics League Word (group theory) Word metric Word problem Word problem (computability) Word problem (mathematics education) Word problem (mathematics) Word problem for groups Word wrap World Mathematics Challenge World Maths Day World line Wormhole Wrangler (University of Cambridge) Wreath product Wright Omega function Writer invariant Writhe Wronskian Wu's method Wyatt Earp effect Wyckoff positions Wythoff construction Wythoff symbol Wythoff's game XTR Xiaolin Wu's line algorithm X–Y–Z matrix Y-homeomorphism Y-intercept Y-Δ transform Yamabe flow Yamabe invariant Yamabe problem Yamartino method Yangian Yang–Baxter equation Yang–Mills existence and mass gap Yang–Mills theory Yarrow algorithm Yates' correction for continuity Yavanajataka Yaw angle Yaw, pitch, and roll Year zero Yetter-Drinfeld category Yoneda lemma Youden's J statistic Young measure Young symmetrizer Young tableau Young's inequality Young's lattice Young–Laplace equation Yuktibhasa Yule–Simon distribution Z function Z notation Z* theorem Z-channel (information theory) Z-factor Z-group Z-matrix (mathematics) Z-order (curve) Z-test Z-transform ZAMM — Journal of Applied Mathematics & Mechanics ZJ theorem Zadoff–Chu sequence Zahorski theorem Zakai equation Zakharov system Zakharov–Schulman system Zappa-Szép product Zarankiewicz problem Zariski geometry Zariski surface Zariski tangent space Zariski topology Zariski's main theorem Zaslavskii map Zassenhaus group Zassenhaus lemma Zech's logarithms Zeckendorf's theorem Zeisel number Zeitschrift für Angewandte Mathematik und Physik Zeller's congruence Zeno's paradoxes Zentralblatt MATH Zenzizenzizenzic Zermelo set theory Zermelo–Fraenkel set theory Zernike polynomials Zero (complex analysis) Zero dagger Zero divisor Zero element Zero game Zero map Zero matrix Zero mode Zero morphism Zero ring Zero set Zero sharp Zero suppression Zero-crossing rate Zero-dimensional space Zero-knowledge proof Zero-one law Zero-order hold Zero-product property Zero-sum problem Zerosumfree monoid Zeroth Zeroth-order logic Zeta constant Zeta distribution Zeta function Zeta function regularization Zeta function universality ZetaGrid Zhegalkin polynomial Zhou Bi Suan Jing Zig-zag lemma Ziggurat algorithm Zigzag code Zionts-Wallenius method Zipf's law Zipf–Mandelbrot law Zipper theorem Znám's problem Zocchihedron Zoll surface Zolotarev's lemma Zonal and meridional Zonal and poloidal Zonal polynomial Zonal spherical function Zonohedron Zorn's lemma Zsigmondy's theorem Zubov's method Zuckerman functor Zuckerman number Éléments de géométrie algébrique Étale cohomology Étale fundamental group Étale morphism Āgamaḍambara Āryabhaṭa numeration Čech cohomology Čech-to-derived functor spectral sequence Černy conjecture Łukasiewicz logic Γ-convergence Δ-hyperbolic space Ε-net Ε-quadratic form Ε₀ Θ (set theory) Μ operator Μ-recursive function Σ-compact space Σ-finite measure Ω-consistent theory Ω-logic −0 (number) −1 (number) −40 (number)
Capacitance

From Wikipedia, the free encyclopedia

Common symbols: C. SI unit: farad.

Capacitance is the ability of a body to store an electrical charge. Any object that can be electrically charged exhibits capacitance. A common form of energy storage device is a parallel-plate capacitor. In a parallel-plate capacitor, capacitance is directly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. If the charges on the plates are +q and −q respectively, and V gives the voltage between the plates, then the capacitance C is given by

C = \frac{q}{V},

which gives the voltage/current relationship

I(t) = C \frac{\mathrm{d}V(t)}{\mathrm{d}t}.

The capacitance is a function only of the geometry of the conductors (including their separation) and the permittivity of the dielectric. For many dielectrics the permittivity, and thus the capacitance, is independent of the potential difference between the conductors and of the total charge on them.

The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates.[1] Historically, a farad was regarded as an inconveniently large unit, both electrically and physically. Its subdivisions were invariably used, namely the microfarad, nanofarad and picofarad. More recently, technology has advanced such that capacitors of 1 farad and greater can be constructed in a structure little larger than a coin battery (so-called 'supercapacitors'). Such capacitors are principally used for energy storage, replacing more traditional batteries.

The energy (measured in joules) stored in a capacitor is equal to the work done to charge it. Consider a capacitor of capacitance C, holding a charge +q on one plate and −q on the other. Moving a small element of charge dq from one plate to the other against the potential difference V = q/C requires the work dW:

\mathrm{d}W = \frac{q}{C}\,\mathrm{d}q

where W is the work measured in joules, q is the charge measured in coulombs and C is the capacitance, measured in farads. The energy stored in a capacitor is found by integrating this equation. Starting with an uncharged capacitance (q = 0) and moving charge from one plate to the other until the plates have charge +Q and −Q requires the work W:

W_\text{charging} = \int_0^Q \frac{q}{C} \, \mathrm{d}q = \frac{1}{2}\frac{Q^2}{C} = \frac{1}{2}QV = \frac{1}{2}CV^2 = W_\text{stored}.

Main article: Capacitor

The capacitance of the majority of capacitors used in electronic circuits is generally several orders of magnitude smaller than the farad. The most common subunits of capacitance in use today are the microfarad (µF), nanofarad (nF), picofarad (pF), and, in microcircuits, femtofarad (fF). However, specially made supercapacitors can be much larger (as much as hundreds of farads), and parasitic capacitive elements can be less than a femtofarad. Capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. A qualitative explanation can be given as follows. Once a positive charge is put onto a conductor, this charge creates an electric field that repels any further positive charge being moved onto the conductor; i.e., it increases the necessary voltage.
But if there is another conductor nearby with a negative charge on it, the electric field of the positive conductor repelling the second positive charge is weakened (the second positive charge also feels the attractive force of the negative charge). So, thanks to the second, negatively charged conductor, it becomes easier to put a positive charge on the already positively charged first conductor, and vice versa; i.e., the necessary voltage is lowered.

As a quantitative example, consider the capacitance of a parallel-plate capacitor constructed of two parallel plates, both of area A, separated by a distance d:

C=\varepsilon_r\varepsilon_0\frac{A}{d}

where C is the capacitance, in farads; A is the area of overlap of the two plates, in square meters; εr is the relative static permittivity (sometimes called the dielectric constant) of the material between the plates (for a vacuum, εr = 1); ε0 is the electric constant (ε0 ≈ 8.854×10−12 F m−1); and d is the separation between the plates, in meters.

Capacitance is proportional to the area of overlap and inversely proportional to the separation between conducting sheets. The closer the sheets are to each other, the greater the capacitance. The equation is a good approximation if d is small compared to the other dimensions of the plates, so that the field in the capacitor over most of its area is uniform and the so-called fringing field around the periphery provides only a small contribution. In CGS units the equation has the form:[2]

C=\varepsilon_r\frac{A}{4\pi d}

where C in this case has the units of length. Combining the SI equation for capacitance with the above equation for the energy stored in a capacitance, for a flat-plate capacitor the energy stored is:

W_\text{stored} = \frac{1}{2} C V^2 = \frac{1}{2} \varepsilon_{r}\varepsilon_{0} \frac{A}{d} V^2

where W is the energy, in joules; C is the capacitance, in farads; and V is the voltage, in volts.

Voltage-dependent capacitors

For some dielectrics the permittivity changes with the applied electric field (ferroelectric materials, for example), so the capacitance is voltage dependent. In that case a small change of voltage dV adds the charge

dQ=C(V) \, dV

where the voltage dependence of capacitance, C(V), stems from the field, which in a large-area parallel-plate device is given by E = V/d. This field polarizes the dielectric; in the case of a ferroelectric, this polarization is a nonlinear S-shaped function of the field, which, for a large-area parallel-plate device, translates into a capacitance that is a nonlinear function of the voltage causing the field.[3][4] The stored charge is then

Q=\int_0^V C(V') \, dV'

which agrees with Q = CV only when C is voltage independent. The work needed to raise the voltage by dV is

dW = Q \, dV =\left[ \int_0^V dV' \, C(V') \right] dV ,

so the total stored energy, after interchanging the order of integration, is

W = \int_0^V dV'' \int_0^{V''} dV' \, C(V') = \int_0^V dV' \int_{V'}^{V} dV'' \, C(V') = \int_0^V dV' \left(V-V'\right) C(V') .

Another example of voltage-dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but from a voltage dependence of the spacing between the charges on the two sides of the capacitor.[6] This effect is intentionally exploited in diode-like devices known as varicaps.

Frequency-dependent capacitors

If a capacitor is driven with a time-varying voltage that changes rapidly enough, then the polarization of the dielectric cannot follow the signal. As an example of the origin of this mechanism, the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so, as the frequency of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes.
A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility):

\boldsymbol{D}(t)=\varepsilon_0\int_{-\infty}^t \varepsilon_r (t-t')\, \boldsymbol E (t')\, dt' ,

indicating the lag in response by the time dependence of εr, calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function.[7][8] The integral extends over the entire past history up to the present time. A Fourier transform in time then results in:

\boldsymbol D(\omega) = \varepsilon_0 \varepsilon_r(\omega)\,\boldsymbol E (\omega) .

In an impedance measurement at angular frequency ω, the current delivered to the capacitor is

I(\omega) = j\omega Q(\omega) = j\omega \oint_{\Sigma} \boldsymbol D (\boldsymbol r , \omega)\cdot d \boldsymbol{\Sigma} =\left[ G(\omega) + j \omega C(\omega)\right] V(\omega) = \frac {V(\omega)}{Z(\omega)} ,

where G(ω) is a conductance accounting for losses, and the complex dielectric constant follows as

\varepsilon_r(\omega) = \varepsilon_r'(\omega) - j \varepsilon_r''(\omega) = \frac{1}{j\omega Z(\omega) C_0} = \frac{C_{\text{cmplx}}(\omega)}{C_0} ,

where C0 is the capacitance of the same device with vacuum as its dielectric.

Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency.[12] Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy.[13]

Capacitance matrix

The discussion above is limited to the case of two conducting plates, although of arbitrary size and shape. The definition C = Q/V still holds for a single plate given a charge, in which case the field lines produced by that charge terminate as if the plate were at the center of an oppositely charged sphere at infinity. C = Q/V does not apply when there are more than two charged plates, or when the net charge on the two plates is non-zero. To handle this case, Maxwell introduced his coefficients of potential. If three plates are given charges Q_1, Q_2, Q_3, then the voltage of plate 1 is given by

V_1 = P_{11}Q_1 + P_{12} Q_2 + P_{13}Q_3 ,

and similarly for the other voltages. Hermann von Helmholtz and Sir William Thomson showed that the coefficients of potential are symmetric, so that P_{12}=P_{21}, etc. Thus the system can be described by a collection of coefficients known as the elastance matrix or reciprocal capacitance matrix, which is defined as:

P_{ij} = \frac{\partial V_{i}}{\partial Q_{j}}

From this, the mutual capacitance C_{m} between two objects can be defined[17] by solving for the total charge Q and using C_{m}=Q/V:

C_m = \frac{1}{(P_{11} + P_{22})-(P_{12} + P_{21})}

Since no actual device holds perfectly equal and opposite charges on each of the two "plates", it is the mutual capacitance that is reported on capacitors. The collection of coefficients

C_{ij} =\frac{\partial Q_{i}}{\partial V_{j}}

is known as the capacitance matrix,[18][19] and is the inverse of the elastance matrix.

Self-capacitance

Besides the mutual capacitance between two conductors, an isolated conductor also has a self-capacitance: the amount of charge that must be added to raise its electric potential by one unit. For a conducting sphere of radius R the self-capacitance is

C=4\pi\varepsilon_0 R .

The inter-winding capacitance of a coil, which changes its impedance at high frequencies and gives rise to parallel resonance, is variously called self-capacitance,[23] stray capacitance, or parasitic capacitance.
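Returning to the capacitance matrix above, a short numerical sketch may help (plain Python with NumPy; the two-conductor elastance values are invented for illustration, not measured data). It builds an elastance matrix, inverts it to obtain the capacitance matrix, and evaluates the mutual capacitance C_m:

import numpy as np

# Hypothetical elastance (reciprocal capacitance) matrix for two conductors,
# in 1/farad; it is symmetric (P12 = P21), as Helmholtz and Thomson showed.
P = np.array([[2.0e11, 0.5e11],
              [0.5e11, 3.0e11]])

# The capacitance matrix is the inverse of the elastance matrix.
C = np.linalg.inv(P)

# Mutual capacitance: C_m = 1 / ((P11 + P22) - (P12 + P21))
C_m = 1.0 / ((P[0, 0] + P[1, 1]) - (P[0, 1] + P[1, 0]))

print(C)    # capacitance matrix, in farads
print(C_m)  # -> 2.5e-12 F for these example numbers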
Stray capacitance

Main article: Parasitic capacitance

Any two adjacent conductors can be considered a capacitor, though the capacitance is small unless the conductors are close together for long distances or over a large area. This (often unwanted) effect is termed "stray capacitance". Stray capacitance can allow signals to leak between otherwise isolated circuits (an effect called crosstalk), and it can be a limiting factor for proper functioning of circuits at high frequency.

Stray capacitance is often encountered in amplifier circuits in the form of feedback capacitance that interconnects the input and output nodes (both defined relative to a common ground). It is often convenient for analytical purposes to replace this capacitance with a combination of one input-to-ground capacitance and one output-to-ground capacitance; the original configuration, including the input-to-output capacitance, is often referred to as a pi-configuration. Miller's theorem can be used to effect this replacement: it states that, if the gain ratio of two nodes is 1/K, then an impedance of Z connecting the two nodes can be replaced with a Z/(1 − K) impedance between the first node and ground and a KZ/(K − 1) impedance between the second node and ground. Since impedance varies inversely with capacitance, the internode capacitance, C, is replaced by a capacitance of KC from input to ground and a capacitance of (K − 1)C/K from output to ground. When the input-to-output gain is very large, the equivalent input-to-ground impedance is very small while the output-to-ground impedance is essentially equal to the original (input-to-output) impedance.

Capacitance of simple systems

Calculating the capacitance of a system amounts to solving the Laplace equation \nabla^2\varphi = 0 with a constant potential φ on the surface of the conductors. This is trivial in cases with high symmetry. There is no solution in terms of elementary functions in more complicated cases. For quasi-two-dimensional situations, analytic functions may be used to map different geometries to each other. See also Schwarz–Christoffel mapping.
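The table below collects closed-form results for a number of simple geometries. As a quick numerical sketch of three of them (plain Python; the dimensions and permittivities are arbitrary illustrative values):

import math

EPS0 = 8.854e-12  # electric constant, in F/m

def parallel_plate(eps_r, area, d):
    # C = eps_r * eps0 * A / d
    return eps_r * EPS0 * area / d

def coaxial(eps_r, length, r_inner, r_outer):
    # C = 2*pi*eps*l / ln(R2/R1)
    return 2.0 * math.pi * eps_r * EPS0 * length / math.log(r_outer / r_inner)

def isolated_sphere(radius):
    # C = 4*pi*eps0*a
    return 4.0 * math.pi * EPS0 * radius

print(parallel_plate(1.0, 1e-4, 1e-4))     # ~8.9 pF: 1 cm^2 plates, 0.1 mm vacuum gap
print(coaxial(2.3, 1.0, 0.5e-3, 1.75e-3))  # ~0.1 nF: 1 m of polyethylene-filled coax
print(isolated_sphere(0.10))               # ~11 pF: sphere of 10 cm radius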
Capacitance of simple systems:

Parallel-plate capacitor: C = \varepsilon A /d (ε: permittivity)

Coaxial cable: C = \frac{2\pi \varepsilon l}{\ln \left( R_{2}/R_{1}\right) } (ε: permittivity)

Pair of parallel wires:[24] C = \frac{\pi \varepsilon l}{\operatorname{arcosh}\left( \frac{d}{2a}\right) }=\frac{\pi \varepsilon l}{\ln \left( \frac{d}{2a}+\sqrt{\frac{d^{2}}{4a^{2}}-1}\right) } (a: wire radius; d: distance, d > 2a; l: wire length)

Wire parallel to wall:[24] C = \frac{2\pi \varepsilon l}{\operatorname{arcosh}\left( \frac{d}{a}\right) }=\frac{2\pi \varepsilon l}{\ln \left( \frac{d}{a}+\sqrt{\frac{d^{2}}{a^{2}}-1}\right) } (a: wire radius; d: distance, d > a; l: wire length)

Two parallel coplanar strips:[25] C = \varepsilon l \frac{ K\left( \sqrt{1-k^{2}} \right) }{ K\left(k \right) } (d: distance; w_1, w_2: strip widths; k_m = d/(2w_m+d); k^2 = k_1 k_2; K: complete elliptic integral of the first kind; l: length)

Concentric spheres: C = \frac{4\pi \varepsilon}{\frac{1}{R_1}-\frac{1}{R_2}} (ε: permittivity)

Two spheres, equal radius:[26][27] C = 2\pi \varepsilon a\sum_{n=1}^{\infty }\frac{\sinh \left( \ln \left( D+\sqrt{D^2-1}\right) \right) }{\sinh \left( n\ln \left( D+\sqrt{ D^2-1}\right) \right) } =2\pi \varepsilon a\left\{ 1+\frac{1}{2D}+\frac{1}{4D^2}+\frac{1}{8D^{3}}+\frac{1}{8D^{4}}+\frac{3}{32D^{5}}+O\left( \frac{1}{D^{6}}\right) \right\} =2\pi \varepsilon a\left\{ \ln 2+\gamma -\frac{1}{2}\ln \left( \frac{d}{a}-2\right) +O\left( \frac{d}{a}-2\right) \right\} (a: radius; d: distance, d > 2a; D = d/2a; γ: Euler's constant)

Sphere in front of wall:[26] C = 4\pi \varepsilon a\sum_{n=1}^{\infty }\frac{\sinh \left( \ln \left( D+\sqrt{D^{2}-1}\right) \right) }{\sinh \left( n\ln \left( D+\sqrt{ D^{2}-1}\right) \right) } (a: radius; d: distance, d > a; D = d/a)

Sphere: C = 4\pi \varepsilon a (a: radius)

Circular disc:[28] C = 8\varepsilon a (a: radius)

Thin straight wire, finite length:[29][30][31] C = \frac{2\pi \varepsilon l}{\Lambda }\left\{ 1+\frac{1}{\Lambda }\left( 1-\ln 2\right) +\frac{1}{\Lambda ^{2}}\left[ 1+\left( 1-\ln 2\right) ^{2}-\frac{\pi ^{2}}{12}\right] +O\left(\frac{1}{\Lambda ^{3}}\right) \right\} (a: wire radius; l: length; Λ = ln(l/a))

Capacitance of nanoscale systems

The capacitance of nanoscale dielectric capacitors such as quantum dots may differ from conventional formulations of larger capacitors. In particular, the electrostatic potential difference experienced by electrons in conventional capacitors is spatially well-defined and fixed by the shape and size of the metallic electrodes, in addition to the statistically large number of electrons present in conventional capacitors. In nanoscale capacitors, however, the electrostatic potentials experienced by electrons are determined by the number and locations of all electrons that contribute to the electronic properties of the device. In such devices the number of electrons may be very small; however, the resulting spatial distribution of equipotential surfaces within the device is exceedingly complex.
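The subsections below derive a "quantum capacitance" from the chemical potential μ(N) = U(N) − U(N−1). As a toy numerical sketch anticipating those formulas, assume (purely for illustration, not as a general result) the constant-interaction model U(N) = (Ne)²/2C for an island of classical capacitance C; then C_Q = e²/(μ(N+1) − μ(N)) recovers C exactly:

E_CHARGE = 1.602e-19  # elementary charge, in coulombs

def U(n, c):
    # Assumed constant-interaction model: electrostatic energy of n
    # electrons on an island of capacitance c (illustration only).
    return (n * E_CHARGE) ** 2 / (2.0 * c)

def mu(n, c):
    # Thermodynamic chemical potential mu(N) = U(N) - U(N-1)
    return U(n, c) - U(n - 1, c)

c_island = 1e-18  # 1 aF, a typical quantum-dot scale
n = 5
c_q = E_CHARGE ** 2 / (mu(n + 1, c_island) - mu(n, c_island))
print(c_q)  # -> 1e-18, i.e. the quantum capacitance equals C in this model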
Single-electron devices

The capacitance of a connected, or "closed", single-electron device is twice the capacitance of an unconnected, or "open", single-electron device.[32] This fact may be traced more fundamentally to the energy stored in the single-electron device, whose "direct polarization" interaction energy may be equally divided into the interaction of the electron with the polarized charge on the device itself due to the presence of the electron and the amount of potential energy required to form the polarized charge on the device (the interaction of charges in the device's dielectric material with the potential due to the electron).[33]

Few-electron devices

The derivation of a "quantum capacitance" of a few-electron device involves the thermodynamic chemical potential of an N-particle system,

\mu(N) = U(N) - U(N-1) ,

whose energy terms may be obtained as solutions of the Schrödinger equation. The definition of capacitance,

{1\over C} \equiv {\Delta V\over\Delta Q} ,

with the potential difference

\Delta V = {\Delta \mu \over e} = {\mu(N+\Delta N) -\mu(N) \over e} ,

may be applied to the device with the addition or removal of individual electrons, \Delta N = 1 and \Delta Q = e. Then

C_Q(N) = {e^2\over\mu(N+1)-\mu(N)} = {e^2 \over E(N)}

is the "quantum capacitance" of the device.[34] This expression of "quantum capacitance" may be written as

C_Q(N) = {e^2\over U(N)} ,

which differs from the conventional expression described in the introduction, where W_\text{stored} = U is the stored electrostatic potential energy,

C = {Q^2\over 2U} ,

by a factor of 1/2, with Q = Ne. However, within the framework of purely classical electrostatic interactions, the appearance of the factor of 1/2 is the result of integration in the conventional formulation,

W_\text{charging} = U = \int_0^Q \frac{q}{C} \, \mathrm{d}q ,

which is appropriate since \mathrm{d}q \to 0 for systems involving either many electrons or metallic electrodes, but in few-electron systems \mathrm{d}q \to \Delta Q = e. The integral generally becomes a summation. One may trivially combine the expressions of capacitance and electrostatic interaction energy, Q = CV and U = QV, respectively, to obtain

C = {Q\over V} = Q {Q \over U} = {Q^2 \over U} ,

which is similar to the quantum capacitance. A more rigorous derivation is reported in the literature.[35] In particular, to circumvent the mathematical challenges of the spatially complex equipotential surfaces within the device, an average electrostatic potential experienced by each electron is utilized in the derivation.

The apparent mathematical difference is understood more fundamentally from the fact that the energy, U(N), of an isolated device (self-capacitance) is twice that stored in a "connected" device in the lower limit N=1. As N grows large, U(N)\to U.[33] Thus, the general expression of capacitance is

C(N) = {(Ne)^2 \over U(N)} .

In nanoscale devices such as quantum dots, the "capacitor" is often an isolated, or partially isolated, component within the device. The primary differences between nanoscale capacitors and macroscopic (conventional) capacitors are the number of excess electrons (charge carriers, or electrons, that contribute to the device's electronic behavior) and the shape and size of the metallic electrodes. In nanoscale devices, nanowires consisting of metal atoms typically do not exhibit the same conductive properties as their macroscopic, or bulk material, counterparts.

References

1. ^
2. ^ The Physics Problem Solver, 1986, Google books link.
3. ^ Carlos Paz de Araujo, Ramamoorthy Ramesh, George W. Taylor (eds.) (2001). Science and Technology of Integrated Ferroelectrics: Selected Papers from Eleven Years of the Proceedings of the International Symposium on Integrated Ferroelectrics. CRC Press. Figure 2, p. 504. ISBN 90-5699-704-1.
6. ^ Simon M. Sze, Kwok K. Ng (2006). Physics of Semiconductor Devices (3rd ed.). Wiley. Figure 25, p. 121. ISBN 0-470-06830-2.
7. ^ Gabriele Giuliani, Giovanni Vignale (2005). Quantum Theory of the Electron Liquid. Cambridge University Press. p. 111. ISBN 0-521-82112-6.
9. ^ Horst Czichos, Tetsuya Saito, Leslie Smith (2006). Springer Handbook of Materials Measurement Methods. Springer. p. 475. ISBN 3-540-20785-6.
11. ^ J. Obrzut, A. Anopchenko and R. Nozaki, "Broadband Permittivity Measurements of High Dielectric Constant Films", Proceedings of the IEEE: Instrumentation and Measurement Technology Conference, 2005, pp. 1350–1353, 16–19 May 2005, Ottawa. ISBN 0-7803-8879-8. doi:10.1109/IMTC.2005.1604368.
12. ^ Dieter K. Schroder (2006). Semiconductor Material and Device Characterization (3rd ed.). Wiley. p. 347 ff. ISBN 0-471-73906-5.
13. ^ Dieter K. Schroder (2006). Semiconductor Material and Device Characterization (3rd ed.). Wiley. p. 270 ff. ISBN 0-471-73906-5.
14. ^ Simon M. Sze, Kwok K. Ng (2006). Physics of Semiconductor Devices (3rd ed.). Wiley. p. 217. ISBN 0-470-06830-2.
15. ^ Safa O. Kasap, Peter Capper (2006). Springer Handbook of Electronic and Photonic Materials. Springer. Figure 20.22, p. 425.
16. ^ P. Y. Yu and Manuel Cardona (2001). Fundamentals of Semiconductors (3rd ed.). Springer. §6.6 Modulation Spectroscopy. ISBN 3-540-25470-6.
17. ^ Jackson, John David (1999). Classical Electrodynamics (3rd ed.). USA: John Wiley & Sons, Inc. p. 43. ISBN 978-0-471-30932-1.
18. ^ Maxwell, James (1873). "3". A Treatise on Electricity and Magnetism, Volume 1. Clarendon Press. pp. 88 ff.
19. ^ "Capacitance". Retrieved 20 September 2010.
20. ^ William D. Greason (1992). Electrostatic Discharge in Electronics. Research Studies Press. p. 48. ISBN 978-0-86380-136-5. Retrieved 4 December 2011.
21. ^ Lecture notes; University of New South Wales.
22. ^ Tipler, Paul; Mosca, Gene (2004). Physics for Scientists and Engineers (5th ed.). Macmillan. p. 752. ISBN 978-0-7167-0810-0.
23. ^ Massarini, A.; Kazimierczuk, M. K. (1997). "Self-capacitance of inductors". IEEE Transactions on Power Electronics 12 (4): 671–676. doi:10.1109/63.602562 (example of use of the term self-capacitance).
24. ^ a b Jackson, J. D. (1975). Classical Electrodynamics. Wiley. p. 80.
25. ^ Binns; Lawrenson (1973). Analysis and Computation of Electric and Magnetic Field Problems. Pergamon Press. ISBN 978-0-08-016638-4.
26. ^ a b Maxwell, J. C. (1873). A Treatise on Electricity and Magnetism. Dover. pp. 266 ff. ISBN 0-486-60637-6.
27. ^ Rawlins, A. D. (1985). "Note on the Capacitance of Two Closely Separated Spheres". IMA Journal of Applied Mathematics 34 (1): 119–120. doi:10.1093/imamat/34.1.119.
28. ^ Jackson, J. D. (1975). Classical Electrodynamics. Wiley. p. 128, problem 3.3.
29. ^ Maxwell, J. C. (1878). "On the electrical capacity of a long narrow cylinder and of a disk of sensible thickness". Proc. London Math. Soc. IX: 94–101. doi:10.1112/plms/s1-9.1.94.
30. ^ Vainshtein, L. A. (1962). "Static boundary problems for a hollow cylinder of finite length. III Approximate formulas". Zh. Tekh. Fiz. 32: 1165–1173.
31. ^ Jackson, J. D. (2000). "Charge density on thin straight wire, revisited". Am. J. Phys. 68 (9): 789–799. Bibcode:2000AmJPh..68..789J. doi:10.1119/1.1302908.
32. ^ Raphael Tsu (2011). Superlattice to Nanoelectronics. Elsevier. pp. 312–315. ISBN 978-0-08-096813-1.
33. ^ a b T. LaFave Jr. (2011). "Discrete charge dielectric model of electrostatic energy". J. Electrostatics 69 (6): 414–418. doi:10.1016/j.elstat.2011.06.006. Retrieved 12 February 2014.
34. ^ G. J. Iafrate, K. Hess, J. B. Krieger, and M. Macucci (1995). "Capacitive nature of atomic-sized structures". Phys. Rev. B 52 (15). doi:10.1103/physrevb.52.10737.
35. ^ T. LaFave Jr. and R. Tsu (March–April 2008). "Capacitance: A property of nanoscale materials based on spatial symmetry of discrete electrons". Microelectronics Journal 39 (3–4): 617–623. doi:10.1016/j.mejo.2007.07.105. Retrieved 12 February 2014.

Further reading

• Tipler, Paul (1998). Physics for Scientists and Engineers: Vol. 2: Electricity and Magnetism, Light (4th ed.). W. H. Freeman. ISBN 1-57259-492-6.
• Saslow, Wayne M. (2002). Electricity, Magnetism, and Light. Thomson Learning. ISBN 0-12-619455-6. See Chapter 8, and especially pp. 255–259 for coefficients of potential.
Schrödinger, Erwin

Erwin Schrödinger (1887-1961) was an Austrian physicist known for his mathematical development of wave mechanics (1926), a form of quantum mechanics (see quantum theory), and for his formulation of the wave equation (the Schrödinger equation), the most widely used mathematical tool of modern quantum theory. For this work he shared the 1933 Nobel Prize in Physics with P. A. M. Dirac. His book What is Life? (1945) has inspired many subsequent efforts to explain biological evolution, especially the evolution of complex systems, in terms of the Second Law of Thermodynamics and the concepts of entropy and negative entropy.

Cleveland, C. (2006). Schrödinger, Erwin. Retrieved from http://www.eoearth.org/view/article/155892
Interpretations of quantum mechanics

From Wikipedia, the free encyclopedia

In quantum mechanics, the mathematical formalism is very difficult to interpret physically. However, there are many ideas about the interpretation and meaning of quantum mechanics. There are no facts to prove any interpretation over the others, but some are more widely accepted than others.

Background material

Main article: Quantum mechanics

The main ideas of quantum mechanics are the postulates of Schrödinger and Heisenberg. The Schrödinger equation is a partial differential equation that describes the wavefunction of an object.[1] The equation can be given by

i\hbar \frac{\partial \Psi}{\partial t}=\frac{-\hbar^2}{2m}\nabla^2 \Psi + V(x)\Psi

The basic meaning of this equation is that a particle, such as an electron, is not just a point-like particle, but also a type of wave. The philosophical implications will be explored shortly. Another fundamental of quantum mechanics is the Heisenberg uncertainty principle.[1] This is the strange idea that the position and the momentum of an object cannot both be known exactly at the same time. The greater the certainty of the position of an object, the less the certainty of the momentum of the object. The mathematical formulation of this is given by

\Delta x\,\Delta p \geq \frac{\hbar}{2}

This can be generalized further by stating that

\Delta X_1\,\Delta X_2 \geq \frac{\left|\langle [X_1, X_2] \rangle\right|}{2}

where [X_1, X_2] is the commutator of X_1 and X_2. This law also gives rise to an uncertainty relation between energy and time, which can be expressed in the same way as the relation between momentum and position.

Probability waves

Another important fact of quantum mechanics is that the electron behaves in a very strange way. At first, no one really knew what the wavefunction meant physically. Max Born, a theoretical physicist, explained that the wavefunction is a probability wave. In other words, wherever the wave is denser, that is where the particle is most likely to be found, but it won't necessarily be found there. The way to find the probability P_{[a,b]} of the position of the particle lying in the region a<x<b is given by

\int_a^b |\Psi (x, t)|^2\, dx=P_{[a,b]}

For example, if P_{[a,b]} is equal to 0.5, then there is a 50% chance of finding the particle within that region. This shows us that the location of a particle is probabilistic; one can never say that the particle will definitely be found at a certain point in space, but rather, one can only give the probability of finding the particle within a region.

Copenhagen interpretation

The most well-accepted interpretation of quantum mechanics is the Copenhagen interpretation. This interpretation builds upon the probability-wave notion, but brings in a radical new idea called the superposition principle. The best way to explain this principle is to show it mathematically. If the functions \Psi_1, \Psi_2, \Psi_3, \ldots, \Psi_n are solutions of the Schrödinger equation, then the superposition of those wavefunctions is also a solution, i.e.,

\Theta = c_1\Psi_1 + c_2\Psi_2 + c_3\Psi_3 + \cdots + c_n\Psi_n

where \Theta is the superposition of the various wavefunctions. This implies that a particle occupies every possible wavefunction it can, and therefore occupies more than one position at the same time. In other words, a particle exists in at least two different positions simultaneously.
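As a numerical check of the probability rule and the superposition principle above, the following sketch (plain Python with NumPy; the particle-in-a-box eigenfunctions are a standard textbook choice used here purely for illustration) normalizes a two-state superposition and evaluates P_{[a,b]}:

import numpy as np

L = 1.0                        # box width, arbitrary units
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def eigenstate(n):
    # Stationary states of a particle in a box of width L
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# The superposition Theta = c1*Psi1 + c2*Psi2 is also a solution, by linearity
c1 = c2 = 1.0 / np.sqrt(2.0)
theta = c1 * eigenstate(1) + c2 * eigenstate(2)

def prob(a, b):
    # P_[a,b] = integral over [a,b] of |Theta|^2 dx (simple Riemann sum)
    mask = (x >= a) & (x <= b)
    return np.sum(np.abs(theta[mask]) ** 2) * dx

print(prob(0.0, L))      # ~1.0: the total probability is normalized
print(prob(0.0, L / 2))  # the chance of finding the particle in the left half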
When an observer comes and actually measures the position of the particle, something called wavefunction collapse occurs. In simple terms: when there is no observation or observer, a particle occupies many positions simultaneously; when an observation takes place, the wavefunction collapses and the particle exists in only one position.

Many-worlds interpretation

The many-worlds interpretation is by far the most fantastic interpretation of quantum mechanics. This interpretation says that rather than the wavefunction collapsing, each possibility actually occurs, but in separate universes. This means that the universes branch off for each possibility.[2]

Quantum determinism

The most realistic interpretation, presented by Albert Einstein himself, states that the outcome of a seemingly random event is predetermined. So, rather than a particle existing as a probability wave, this interpretation says that the particle exists in only one position, and we merely perceive it as a probability. This idea is much less popular, but nonetheless worth mentioning.

Which one is right?

So, of the three main interpretations of quantum mechanics, which one is correct? Physicists seem to think that the Copenhagen interpretation is the most likely, but no one knows for sure.

References
Magic number (physics)

From Wikipedia, the free encyclopedia

Graph of isotope stability.

In nuclear physics, a magic number is a number of nucleons (either protons or neutrons) such that they are arranged into complete shells within the atomic nucleus. The seven most widely recognised magic numbers as of 2007 are 2, 8, 20, 28, 50, 82, and 126 (sequence A018226 in OEIS). Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula and are hence more stable against nuclear decay.

The unusual stability of isotopes having magic numbers means that transuranium elements can be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers (as of 2007, the longest-lived known isotope among the elements 110 to 120 lasts only 12 minutes, and the next longest-lived only 22 seconds). Large isotopes with magic numbers of nucleons are said to exist in an island of stability. Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed. Before this was realized, higher magic numbers, such as 184 and 258, were predicted based on simple calculations that assumed spherical shapes. It is now believed that the sequence of spherical magic numbers cannot be extended in this way.

Origin of the term

According to Steven A. Moszkowski (a student of Maria Goeppert-Mayer), the term "magic number" was coined by Eugene Wigner: "Wigner, too, believed in the liquid drop model, but he recognized, from the work of Maria Mayer, the very strong evidence for the closed shells. It seemed a little like magic to him, and that is how the words 'Magic Numbers' were coined."[1]

Double magic

Nuclei which have neutron number and proton (atomic) number each equal to one of the magic numbers are called "double magic", and are especially stable against decay. Examples of double magic isotopes include helium-4 (4He), oxygen-16 (16O), calcium-40 (40Ca), calcium-48 (48Ca), nickel-48 (48Ni) and lead-208 (208Pb). Double-magic effects may allow the existence of stable isotopes which otherwise would not have been expected. An example is calcium-40 (40Ca), with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 (48Ca) and nickel-48 (48Ni) are double magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a light element, but like calcium-40, it is made stable by being double magic. Similarly, nickel-48, discovered in 1999, is the most proton-rich isotope known beyond helium-3.[2]

Magic number shell effects are seen in ordinary abundances of elements: it is no accident that helium-4 (4He) is among the most abundant (and stable) nuclei in the universe[3] and that lead-208 (208Pb) is the heaviest stable nuclide. Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, the nuclides tin-100 (100Sn) and tin-132 (132Sn) are interesting examples of doubly magic isotopes of tin that are unstable; however, they represent endpoints beyond which stability drops off rapidly.
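A minimal sketch of this bookkeeping (plain Python; the nuclide list repeats a few of the examples from the text above):

MAGIC = {2, 8, 20, 28, 50, 82, 126}

def magicity(protons, neutrons):
    # Classify a nuclide by whether its proton and/or neutron number is magic
    z_magic = protons in MAGIC
    n_magic = neutrons in MAGIC
    if z_magic and n_magic:
        return "double magic"
    if z_magic or n_magic:
        return "magic"
    return "not magic"

for name, z, n in [("helium-4", 2, 2), ("oxygen-16", 8, 8),
                   ("calcium-48", 20, 28), ("nickel-48", 28, 20),
                   ("tin-100", 50, 50), ("lead-208", 82, 126)]:
    print(name, magicity(z, n))  # each of these prints "double magic"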
In December 2006, hassium-270 (270Hs), with 108 protons and 162 neutrons, was discovered by an international team of scientists led by the Technical University of Munich; it has the unusually long half-life of 22 seconds. Hassium-270 evidently forms part of an island of stability, and may even be double magic.[4][5]

Derivation

Magic numbers are typically obtained by empirical studies; however, if the form of the nuclear potential is known, then the Schrödinger equation can be solved for the motion of nucleons and the energy levels determined. Nuclear shells are said to occur when the separation between energy levels is significantly greater than the local mean separation. In the shell model for the nucleus, magic numbers are the numbers of nucleons at which a shell is filled. For instance, the magic number 8 occurs when the 1s1/2, 1p3/2, 1p1/2 energy levels are filled, as there is a large energy gap between the 1p1/2 and the next highest 1d5/2 energy levels. The empirical values can be reproduced using the classical shell model with a strong spin-orbit interaction.

The atomic analog of nuclear magic numbers is the set of electron numbers leading to discontinuities in the ionization energy. These occur for the noble gases helium, neon, argon, krypton, xenon, and radon. Hence, the "atomic magic numbers" are 2, 10, 18, 36, 54, and 86. In 2007, Jozsef Garai from Florida International University proposed a mathematical formula describing the periodicity of the nucleus in the periodic system based on the tetrahedron.[6] Recently, an alternative explanation of magic numbers has been given in terms of symmetry considerations. Based on the fractional extension of the standard rotation group, the ground state properties (including the magic numbers) for metallic clusters and nuclei were simultaneously determined analytically. A specific potential term is not necessary in this model.[7][8]

See also

References

1. ^ This reminiscence, from a talk by Moszkowski presented at the APS meeting in Indianapolis, May 4, 1996, is mentioned by Georges Audi in the paper "The History of Nuclidic Masses and of their Evaluation" (arXiv 2006).
3. ^ Nave, C. R. "The Most Tightly Bound Nuclei". HyperPhysics.
4. ^ Mason Inman (2006-12-14). "A Nuclear Magic Trick". Physical Review Focus. Retrieved 2006-12-25.
5. ^ Dvorak, J.; Brüchle, W.; Chelnokov, M.; Dressler, R.; Düllmann, Ch. E.; Eberhardt, K.; Gorshkov, V.; Jäger, E. et al. (2006). "Doubly Magic Nucleus 270Hs162". Physical Review Letters 97. Bibcode:2006PhRvL..97x2501D. doi:10.1103/PhysRevLett.97.242501. PMID 17280272.
6. ^ Garai, Jozsef (2007). "Mathematical formulas describing the sequences of the periodic table". International Journal of Quantum Chemistry 108: 667. Bibcode:2008IJQC..108..667G. doi:10.1002/qua.21529.
7. ^ Herrmann, Richard (2010). "Higher dimensional mixed fractional rotation groups as a basis for dynamic symmetries generating the spectrum of the deformed Nilsson-oscillator". Physica A 389: 693. arXiv:0806.2300. Bibcode:2010PhyA..389..693H. doi:10.1016/j.physa.2009.11.016.
8. ^ Herrmann, Richard (2010). "Fractional phase transition in medium size metal clusters and some remarks on magic numbers in gravitationally and weakly bound clusters". Physica A 389: 3307. arXiv:0907.1953. Bibcode:2010PhyA..389.3307H. doi:10.1016/j.physa.2010.03.033.

External links

• Nave, C. R. "Shell Model of Nucleus". HyperPhysics.
References

1. ^ This reminiscence, from a talk by Moszkowski presented at the APS meeting in Indianapolis, May 4, 1996, is mentioned by Georges Audi in the paper "The History of Nuclidic Masses and of their Evaluation" (arXiv, 2006).
3. ^ Nave, C. R. "The Most Tightly Bound Nuclei". HyperPhysics.
4. ^ Mason Inman (2006-12-14). "A Nuclear Magic Trick". Physical Review Focus. Retrieved 2006-12-25.
5. ^ Dvorak, J.; Brüchle, W.; Chelnokov, M.; Dressler, R.; Düllmann, Ch. E.; Eberhardt, K.; Gorshkov, V.; Jäger, E. et al. (2006). "Doubly Magic Nucleus ^{270}_{108}Hs_{162}". Physical Review Letters 97: 242501. Bibcode:2006PhRvL..97x2501D. doi:10.1103/PhysRevLett.97.242501. PMID 17280272.
6. ^ Garai, Jozsef (2007). "Mathematical formulas describing the sequences of the periodic table". International Journal of Quantum Chemistry 108: 667. Bibcode:2008IJQC..108..667G. doi:10.1002/qua.21529.
7. ^ Herrmann, Richard (2010). "Higher dimensional mixed fractional rotation groups as a basis for dynamic symmetries generating the spectrum of the deformed Nilsson oscillator". Physica A 389: 693. arXiv:0806.2300. Bibcode:2010PhyA..389..693H. doi:10.1016/j.physa.2009.11.016.
8. ^ Herrmann, Richard (2010). "Fractional phase transition in medium size metal clusters and some remarks on magic numbers in gravitationally and weakly bound clusters". Physica A 389: 3307. arXiv:0907.1953. Bibcode:2010PhyA..389.3307H. doi:10.1016/j.physa.2010.03.033.

External links

• Nave, C. R. "Shell Model of Nucleus". HyperPhysics.
• OEIS: A018226. Magic numbers: atoms with one of these numbers of protons or neutrons in their nuclei are considered to be stable.
• Scerri, Eric (2007). The Periodic Table: Its Story and Its Significance. Oxford University Press. ISBN 0-19-530573-6. See chapter 10 especially.
Born-Oppenheimer approximation

In quantum chemistry, the computation of the energy and wavefunction of an average-size molecule is a formidable task that is alleviated by the Born-Oppenheimer (BO) approximation. For instance, the benzene molecule consists of 12 nuclei and 42 electrons. The time-independent Schrödinger equation, which must be solved to obtain the energy and molecular wavefunction of this molecule, is a partial differential eigenvalue equation in 162 variables—the spatial coordinates of the electrons and the nuclei. The BO approximation makes it possible to compute the wavefunction in two less formidable, consecutive steps. This approximation was proposed in the early days of quantum mechanics by Born and Oppenheimer (1927) and is still indispensable in quantum chemistry.

In basic terms, it allows the wavefunction of a molecule to be broken into its electronic and nuclear (vibrational, rotational) components:
$$ \Psi_\mathrm{total} = \psi_\mathrm{electronic} \times \psi_\mathrm{nuclear}. $$

In the first step of the BO approximation the electronic Schrödinger equation is solved, yielding the wavefunction $\psi_\mathrm{electronic}$, which depends on the electrons only. For benzene this wavefunction depends on 126 electronic coordinates. During this solution the nuclei are fixed in a certain configuration, very often the equilibrium configuration. If the effects of the quantum mechanical nuclear motion are to be studied, for instance because a vibrational spectrum is required, this electronic computation must be repeated for many different nuclear configurations. The set of electronic energies thus computed becomes a function of the nuclear coordinates. In the second step of the BO approximation this function serves as a potential in a Schrödinger equation containing only the nuclei—for benzene an equation in 36 variables.

The success of the BO approximation is due to the high ratio between nuclear and electronic masses. The approximation is an important tool of quantum chemistry; without it only the lightest molecule, H2, could be handled, and all computations of molecular wavefunctions for larger molecules make use of it. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations.

The electronic energies, constituting the nuclear potential, consist of kinetic energies, interelectronic repulsions and electron-nuclear attractions. In a handwaving manner the nuclear potential is an averaged electron-nuclear attraction. The BO approximation rests on the fact that the inertia of the electrons is negligible in comparison with that of the nuclei to which they are bound.

Short description

The Born-Oppenheimer (BO) approximation is ubiquitous in quantum chemical calculations of molecular wavefunctions. It consists of two steps.

In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator $T_\mathrm{n}$ is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian $H_\mathrm{e}$ the nuclear positions enter as parameters. The electron-nucleus interactions are not removed and the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped nuclei approximation.) The electronic Schrödinger equation
$$ H_\mathrm{e}(\mathbf{r},\mathbf{R})\,\chi(\mathbf{r},\mathbf{R}) = E_\mathrm{e}\,\chi(\mathbf{r},\mathbf{R}) $$
is solved (out of necessity approximately). The quantity $\mathbf{r}$ stands for all electronic coordinates and $\mathbf{R}$ for all nuclear coordinates.
Obviously, the electronic energy eigenvalue $E_\mathrm{e}$ depends on the chosen positions $\mathbf{R}$ of the nuclei. Varying these positions $\mathbf{R}$ in small steps and repeatedly solving the electronic Schrödinger equation, one obtains $E_\mathrm{e}$ as a function of $\mathbf{R}$. This is the potential energy surface (PES): $E_\mathrm{e}(\mathbf{R})$. Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation and the PES itself is called an adiabatic surface.

In the second step of the BO approximation the nuclear kinetic energy $T_\mathrm{n}$ (containing partial derivatives with respect to the components of $\mathbf{R}$) is reintroduced and the Schrödinger equation for the nuclear motion
$$ \left[T_\mathrm{n} + E_\mathrm{e}(\mathbf{R})\right] \phi(\mathbf{R}) = E\,\phi(\mathbf{R}) $$
is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue $E$ is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.
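The two steps can be imitated numerically in one dimension. In the sketch below (an illustration only, not from the article: a Morse curve stands in for the electronic energy $E_\mathrm{e}(R)$ that step 1 would produce by repeated electronic-structure calculations, and rough H2-like parameters in atomic units are assumed), step 2 diagonalizes $T_\mathrm{n} + E_\mathrm{e}(R)$ on a grid:

```python
# One-dimensional caricature of the two BO steps (all parameters are
# illustrative assumptions: Morse PES, H2-like reduced mass, atomic units).
import numpy as np

def electronic_energy(R, D=0.17, a=1.0, Re=1.4):
    """Stand-in for step 1: the PES E_e(R) a quantum-chemistry code would give."""
    return D * (1.0 - np.exp(-a * (R - Re)))**2

# Step 2: solve [T_n + E_e(R)] phi(R) = E phi(R) by finite differences.
mu = 918.0                                   # reduced mass of H2 in a.u.
R = np.linspace(0.5, 5.0, 1500)
h = R[1] - R[0]
V = electronic_energy(R)
# T_n = -(1/2mu) d^2/dR^2 with a three-point stencil (hbar = 1):
diag = 1.0 / (mu * h**2) + V
off = -np.ones(len(R) - 1) / (2.0 * mu * h**2)
E, phi = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
print("lowest vibrational energies (hartree):", E[:3])
```

The lowest eigenvalues come out near the harmonic estimate of half the frequency, with frequency a*sqrt(2D/mu), i.e. around 0.01 hartree for these assumed parameters.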
Derivation of the Born-Oppenheimer approximation

It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms. It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated:
$$ E_0(\mathbf{R}) \ll E_1(\mathbf{R}) \ll E_2(\mathbf{R}) \ll \cdots \quad \text{for all } \mathbf{R}. $$

We start from the exact non-relativistic, time-independent molecular Hamiltonian
$$ H = H_\mathrm{e} + T_\mathrm{n}, $$
with
$$ H_\mathrm{e} = -\sum_{i}\frac{1}{2}\nabla_i^2 - \sum_{i,A}\frac{Z_A}{r_{iA}} + \sum_{i>j}\frac{1}{r_{ij}} + \sum_{A>B}\frac{Z_A Z_B}{R_{AB}} \quad\text{and}\quad T_\mathrm{n} = -\sum_{A}\frac{1}{2M_A}\nabla_A^2. $$
The position vectors $\mathbf{r} \equiv \{\mathbf{r}_i\}$ of the electrons and the position vectors $\mathbf{R} \equiv \{\mathbf{R}_A = (R_{Ax}, R_{Ay}, R_{Az})\}$ of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as $r_{iA} \equiv |\mathbf{r}_i - \mathbf{R}_A|$ (distance between electron $i$ and nucleus $A$), and similar definitions hold for $r_{ij}$ and $R_{AB}$. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the Coulomb interactions between the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see Planck's constant, the dielectric constant of the vacuum, the electronic charge, or the electronic mass in this formula. The only constants explicitly entering the formula are $Z_A$ and $M_A$—the atomic number and mass of nucleus $A$. It is useful to introduce the nuclear momenta and to rewrite the nuclear kinetic energy operator as follows:
$$ T_\mathrm{n} = \sum_{A} \sum_{\alpha=x,y,z} \frac{P_{A\alpha} P_{A\alpha}}{2M_A} \quad\text{with}\quad P_{A\alpha} = -i\frac{\partial}{\partial R_{A\alpha}}. $$

Suppose we have $K$ electronic eigenfunctions $\chi_k(\mathbf{r};\mathbf{R})$ of $H_\mathrm{e}$, that is, we have solved
$$ H_\mathrm{e}\,\chi_k(\mathbf{r};\mathbf{R}) = E_k(\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R}) \quad\text{for}\quad k=1,\ldots,K. $$
The electronic wave functions $\chi_k$ will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions $\chi_k$ on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although $\chi_k$ is a real-valued function of $\mathbf{r}$, its functional form depends on $\mathbf{R}$. For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, $\chi_k$ is an MO given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of $\mathbf{R}$, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO $\chi_k$.

We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider
$$ P_{A\alpha}\,\chi_k(\mathbf{r};\mathbf{R}) = -i\,\frac{\partial \chi_k(\mathbf{r};\mathbf{R})}{\partial R_{A\alpha}} \quad\text{for}\quad \alpha=x,y,z, $$
which in general will not be zero.

The total wave function $\Psi(\mathbf{R},\mathbf{r})$ is expanded in terms of the $\chi_k(\mathbf{r};\mathbf{R})$:
$$ \Psi(\mathbf{R},\mathbf{r}) = \sum_{k=1}^K \chi_k(\mathbf{r};\mathbf{R})\,\phi_k(\mathbf{R}), \qquad \langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})} = \delta_{k'k}, $$
where the subscript $(\mathbf{r})$ indicates that the integration, implied by the bra-ket notation, is over electronic coordinates only. By definition, the matrix with general element
$$ \big(\mathbb{H}_\mathrm{e}(\mathbf{R})\big)_{k'k} \equiv \langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, H_\mathrm{e} \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})} = \delta_{k'k}\, E_k(\mathbf{R}) $$
is diagonal. After multiplication by the real function $\chi_{k'}(\mathbf{r};\mathbf{R})$ from the left and integration over the electronic coordinates $\mathbf{r}$, the total Schrödinger equation
$$ H\,\Psi(\mathbf{R},\mathbf{r}) = E\,\Psi(\mathbf{R},\mathbf{r}) $$
is turned into a set of $K$ coupled eigenvalue equations depending on nuclear coordinates only:
$$ \left[\mathbb{H}_\mathrm{n}(\mathbf{R}) + \mathbb{H}_\mathrm{e}(\mathbf{R})\right] \boldsymbol{\phi}(\mathbf{R}) = E\,\boldsymbol{\phi}(\mathbf{R}). $$
The column vector $\boldsymbol{\phi}(\mathbf{R})$ has elements $\phi_k(\mathbf{R})$, $k=1,\ldots,K$. The matrix $\mathbb{H}_\mathrm{e}(\mathbf{R})$ is diagonal, and the nuclear Hamilton matrix is non-diagonal with the following off-diagonal (vibronic coupling) terms:
$$ \big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, T_\mathrm{n} \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})}. $$
The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born-Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal, and creates coupling terms between the adiabatic PESs on the off-diagonal.
If we can neglect the off-diagonal elements, the equations will uncouple and simplify drastically. In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of $T_\mathrm{n}$ as
$$ \big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \delta_{k'k}\, T_\mathrm{n} + \sum_{A,\alpha}\frac{1}{M_A}\, \langle \chi_{k'} | (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})}\, P_{A\alpha} + \langle \chi_{k'} | (T_\mathrm{n}\chi_k) \rangle_{(\mathbf{r})}. $$
The diagonal ($k'=k$) matrix elements $\langle \chi_{k} | (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})}$ of the operator $P_{A\alpha}$ vanish, because this operator is Hermitian and purely imaginary. The off-diagonal matrix elements satisfy
$$ \langle \chi_{k'} | (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})} = \frac{\langle \chi_{k'} | [P_{A\alpha}, H_\mathrm{e}] | \chi_k \rangle_{(\mathbf{r})}}{E_k(\mathbf{R}) - E_{k'}(\mathbf{R})}. $$
The matrix element in the numerator is
$$ \langle \chi_{k'} | [P_{A\alpha}, H_\mathrm{e}] | \chi_k \rangle_{(\mathbf{r})} = i Z_A \sum_i \left\langle \chi_{k'} \left| \frac{(\mathbf{r}_{iA})_\alpha}{r_{iA}^3} \right| \chi_k \right\rangle_{(\mathbf{r})} \quad\text{with}\quad \mathbf{r}_{iA} \equiv \mathbf{r}_i - \mathbf{R}_A. $$
The matrix element of the one-electron operator appearing on the right-hand side is finite. When the two surfaces come close, $E_k(\mathbf{R}) \approx E_{k'}(\mathbf{R})$, the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation.

Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of $P_{A\alpha}$ is effectively zero. The third term on the right-hand side of the expression for the matrix element of $T_\mathrm{n}$ (the Born-Oppenheimer diagonal correction) can approximately be written as the matrix of $P_{A\alpha}$ squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled set of nuclear motion equations results:
$$ \left[T_\mathrm{n} + E_k(\mathbf{R})\right] \phi_k(\mathbf{R}) = E\,\phi_k(\mathbf{R}) \quad\text{for}\quad k=1,\ldots,K, $$
which are the uncoupled nuclear equations of the second step of the BO approximation discussed above.

We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born-Oppenheimer approximation breaks down and one must fall back on the coupled equations. Usually one then invokes the diabatic approximation.
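The $1/(E_k - E_{k'})$ growth of the coupling is easy to visualize in a toy model. The following sketch (an assumed Landau-Zener-type two-state diabatic Hamiltonian, not taken from the article) computes the adiabatic surfaces and the derivative coupling between the two electronic states by finite differences:

```python
# Two-state toy model (assumed, for illustration): H(R) = [[a R, c], [c, -a R]].
import numpy as np

a, c = 1.0, 0.05                 # diabatic slope and (constant) diabatic coupling
R = np.linspace(-1.0, 1.0, 4001)
dR = R[1] - R[0]

vecs, gaps = [], []
for r in R:
    E, U = np.linalg.eigh(np.array([[a * r, c], [c, -a * r]]))
    # Fix the arbitrary eigenvector signs so chi_k(R) is continuous in R:
    if vecs:
        for k in (0, 1):
            if U[:, k] @ vecs[-1][:, k] < 0:
                U[:, k] *= -1
    vecs.append(U)
    gaps.append(E[1] - E[0])

# <chi_1 | d chi_2 / dR> by central differences of the eigenvectors:
tau = [vecs[i][:, 0] @ (vecs[i + 1][:, 1] - vecs[i - 1][:, 1]) / (2 * dR)
       for i in range(1, len(R) - 1)]
print("smallest gap:", min(gaps))                        # = 2c, at the crossing R = 0
print("largest |coupling|:", max(abs(t) for t in tau))   # ~ a/(2c)
```

The coupling peaks exactly where the adiabatic gap is smallest, and shrinking the diabatic coupling c makes the peak higher and narrower - the numerical face of the breakdown discussed above.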
Historical note

The Born-Oppenheimer approximation is named after M. Born and R. Oppenheimer, who wrote a paper [Annalen der Physik, vol. 84, pp. 457-484 (1927)] entitled Zur Quantentheorie der Moleküle (On the Quantum Theory of Molecules). This paper describes the separation of electronic motion, nuclear vibrations, and molecular rotation. Somebody who expects to find in this paper the BO approximation—as it is explained above and in most modern textbooks—will be in for a surprise. The reason is that the presentation of the BO approximation is well hidden in Taylor expansions (in terms of internal and external nuclear coordinates) of (i) electronic wave functions, (ii) potential energy surfaces and (iii) nuclear kinetic energy terms. Internal coordinates are the relative positions of the nuclei in the molecular equilibrium and their displacements (vibrations) from equilibrium. External coordinates are the position of the center of mass and the orientation of the molecule. The Taylor expansions complicate the theory and make the derivations very hard to follow. Moreover, knowing that the proper separation of vibrations and rotations was not achieved in this paper, but only 8 years later [by C. Eckart, Physical Review, vol. 46, pp. 383-387 (1935)] (see Eckart conditions), one is not very much motivated to invest much effort into understanding the work by Born and Oppenheimer, however famous it may be. Although the article still collects many citations each year, it is safe to say that it is not read anymore (except perhaps by historians of science).
Consider the Schrödinger equation for a particle in one dimension, where we have at least one boundary in the system (say the boundary is at $x=0$ and we are solving for $x>0$). Sometimes we want to impose a boundary condition in which the wavefunction vanishes (Dirichlet boundary condition). We can indirectly impose this boundary condition through the physical assumptions by using an infinite potential outside the relevant region (like in the "particle in a box" model): $$ V(x<0)=\infty ~~~~\Longrightarrow ~~~~\psi(x=0)=0 $$ What if we want to impose a boundary condition in which the derivative of the wavefunction vanishes (Neumann boundary condition)? $$ ? ~~~~\Longrightarrow ~~~~ \left. \frac{\partial \psi}{\partial x} \right|_{x=0} = 0 $$ Is there a way to choose the potential, or maybe change something else in the Hamiltonian, in order to indirectly impose this boundary condition? P.S. This question is not of great practical importance; it is more of a curiosity.

2 Answers

By mirroring $V(x)$ about $x = 0$, i.e., by setting $V(-x) = V(x)$, the wavefunction can be taken to be even or odd. The even solutions satisfy the Neumann boundary condition, since the derivative of an even function is odd and thus zero at $x = 0$.

Comments:
– It is true that the even solutions satisfy the desired boundary condition, but unfortunately there will always be odd solutions as well, which do not satisfy it. I'm looking for an approach that imposes the boundary condition for all solutions. – Joe, Jun 19 '12 at 17:48
– @Joe, that is impossible; such a material would behave as a perfect reflector with zero London depth. Not even superconductors have zero London depth. – lurscher, Jun 19 '12 at 17:59
– @lurscher - That's an interesting point. The infinite potential that is used for a particle in a box is also impossible. In reality there will always be some penetration depth, and the wavefunction will completely vanish only when we take the (unphysical) limit of an infinite potential. In an analogous manner, there's no reason there can't be some parameter such that, when a certain (unphysical) limit is taken, the material turns into a perfect reflector, even though in reality there will always be some finite London length. – Joe, Jun 19 '12 at 19:47

It's not really a physical condition, but when one is doing R-matrix theory for scattering (which is arguably not for the faint of heart) the condition does come up. One resource I saw recently is a lecture by Hugo van der Hart (go to the slide titled Basic Applications).
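Not part of the thread, but the mirroring answer is easy to verify numerically. In this sketch the square well and all grid parameters are arbitrary illustrative choices: diagonalize the Hamiltonian for a symmetric potential and inspect $\psi(0)$ and $\psi'(0)$ for the lowest states.

```python
# Check that even eigenstates of a mirrored potential obey psi'(0) = 0
# (hbar = m = 1; the square well is an arbitrary illustrative choice).
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)           # x[1000] = 0
h = x[1] - x[0]
V = np.where(np.abs(x) < 3.0, -1.0, 0.0)     # V(-x) = V(x)

# H = -(1/2) d^2/dx^2 + V with a three-point stencil:
diag = 1.0 / h**2 + V
off = -0.5 / h**2 * np.ones(len(x) - 1)
E, psi = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))

i0 = len(x) // 2                             # grid index of x = 0
for n in range(4):
    dpsi0 = (psi[i0 + 1, n] - psi[i0 - 1, n]) / (2 * h)
    kind = "even" if abs(psi[i0, n]) > 1e-8 else "odd"
    print(f"state {n}: E = {E[n]:+.4f}  ({kind})  psi'(0) = {dpsi0:+.1e}")
```

The even states indeed come out with $\psi'(0) \approx 0$, while the odd states have $\psi(0) = 0$ instead, which is exactly the objection raised in the comments: the mirrored potential enforces the Neumann condition only on half of the spectrum.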
Wednesday, November 30, 2005

Great Chinese Firewall

Hui Chen helps us - and our Chinese friends - to circumvent the so-called Great Chinese Firewall. If you've ever heard someone in the People's Republic of China who had problems with the access to this blog, you may write her or him the following URL of the mirror. Right now, the Reference Frame has absolutely no intent to influence the social or political atmosphere in the world's most populous country, which is our Chinese friends' internal affair. We assure all the glorious leaders that they have no reason to be afraid of anything. Greetings from the People's Republic of Cambridge, LM

Closed string vacuum solved analytically

The most interesting paper on the arXiv today is a paper by an author who is currently at CERN, Switzerland. Around 1998, Ashoke Sen conjectured that the open string tachyon may get a vev that corresponds to a complete annihilation of the D-brane onto which the open strings were attached. This predicts the energy density of this minimum of the tachyonic potential: it must be equal to minus the tension of the D-brane that has annihilated. In the framework of boundary string field theory (BSFT), this fact has been proved by Kutasov, Marino, and Moore. Of course, there have always been almost complete physical arguments assuring us that no reasonable person had any serious doubts that Sen's conjecture - the second insight in science, after the Higgs mechanism, showing that tachyons are more than just an inconsistency - was correct. The formalism of Witten's cubic string field theory of the Chern-Simons type is however much more well-defined than boundary string field theory. People wanted to verify Sen's conjecture in this cubic string field theory, too. They could have done so numerically, and they obtained 99.9999% of the right value. Many other facts have been checked numerically, too. Many physicists also proposed various formal heuristic solutions and maps between the cubic string field theory and the boundary string field theory, but it was usually hard to give these formulae a precise meaning.

Blackberry loses round in patent dispute

News: Mike Lazaridis, the founder of the Perimeter Institute and of the company Research in Motion - which produces the e-mail mobile gadgets called BlackBerries - loses a round in a patent dispute.

Colbert report with Brian Greene

It's definitely intense fun and Brian Greene is doing a superb job. Thanks to Joe Minahan for the tip. Colbert's show is somewhat similar to Bill O'Reilly's Factor, indeed. I was told that the similarity is deliberate. Just like the Reference Frame argues that string theory is forced upon us by mother Nature in a somewhat analogous way as the evolutionary framework in biology, and much like Peter Woit believes that string theory is just like Intelligent Design, Colbert says that Occam's Razor requires us to accept the simplest explanations of everything - and the simplest explanation of the real world is that it was created by God like that: click. Occam's razor proves that Intelligent Design is better than science. That's slightly paradoxical because Stephen Colbert is the youngest among 11 children, and is therefore a walking experimental proof of M-theory: he's the M-theoretical circle at weak coupling, in fact.

Two proofs of global warming

(Via Grugoš Jotl.) That should be enough to neutralize the heretics.
Computation after 90th birthday

We just had a dinner to celebrate the Loeb lecturer, John Hopfield from Princeton, who bravely applies physical reasoning to neural networks and other models of the human brain. There have been many interesting discussions during the dinner, but I will only mention a couple of them.

Howard Georgi allowed me to understand a bit better the pretty high score of our F-group in the last homework problem set assigned in my class. Thanks, Howard, for the help. ;-) Yes, F stands for "female". Two different homework assignments have been made for the two groups, and no one has yet called the New York Times, not even Howard himself. ;-)

Incidentally, my feeling after the class in the morning was that 160 minutes is simply too little to properly teach scattering, partial wave analysis including all of its limiting cases, the Born approximation including all the kinematical relations between amplitudes, cross sections, transition rates, time-dependent perturbation theory, and examples including the Yukawa and Coulomb potentials, the spherical well, and 38 other issues. Moreover, these calculations in the old-fashioned quantum mechanics are sometimes even messier than in quantum field theory. I would probably reduce the amount of material related to these questions in the syllabus.

Tai Wu, a member of our band, shared my opinions about the Summers controversy and Noam Chomsky and other things. We chatted about the LHC, too. And no, C.S. Wu did not belong to his family.

Ursula Holliger (harp) described many things including the music critics. Of course, they're often annoying, and they became critics because they could not become musicians. Not surprisingly, the very same thing probably holds for the literary critics and the physics critics, too. ;-)

Chris Stubbs has finished reading Barton Zwiebach's book on string theory. Because he learned many things and was pleasantly surprised that the whole of modern high-energy theoretical physics is fully accessible to him, he actually thinks that every experimental physicist should learn string theory from this book. I am happy to report this experimental conclusion. :-)

Incidentally, Chu Xing, who works in a factory in Hong Kong, also studies Barton's book. Sometimes he would find it helpful to know the answers to Barton's problems in the book, as he informed me by e-mail. When I had the pleasure of talking to Dr. Simon Capelin - who may be credited with having published so many great physics books at Cambridge University Press - this afternoon, he agreed that Chu Xing should get the password to access the web page of Barton's book at the CUP's server. We will see this week whether Barton will agree. ;-) Good luck, Chu.

One of the crimes against copyright (not humankind) I committed in 1993 was to ask a librarian in Prague to xerox the superstring textbook by Green, Schwarz, and Witten for me. She did so! For 12 years, I could not sleep because I worried that the Cambridge University Press would eventually sue me. It turned out today that Dr. Capelin, who published the book, will probably forgive me this particular sin!

Norman Ramsey has told me many stories about the history of physics, his lectures on 9/11/2001 (the day of my PhD defense), the development of atomic clocks, negative temperatures of lasers, the most precise current experiments testing both special and general relativity, and his recent studies of string theory. He believes that the vacuum energy is gonna be an important question in physics for the years to come.
Another very illuminating comment from him was about how your approach to research changes when you're older than 90 years. Of course, when you're above 90, almost everything works more easily than before. Just the short-term memory is slightly worse. When you need to finish some integral - one that you otherwise do on the top of your head - with your calculator, it is sometimes a bit more difficult to remember what you want to type on your calculator. It is slightly more difficult than when you are young, e.g. about 85 years or so. Well, what an impressive person. ;-)

In the afternoon, James Wells (Michigan) was explaining his GUT constructions extending the NMSSM, but I will only mention the talk if I have really too much time tomorrow, which is unlikely.

Monday, November 28, 2005

Royal society: ban science on the web

The Royal Society - i.e. the British Academy of Sciences - has warned that "making research freely available on the internet could harm the scientific debate". It could even lower the profits of printed journals, the society predicts, especially of the non-profit journals. :-) The Royal Society is fully committed to the preservation of the reptiles. Free access to scientific results on the internet could also threaten feudalism itself and the leading role of the royal family in the world. Instead, the internet may encourage heretics. Prince Charles agrees that science and technology are dangerous. He expressed concern that economic progress is "upsetting the whole balance of nature." In another interview, he said that "if you make everything over efficient, you suck out, it seems to me, every last drop of what, up to now, has been known as culture."

Don Page & death of de Sitter

A. V. Yurov and his mirror image, V. A. Yurov, have a very provocative paper attempting to reconcile the conjectured lifetime of the KKLT de Sitter vacua calculated from the low-energy effective field theory - this lifetime is not too far from the recurrence time, so let's call it a "googolplex" - with the much shorter lifetime of our Universe, comparable to a mere "googol", that was recently advocated by Don Page in his even more provocative paper. ;-) Let me emphasize at the beginning that although I find the KKLT estimates somewhat uncertain, they are definitely much less speculative than anything I am gonna describe in this text.

Don Page argues that if our Universe approaches the de Sitter exponential cosmological expansion with the currently likely value of the vacuum energy, then its lifetime should be shorter than 10^{50} years or so. Why? Because if the lifetime were longer, then - I kid you not - most of our perceptions (such as the feelings of our brains expressed by a complicated projection operator onto a state of your brain) would usually occur because of random vacuum fluctuations.

Sunday, November 27, 2005

Discrete physics

One of the "great" ideas that are being proposed billions of times every day is the idea that the fundamental physical laws of Nature are "discrete". The world resembles a binary computer - or at least a quantum computer - we're being told very often. "Discrete physics" even has its own USENET newsgroup "sci.physics.discrete", which has fortunately been completely silent since it was created.
Various games and "types of atoms" that are supposed to produce spacetime at the Planck scale are even sold as "alternatives to string theory". I am among those who are convinced that every single proposal based on the idea that "the fundamental entities must be discrete" has so far transparently been a crackpot fantasy. What's wrong with all of them?

Both discrete and continuous mathematics matter

First of all, both discrete as well as continuous mathematical structures are important for actual reasoning and calculations in physics, and not only in physics. We just need both of these categories of tools and theorems. Many people who like to say that only one of them may be fundamental are usually the people who don't know the other set of insights well enough - or who don't know it at all. And they don't want to learn it. Instead, they want to promote a "theory" that implies that it is good if you don't learn it. In other words, they ignore at least one half of the basic math that is needed for physics.

Saturday, November 26, 2005

Holographic 300 GB disks

What is the capacity of optical disks? If they're DVDs, you can squeeze up to 4.7 GB of information on them. Imagine a very similar disk with 300 GB on it. Yes, it is more than the magnetic hard disks you have ever seen. Amazingly enough, some readers found the opportunity to argue about the previous sentence more interesting than the new fascinating technology described below - very sad. And it would read the information 10 times as fast as the DVDs. Impossible? No! One company has developed a commercially acceptable version of the holographic disks. It could be sold as early as 2006. The required physics was discovered by Dennis Gabor in the 1950s using the methods of anticipation plagiarism. More concretely, Gabor stole the insights about the holographic principle in quantum gravity from 't Hooft and Susskind and applied them in optics 40 years before 't Hooft and Susskind published it and 60 years before string theory was confirmed experimentally. Unfortunately, 300 GB is still 50 orders of magnitude less than what the area should be able to store according to the holographic principle ;-) (a back-of-the-envelope check is sketched below), but it is progress nevertheless. See other articles via the press coverage or the company's web. If you're interested, you should certainly see the WMV video or another exciting QuickTime video. PDF introductions:

Paradoxically, the holographic disks are the first ones in which not only the two-dimensional surface but also the three-dimensional bulk of the medium is used. One can record thousands of holograms on the same medium by changing the angles or frequencies. Many bits are read simultaneously. The hologram is written down by adjusting many bits in a semi-transparent two-dimensional "checkerboard" which is really called a "spatial light modulator" or LSM (also known as the "linear sigma-model") and letting two parts of a split laser beam interfere with each other to create a three-dimensional pattern within the optically sensitive plastic medium. The LSM does not differ much from some modern types of displays. When you read the data, you only use the reference beam that deflects off the medium and reconstructs a similar checkerboard image in a "detector". The disks are slightly thicker than the DVDs but have the same area. The optimists predict that these disks could eventually store up to 1,600 gigabytes of data that could be read as quickly as 15 megabytes per second.
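The promised back-of-the-envelope check of the "50 orders of magnitude" quip, as a small Python sketch (my numbers, not from the post: one side of a standard 12 cm disk and the 300 GB capacity are assumed):

```python
# Compare a 300 GB disk with the holographic bound A / (4 l_P^2) bits.
from math import pi, log10

l_planck = 1.616e-35                   # Planck length in metres
area = pi * 0.06**2                    # one side of a 12 cm disk, in m^2
bound_bits = area / (4 * l_planck**2)  # holographic bound in bits
disk_bits = 300e9 * 8                  # 300 GB in bits

print(f"holographic bound ~ 10^{log10(bound_bits):.0f} bits")
print(f"deficit ~ {log10(bound_bits / disk_bits):.0f} orders of magnitude")
```

With these assumptions the deficit comes out around 55 orders of magnitude, the same ballpark as the 50 quoted above.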
Of course, technology will only be pushed to the limits if the modest versions of the disk turn out to be reliable. Although the idea of the holographic storage disks has been around since 1963, people could not find a good enough medium for 40 years. The two-chemistry "Tapestry™" disks have suddenly solved these problems. The lesson - one that even the string theorists should learn - is that even if you have a great idea deeply connected with an obviously important physical principle such as holography, it may take 40 years - years of listening to obnoxious Peterwoits - before the details are refined so that you may celebrate the final success. Forty years? String theory was born in 1968. You can do the math.

Incidentally, the readers who think that this posting - about a fascinating technology that has become a reality - should ignite discussions about the capacity of conventional magnetic hard disks are considered to be retarded and uncultural intellectual equivalents of dogs by The Reference Frame. But feel free to write whatever you want. Haf haf. (That was in the Czech dogs' language. Not sure how the American dogs translate it.)

Friday, November 25, 2005

Supernovae: Lambda is constant

Scientific American reports that the observations of 71 distant supernovae suggest that the dark energy is constant, namely the cosmological constant, indeed. The upper bound on the pressure/energy-density ratio is now

• pressure/energy_density < -0.85, very close to "-1".

More details in the paper by Ray Carlberg et al.

ATLAS at Wikipedia

Incidentally, if you look at Wikipedia today, the featured article is about the ATLAS experiment. It was mostly written by SCZenz, a graduate student of experimental particle physics at Berkeley. Very good job!

Ignorance of Paul Boutin

In this text, I want to demonstrate that Paul Boutin has no idea what he is talking about in his text at Slate - he is writing about theoretical physics - and why he is an example of people who know absolutely nothing but who want to influence absolutely everything.

• Elegance is a term theorists apply to formulas, like E=mc2, which are simple and symmetrical yet have great scope and power.

This is one of the less serious problems in his article, but I don't think that any physicist would use the equation "E=mc2", popular among the laymen, as an example of an elegant piece of mathematics or physics.

• The concept has become so associated with string theory that Nova's three-hour 2003 series on the topic was titled The Elegant Universe (you can watch the whole thing online for free here).

NOVA's "The Elegant Universe" was titled this way not because NOVA was the first one to realize that string theory is elegant, but simply because the title was borrowed from Brian Greene's bestseller - the book on which the TV program was based - which every journalist who is informed about modern physics knows very well. This book is the reason why the words "elegant" and "string theory" appear together on so many pages.

• That's because compared to E=mc2, string theory equations look like spaghetti.

At the beginning of the show, Brian Greene reminded us that it may be rather difficult to explain general relativity to dogs, and therefore even people may have problems understanding the advanced mathematical concepts that are necessary to understand string theory and its beauty.
No doubt, people whose mathematical skills end with the product of "m" and "c2" - and who are therefore probably closer to the dogs than to Edward Witten - will hardly appreciate algebraic geometry, mirror symmetry, conformal field theory, or the homology of super moduli spaces. After all, dogs don't distinguish superstrings and spaghetti either.

• His General Theory of Relativity says gravity is caused by the warping of space due to the presence of matter. In 1905, this seemed like opium-smoking nonsense.

Except that general relativity was published in 1915, not 1905. I would think that such flagrant ignorance of the history of science should prevent one from finishing high school. In reality, it is not even a problem for publishing physics articles at Slate. Moreover, relativity - either special or general - never seemed like opium-smoking nonsense to the physicists. The special theory of relativity was accepted almost instantly; the general theory of relativity was accepted quickly - and almost universally after the 1919 observations of the bending of light. There may have been counterparts of Paul Boutin who always thought that relativity was opium-smoking nonsense, but their voice never played any role in physics.

Except that the uncertainty principle is not an equation. It is an inequality, and not a particularly elegant one.

• The closest you can get is a function related to Planck's constant (h), the theoretical minimum unit to which the universe can be quantized.

A "function" cannot be related to Planck's constant. And Planck's constant is not a unit into which the universe can be quantized. It is a quantum of the action or of the angular momentum, not a "quantum of the universe".

• If relativity and quantum mechanics are both correct, they should work in agreement to model the Big Bang, the point 14 billion years ago at which the universe was at the same time supermassive (where relativity works) and supersmall (where quantum math holds). Instead, the math breaks down. Einstein spent his last three decades unsuccessfully seeking a formula to reconcile it all—a Theory of Everything.

Einstein never worked on reconciling quantum mechanics with relativity. What he worked on was the unification of electromagnetism and gravity, but he never intended quantum mechanics to be a part of his fundamental equations.

• The most popular string models require 10 or 11 dimensions.

They're not the most "popular" ones. They're the only ones that predict a stable universe with all the qualitative features we observe in the real world.

• Krauss' book is subtitled The Mysterious Allure of Extra Dimensions as a polite way of saying String Theory Is for Suckers.

Well, I hope that it will be widely understood that my comment that Boutin's intelligence resembles that of dogs will be viewed as an appropriate answer to this "gentle" man.

• Scientific Method 101 says that if you can't run a test that might disprove your theory, you can't claim it as fact.

If you can run a test that might disprove your theory, you can't claim the theory as fact either. And if your experiment actually disproves your theory, you definitely cannot claim the theory as fact. ;-)

• And there's no way to prove them wrong in our lifetime.

Maybe. The same thing holds for evolution etc. But there is a significant chance that the theory will be proved right - a deeper theory than the previous ones to describe reality - in our lifetime. Mr.
Boutin does not seem to be interested in this alternative possibility that string theory is right; a textbook example of Crackpotism 101.

• Einstein's theories paved the way for nuclear power.

The only thing that Einstein's theories had to do with nuclear power is that he could have calculated the energy gained from the mass differences of the nuclei - much like he could have done for any physical process in the world. The development of the science and technology behind nuclear power has nothing to do with Einstein's theories. Einstein's letter to Roosevelt (warning him that the Nazis may have been working on the bomb) is perhaps the only link between Einstein and nuclear energy.

• Hiding in the Mirror does a much better job of explaining string theory than discrediting it.

Good joke.

• Krauss knows he's right, but every time he comes close to the kill he stops to make nice with his colleagues.

He knows that he's right much like the Catholic Church that opposed Darwin's theory knew it was right, doesn't he? A difference between string theory and its Kraussian "alternatives" is that the former is evaluated by scientific, rational arguments and calculations. The latter is evaluated by articles written by journalists whose understanding of physics resembles the skills of dogs.

It's pretty sad if someone like Boutin, whose knowledge of modern science is completely superficial - he just writes down confused misinterpretations of some popular accounts of physics that have already been written in such an oversimplified way as to target the silliest 10% of the population - is given space at such influential places as Slate. Well, of course I know why he was given space. It's because he's the senior editor at many places like that. ;-) Unfortunately, even having a lot of money does not prevent one from being a complete ignoramus.

Harvard diversity skyrockets

There are people who still think that Harvard is not diverse enough. Well, that's certainly wrong. Harvard tops the black student yield among 22 colleges in an ensemble. The female tenure offers have essentially doubled and exceeded the level before Summers' presidency. On the other hand, Princeton is different. While it seems that women are already allowed to enter the physics department, it was not the case 50 years ago, because they were a "distraction" for the Princeton scholars. At that time, women were already walking to the left and to the right everywhere at Harvard!

Thursday, November 24, 2005

Annihilating letters

If you're just waiting for your turkey, you can try to destroy some pairs of letters.

Wednesday, November 23, 2005

Physics fades from UK classrooms

Quite independently, Clifford discusses the very same issue - physics at British schools. 2005 is the international year of physics, so let me offer you some "optimistic" news - namely news about the intellectually degenerating British society. The number of British pupils who took physics dropped by 40 percent in the last 20 years. In the last 10 years, the number of UK physics departments dropped by 30 percent. And the situation will become even worse as the current generation of teachers retires and is replaced by mostly female young teachers who usually don't like physics, Alexandra Blair predicts in The Times.
RealClimate loses

Thank God, RealClimate.ORG - the propagandistic blog designed to politicize science, a kind of blog that refers to all politically inconvenient scientific results as "industry-funded misinformation" (which is always banned on their website) - lost in both categories of the Deutsche Welle International Weblog Awards 2005, despite their shameless self-promotion and their attempts to increase the number of votes in unfair ways - so similar to their "scientific" methods. And despite the fact that the competing blogs are composed of a much smaller number of authors than RealClimate's "hockey team" of eleven authors. When I say that they label all politically inconvenient scientific results as "industry-funded misinformation", what I have in mind are these 38 hits on their website. As Joseph Goebbels said, a lie that is repeated 38 times becomes the truth. Imagine that in spite of this river of naive political articles about funding by the "evil" industry - articles that make Fidel Castro look like a moderate conservative in comparison - they define the goal of their website on its title page as follows:

• ... The discussion here is restricted to scientific topics and will not get involved in any political or economic implications of the science. ...

Haha. Very funny. If climate science were funded by serious industry instead of mostly corrupt left-wing foundations, these guys would be begging on the street.

Tuesday, November 22, 2005

How they stole $2800 from my account (for a while)

The e-mail (looks good, doesn't it?) asked me to update my debit card number and so on, to improve the community and so forth. Convinced that it had to be related to the review - which is only posted quickly if the site recognizes your "real name" through your payment card - I did not hesitate and decided to get rid of the paperwork as soon as possible and defend my privileged status at the site. Of course, I am getting roughly 5 phishing e-mails per day, but this one was special: it got me. ;-) After opening a page that looked just like the real one, I entered my credit card number on the fraudulent website, and, to show how really stupid I was, I also filled out another page with the social security number. (Please don't annoy me much with the messages about the credit history. I don't intend to borrow anything and I don't care.) Incidentally, the server was located in Thailand, not China, and the page was redirected through Germany. It was very easy to find out many other details about the website one month ago.

Monday, November 21, 2005

Scholars at risk

If you know about a scholar who is persecuted or discriminated against in her or his country, you may want to remind me about the scholar so that I would nominate her or him for Lawrence Summers' "Scholars at Risk" fellowship.

CERN receives supercomputing award

The European Center for Nuclear Research (CERN) is the place where the World Wide Web was developed. Even the uncultural people who don't care about the existence of the Higgs boson or the superpartners may have noticed one of the by-products of high-energy experimental physics, namely the web. Despite some rumors, Al Gore did not invent the web. The only discovery in computer science that Al Gore has made is algorithms. :-) Despite the contributions of the unsuccessful 2000 presidential candidate to computer science, you should keep in mind that most websites start with Dubya Dubya Dubya.
The Large Hadron Collider should start operations in 2007 and it will require many gradual technological advances to occur. Look at this article: in it, you can remind yourself about the immense amount of data that CERN will generate and that will have to be analyzed. Even if people do not care about particle physics, experiments like the LHC represent a very natural playground to improve technology - such as grid computing - that can become useful in many other contexts in the future. Of course, these results are not the goal of particle physics but rather a by-product; but they are another reason why people may find investments in pure science - and high energy physics in particular - to be a good idea.

Quasinormal contours

Songbai Chen and Jiliang Jing calculate a formula that is perhaps the single most general existing generalization of our formula with Andy Neitzke describing the asymptotic, highly-damped scalar quasinormal modes of the Reissner-Nordström black hole. Their setup involves a scalar field coupled to the Gibbons-Maeda dilaton spacetime with a general coupling. The contours in the monodromy method imitate our contours in the Reissner-Nordström case. Also, their result is similar, and it involves the exponentials that depend both on the Hawking temperature as well as the temperature of the inner horizon. In their case, there is also an additional square-root dependence on the coupling "xi" of the scalar. This dependence is another piece in the (already) overwhelming evidence that "log(3)" is not universal but depends on many couplings and other details. An entertaining feature is that the Reissner-Nordström result is reproduced for "xi=91/18". I am actually a bit confused by this number. Don't you reproduce the same result for "xi=-5/18", among many other choices? Why is it exactly "91/18" that was chosen?

More generally, I am still convinced that a future understanding of general relativity and the physics of black holes and horizons, including thermodynamics and the (absence of) information loss paradoxes, will involve much more complexification, analytical continuation, and computations in unphysical regions of the parameter space than the current descriptions. The continuation of physics into complex (and other unphysical) values of the parameters (such as the spacetime coordinates) is relevant and legitimate in quantum gravity much like it is relevant in quantum field theory. However, in the case of quantum gravity, there are many ways how the continuation may be done and there are many subtleties that one must be careful about. In my opinion, this means that analytical continuation will be an even more important and bulky portion of our future understanding of quantum gravity than the role it plays in quantum field theory. Although these particular quasinormal calculations only depend on the low-energy effective action, I may imagine that various sophisticated and diverse ways to analytically continue physics to unconventional regions of the parameter space may become important for string theory itself.

Saturday, November 19, 2005

Non-Kähler compactifications

Li-Sheng Tseng from the University of Utah gave a Duality Seminar about

• Flux Compactifications and Moduli

The talk focused on non-Kähler compactifications with fluxes, which is a very interesting topic that however remains controversial. Let me try to explain why.
For the sake of simplicity, Li-Sheng discussed the case of a non-zero H-field only; other fluxes are set to zero. The condition of unbroken supersymmetry implies that the three-form H must actually be expressed in terms of a derivative of the differential two-form that gives you a Hermitean metric. Why? Recall that the H-field enters as a kind of torsion for the overall connection - and a spinor should be covariantly constant under this connection. Because this exterior derivative of the metric is not zero, you can't really call it a Kähler form. And you can't call the manifold a Kähler manifold either.

However, when we deal with heterotic string theory, there is also the usual Bianchi identity for H:

• dH = alpha' Tr (R /\ R - F /\ F)

When we consider the classical compactifications with "H=0", we relate the curvature of the geometry "R" and the curvature of the gauge bundle "F". For example, we can cancel them trivially by embedding the spin connection into the gauge connection - but that's definitely not the most general solution. Both connections are nevertheless of the same order. The size of the manifold is not determined. The Kähler moduli are not fixed and "alpha'/a^2" is a good expansion parameter. That's both good news and bad news. It is good news because reliable calculations can be done even perturbatively. It is bad news because the moduli are not stabilized, which disagrees with basic properties of the real world around us, among many other worlds.

However, when we choose a non-zero value of "H", the equation above makes it clear that terms with different powers of alpha' are getting mixed. Another equation tells you that "H" and the metric are of the same order, and therefore you can't assign unique powers of alpha' to all your fields; dimensional analysis breaks down. This implies that the size of the manifold is comparable to the string length (i.e. "of order one") if the equations are to be solved. For a finite value of the fluxes, the expansion in "alpha'/a^2" is strongly coupled. Li-Sheng actually argued that this conclusion is only correct for the T2-fiber in his main example, while the K3-base may be arbitrarily large. But at any rate, there are some dimensions whose size is stringy, which forces us to include infinitely many terms at all orders in alpha'. Any truncation seems unreliable.

While the Hermitean metric is not a closed form, its square continues to be co-closed, and Li-Sheng told us about some examples of generalized geometry - conformal Calabi-Yau geometry, for example. These are nice abstract notions, ideas, and formulae, but the question whether an actual CFT that describes such a perturbative string theory exists remains unanswered. No such CFT has been explicitly found - and not even proved to exist by an existence theorem. Unlike the Calabi-Yau case, no explicit orbifold point in the moduli space is known either. At the level of geometry only, the existence of some particular examples has been shown, but I doubt that these proofs go beyond some low-energy geometric approximations and imply the existence of the full backgrounds of string theory. Note that you must solve the equations for "H"; Einstein's equations with the appropriate right-hand side; and it turns out that the dilaton is non-trivial, which also forces you to solve a non-trivial Laplace-like equation for the dilaton. All of these equations are pretty hard to satisfy.
A proof that such a combined solution for all these fields exists amounts to a rather extensive generalization of Yau's theorem (such a description is especially appropriate for the metric and its Einstein's equations with a source). There are some details that seem hard and potentially incompatible with the perturbative approach to the question. For example, a static dilaton must satisfy something like

• nabla^2 (exp(-phi)) = F^2 + H^2 + ...

whose right-hand side actually turns out to be positive definite at the leading order. This would prevent the dilaton from being smooth but non-constant everywhere, as required in Li-Sheng's construction: the Laplacian has opposite signs at the maxima and minima of the dilaton. However, he argues that higher-order terms in alpha' appear in the equation above, and these extra terms may be negative and compensate the leading-order positive terms.

Quite generally, I remain skeptical about the constructions that seem impossible or inconsistent at a leading order in a certain perturbative expansion and whose consistency is being explained by a mere existence of some higher-order (or even non-perturbative) terms. Don't get me wrong: constructions that are not accessible perturbatively are all but guaranteed to be important in many situations. All moduli in the real world are stabilized, for example, which makes any expansion about their extreme values problematic. Even though the backgrounds that do not solve the equations of motion order by order in a certain perturbative expansion are important and will be important, this importance does not prove that they actually exist. Cancelling a first-order problem by pointing out a possible second-order cure may be a hint that a consistent theory exists - but it is definitely not a proof. If we cancel terms with a different parametric dependence on any parameter G, it certainly means that the perturbative expansion in G breaks down. That is not enough to prove that the theory is doomed, but it is not enough to prove that it is consistent either. For example, the second-order terms may be negative, but the third-order terms could be positive and such that the sum of the first three corrections remains positive definite. It is up to the collective decision of the terms at all orders whether the qualitative conclusion based on the first-order approximation (the theory does not exist) is correct, or whether the conclusion based on the second-order approximation (the theory does exist) is right.

Note that the existence of simpler vacua in string theory has been firmly established. In my opinion, the people thinking about the non-Kähler issues may want to focus on proving the existence of their conjectured generalized theories - for example, the existence of CFTs in the cases in which the dilaton may be kept near minus infinity. When the perturbative expansions break down, it is only the exact result that is relevant. One should try to decide whether a truncation of some expansions leads to qualitatively correct results. And the answer can be either Yes or No.

Friday, November 18, 2005

Elsevier Science and crackpots

W.S. has pointed out an article written by someone called Ms.
Liisa Antilla that has been accepted into a peer-reviewed journal, "Global Environmental Change Part A", published by Elsevier Science - the publisher that has done a lot of good things for high-energy physics in the past but whose journal "Nuclear Physics B" is widely considered to be a dinosaur that should die soon, especially because the journal seems much more expensive than what is appropriate in the era of the internet (and the arXiv). Incidentally, if your university does not provide you with access to the article, you can buy the text for $30. W.S. is amazed by the article, I am also amazed, virtually everyone else is amazed. Why are we so shocked?

The title of the article is

• Climate of scepticism: US newspaper coverage of the science of climate change

and it is arguably the first published peer-reviewed article in the world whose main point is to accuse dozens of particular scientists of scientific misconduct and corruption just because the result of their research does not agree with the author's silly ideological fantasies, without having the tiniest shred of evidence. What do I mean? Who is the author? What is it all about? First of all, Liisa Antilla is not a scientist, according to all available data. Already the abstract makes it clear that you should not expect the scientific method to play any role in what follows:

• This two-part study integrates a quantitative review of one year of US newspaper coverage of climate science with a qualitative, comparative analysis of media-created themes and frames using a social constructivist approach. ...

Well, a "social constructivist approach" is one of the postmodern "approaches" whose irrational character - and absurdity - was pointed out by Alan Sokal. The 15 pages make it absolutely clear that Liisa Antilla has no idea about the subject she is writing about - the climate in this case - and her ability to have learned the English alphabet is demonstrably her greatest intellectual achievement. At any rate, this achievement apparently seems sufficient to get published in peer-reviewed journals that belong to Elsevier Science.

It must be clear to any rationally thinking person that the chance that an article about "warming" gets published in the newspapers today is at least 5 times higher than the probability that an article that implies "no warming" or "cooling", even locally, gets published - even though cooling and warming are essentially in balance. Bias of one order of magnitude is apparently not high enough to satisfy Liisa Antilla, who argues that every article that does not support the crackpot idea of a looming global warming catastrophe must be an artifact of corruption.

Seed magazine

Seed magazine seems really serious about creating a highly stimulating environment for important scientific ideas. Their five categories in the front of our minds are

• Einstein
• Evolution
• Genetics
• Science/Religion
• String Theory

That's a pretty exciting mixture of topics, and many people may have a lot of fun when Seed magazine really starts.

Nima at Radcliffe

Melanie Becker organizes nice lectures at the Radcliffe Institute, formerly known as Harvard University for girls. The first lecture was by Cumrun Vafa - about the Swampland - and two talks by Nima Arkani-Hamed followed. Today, he described our project (with Cumrun Vafa, me, and Alberto Nicolis) involving the "weak gravity" constraints. In my opinion, he did a terrific job. Let me only summarize a few slogans:

• in quantum gravity, i.e.
string theory, one should have no global continuous symmetries; it is known in string theory that a symmetry current, a (1,0) or (0,1) tensor, is associated with any symmetry, and it can be multiplied by "del X" or "delBAR X" to create a (1,1) marginal vertex operator for a gauge boson, proving that the symmetry is local after all
• this constraint looks too fragile because one can imitate global symmetries by gauge symmetries with a very small coupling constant, which seem to be allowed
• therefore, it would be more natural if very weak couplings were forbidden, too
• more precisely, if a U(1) gauge group has too small a coupling, ...

This posting was never finished. Read the preprint hep-th/0601001.

Congratulations to Devin Walker

Clifford Johnson at CosmicVariance was thrilled when he learned that he belongs among my favorite people. So let me generate a new thrill. I like Devin Walker, much like many other people like him. It is widely expected that tomorrow, he will become the first US-born and US-educated African American to earn a PhD degree from the physics department at Harvard. So let me say, in a preliminary fashion: Congratulations, Devin! Note added later: We had some sparkling wine and cakes. Melissa Franklin, who was on the committee, said "Welcome to the 20th century, Harvard". Of course, the celebration was a fun event with a lot of people attending.

Thursday, November 17, 2005

Sixteen years after the Velvet Revolution

Tuesday, November 15, 2005

A Hagedorn alternative to inflation?

Robert Brandenberger, Ali Nayeri, and Cumrun Vafa argue that a stringy phase of cosmology dominated by strings near the Hagedorn temperature is an alternative to inflation. More precisely, their calculation suggests that one can obtain a scale-invariant spectrum by assuming that the temperature was near the Hagedorn temperature in the past - and the environment was dominated by a long, strongly excited string. Moreover, the amplitude of fluctuations from the scale-invariant spectrum is suppressed by the fraction "(l_{Planck} / l_{string})^4", and this insight implies a consistent picture for "l_{string}" being roughly 1,000 times longer than "l_{Planck}", with "g_{string}" being the inverse quantity, 0.001. That's an exciting statement. You can have two possible negative reactions; it is either wrong or equivalent to inflation (much like the good features of the ekpyrotic Universe may be argued to be equivalent to inflation). Assuming that the calculated scale invariance is correct, which I have only checked partially so far, the second possible answer is pretty interesting. How can this picture be equivalent to inflation? In fact, how is the usual causality argument - the one implying that the temperature should not be isotropic - evaded?

Monday, November 14, 2005

Which way the wind blows

Well, one of the goals of the physics lunch today will be to gauge which way the wind of support blows. Although I may be right-wing, it should not be surprising that my wind blows to the left-hand side (with the usual acronym for the left-hand side) - of course, nothing against the other side, and all my respect for it!
It seems to me that the left-hand side is also the natural direction in which the physics wind should generally blow - and not only because physics is about thoroughly addressing issues rather than skimming over them. But the wind experiment has yet to be done. Result: some people - e.g. a person who will have to travel to Scandinavia soon because of a person who had played with explosives - did not even believe the Crimson about the plan of the left-hand side. But everyone agreed that there was no evidence that the leak was deliberate, and the department will decline to support a certain unphysical underground petition designed to criticize the left-hand side using the hypothesis that the leak was deliberate. (I would personally find nothing wrong even with a deliberate leak, because it is a natural preparation for an action that should not come as a complete shock, but that is a different issue.)

Sunday, November 13, 2005

Weather critical exponents

How quickly is the weather changing? What is the right probabilistic distribution for the apparently random history of temperatures at a given location? Consider the deviation "Delta T(d)" of the daily temperature (on day "d") from the long-term average. Compute the correlation coefficient "C(s)" between "Delta T(d)" and "Delta T(d+s)", two temperature variations separated by "s" days. Govindan et al. (some of the authors being famous people who promoted fractals) showed in Phys. Rev. Lett. (and in a book edited by Murray "SantaFe" Gell-Mann) that there is an apparently universal scaling

• C(s) = # . s^{-gamma}

where the exponent "gamma" seems to be universal, between 0.6 and 0.7, independently of the location (at least as long as it is a continental station). The universal exponent could very well be 0.65. (This counting is somewhat analogous to the CMB scale-invariant spectrum, but the exponent differs.) The law seems to hold whether "s" is a couple of days or ten years - in fact, no violation of the law is known from the data, not even at very long time scales. This critical exponent is what I call an interesting insight about temperature dynamics. The authors demonstrate that most climate models give a very different exponent, usually closer to the experimentally wrong value of 0.5, and moreover lead to results that depend on the location: coasts are supposed to differ. Because the results of Govindan et al. imply that the climate models don't work - and moreover, more concretely, overestimate the trends - the consensus scientists such as William immediately know what to think about the paper:

• But... is [the paper] any good? Weeeeeelllll... probably not. This is yet more of the fitting power laws to things stuff. They use "detrended fluctuation analysis" (DFA) which I don't understand, but that doesn't matter, we'll just read the results.

Of course, the result that William sees at the end of the paper is that the models give wrong exponents and their prediction of global warming is thus unjustified. This could mean that the predicted global warming will be smaller than the one predicted by IPCC 2001, and therefore William knows what to think about the paper even though he does not understand a word of it. I added the boldface because William's innocently honest description of the "mainstream" climate edition of the "scientific method" is refreshing.
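Before returning to William: if you want to play with the exponent yourself, here is a minimal sketch of the naive estimate described above. Everything in it is a made-up illustration - the file name, the grid of lags - and the published analyses use detrended fluctuation analysis rather than this direct fit:

```python
# Naive estimate of the temperature correlation exponent "gamma"
# (illustrative only; Govindan et al. use detrended fluctuation analysis,
# and a real analysis would also remove the seasonal cycle first).
import numpy as np

temps = np.loadtxt("daily_temps.txt")   # hypothetical file: one temperature per day
anom = temps - temps.mean()             # deviation "Delta T(d)" from the long-term average

lags = np.arange(1, 365)
C = np.array([np.corrcoef(anom[:-s], anom[s:])[0, 1] for s in lags])

mask = C > 0                            # a power-law fit only makes sense for positive C(s)
gamma = -np.polyfit(np.log(lags[mask]), np.log(C[mask]), 1)[0]
print("gamma ~", gamma)                 # the claimed universal value is between 0.6 and 0.7
```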
William continues with some amount of nonsensical criticism - such as that it is strange that they included Prague as a representative city :-) - and then he promotes a paper by Fraedrich and Blender (FB), also in Phys. Rev. Lett. These gentlemen offer the surprising conclusion that the scaling exponent should be around 0.5 for inner continents and 1 for the oceans, which William, of course, immediately accepts. Why? Because it would help his global warming beliefs. I have not analyzed the data in detail, but the FB statement seems to contradict something I would call physics intuition. Oceans or continents can change the (dimensionful) timescales of exponentially decaying processes or the overall size of the temperature fluctuations, but they should not change the (dimensionless) critical exponents of the power laws. It should not be surprising that the original team, Bunde et al., published a one-page comment, also in Phys. Rev. Lett., about FB which shows that the FB results contradict both their analysis as well as the initial data. Of course, William won't inform you about such a thing, and he will erase every comment on his blog that would try to link to the new corrected paper by Bunde et al. - which is what he did to my comment. He's just damned scared that all these flawed scientific assertions will be revealed. Instead, he is going to convince you that the critical exponents (and probably also the rest of physics) are uninteresting because they have already validated their friends' models, and no amount of heresy such as the critical exponents - or publications in Phys. Rev. Lett. that no true AGW believer would ever read - can change the holy word. ;-) Meanwhile, the people who are still able to use their brains may compute the critical exponents in the statistical climate data and falsify most of the climate models that are being used today. One noise is not always the same thing as another noise, and there are scientific methods to determine whether two "noises" match. Cosmologists have been using these methods for more than a decade to analyze the CMB. The modern alchemists of course don't want to hear about the methods that have the power to show that some models are simply wrong, and the "wrongness" can't be hidden behind the apparent "randomness" because, when investigated scientifically, "randomness" is not universal. The very purpose of science is to uncover the layer of randomness and see the patterns that can be expressed by quantitatively measurable and predictable numbers. Finally, there have been quite a few other papers showing that the climate models fail to reproduce the observed temperatures, for example a paper from Boston University here, or a paper by Douglass et al.

Saturday, November 12, 2005

Cosmic string or dark matter

I just received an email from Rich Murray, who has taken many pictures of the region near CSL-1, the "cosmic string lensing" candidate. For example, the newest picture #31 at the top includes "RML-1", which stands for "Rich Murray Lens 1", but I remain somewhat unconvinced that this rather amateurish picture proves anything.
The primary recipient of the e-mail was Malcolm Fairbairn, who just posted an interesting paper arguing that if the CSL-1 event is caused by lensing, it is likely to be a cosmic string rather than a dark matter filament, because in the latter case the corresponding tidally disrupted dark matter halo would have to be as heavy as the Milky Way - and such a halo seems to be absent in other data. This was always our primary worry - that CSL-1 could be caused by lensing by something that just acts as a string but can be of a rather conventional origin.

Movie: Loners (2000)

Ying, a Chinese girl who speaks Czech, invited us to a screening of a Czech movie (with English subtitles) in the Department of Visual and Environmental Studies (VES). It was the first time I saw Samotáři (Loners, 2000) and it was pretty good. Much like in many other Czech movies, the seven central characters seem to have a pretty difficult, dirty life; the web indicates that this theme was popular among U.S. movies in the early 1990s. Their relationships are breaking up, combining, and recombining. Another typical feature of the post-1989 Czech movies is that none of the characters is designed to be a universally negative one, and none of them is a permanently positive character either. Also, you can see how the characters judge the features of others depending on the context; that's a very realistic feature of the movie's psychological analysis. Ondřej is a talented and married young surgeon who has two daughters. Nevertheless, you learn that he only studied neurobiology to prove how much he loved another woman, Hanka. He is so obsessed that he repeatedly dresses up as a plumber to get into Hanka's parents' house - a house that he repeatedly burns. Meanwhile, Hanka has a very mixed relationship with her parents. She just decides - by tossing a coin - to break up with Petr, who works at a private radio station. Hanka does not view her parents' bourgeois life as a good example but seems rather unsuccessful in creating a better environment. But she is a very flexible figure, as far as the types of her boyfriends go. For a while, Hanka seems to have serious plans with Jakub, an innocent drug addict whose memory seems to be rather devastated by the drugs. However, the friends from his band inform Jakub that he already has another girlfriend. Hanka is disappointed and returns to her parents.

Friday, November 11, 2005

Discrimination against white males and conservatives

As Sean Carroll has pointed out, the U.S. Justice Department plans to sue a university, namely Southern Illinois University (SIU), because it discriminates against whites, males, and others - by establishing a wide spectrum of fellowships that are not available to whites and/or males; see the article here. Unlike Sean, I agree with the Justice Department that beyond a certain plausible level, these things simply are illegal and undesirable, and it is a task for the Justice Department to act. Not surprisingly, Barack Obama supports the discrimination, too. Don't get me wrong: I may have been involved in increasing the fraction of females or other minorities - and maybe even females from the Axis of Evil - in our own department; by the way, some of them are very nice and attractive.
;-) But whenever my decision was twisted by similar considerations, I was very careful to check that the law gave me the right to act in one way or another in a given situation and that the decision was not manifestly counterproductive. If someone establishes several fellowships whose only purpose is to selectively choose people from certain groups that are defined by their race, nationality, or gender, and to exclude the complementary groups, then it seems pretty clear to me that the person has violated the federal laws and probably the U.S. constitution itself. This is about the validity of very basic U.S. laws, and someone's belief in particular ideologically driven "scientific" opinions - that all groups have the same average XY, where XY is any observable - simply can't justify a crime. When I talk about the laws, I mean e.g. the fourteenth amendment of the U.S. constitution. The Cornell legal interpretation spells out what the amendment requires, among other things. No doubt, this clause is violated by the fellowships. Technically, the Justice Department will probably not use the amendment but rather the Civil Rights Act of 1964. Helping to increase the fraction of a certain group in Academia may look like a good plan to some people, but if it contradicts the federal laws that define in what respects people have to be treated as equal by major institutions, then there is no doubt that the U.S. laws are primary. I think that the constitution and other laws are very balanced, stable, and clear, and a violation of these rules is not right.

Flying cars from string theory

Our colleague Song Yoo-geun, who is eight years old, has finished elementary and high school in 9 months instead of 12 years. He just joined a college near Seoul and impressed the professors with his explanation of the Schrödinger equation. His next goal is to work at CERN and develop flying cars based on superstring theory. Because Song does not like to communicate with ordinary adults too much, his father has explained that Ramond-Ramond or Neveu-Schwarz-Neveu-Schwarz forces are able to compensate gravity as a consequence of supersymmetry, which makes flying BPS cars float in the atmosphere.

Thursday, November 10, 2005

Higgs at 105 GeV?

As Jacques Distler reminds us, we normally argue that LEP has imposed a lower bound on the Higgs boson mass: 115 GeV. Slightly below this level, maybe around 105 GeV, there can be a viable candidate that was seen as a weak signal. However, such a signal contradicts the previous sentence about the lower bound, and we usually discard it. Are we doing a wise thing? Dermíšek and Gunion argue that in the next-to-minimal supersymmetric standard model (NMSSM), two things happen: first of all, with the lowest possible fine-tuning you can imagine, the Higgs is predicted at 105 GeV. Second of all, the decay channels are a bit different, the classical decay channel weakens, and therefore 115 GeV is no longer a lower bound on the mass. In other words, it is plausible that LEP has seen a small signal for the most natural value of the Higgs mass in the second simplest supersymmetric extension of the Standard Model you can imagine. Note that the Standard Model itself was also the second simplest theory with an SU(2) gauge group you can imagine. ;-) If this NMSSM scenario were right, I would prefer not to share Jacques' bitterness about the difficulties with observing the Higgs directly at the LHC. There could be more interesting things to observe!
;-) It looks like I am not the only one who told Jacques this.

Wednesday, November 09, 2005

Modern science-haters

Clifford Johnson describes a talk by one of these modern science-haters whom we have discussed several times. It seems that some ideological presentations of the creationists are as honest as the holy word in comparison with this gentleman. Clifford's report confirms the hypothesis that it is never just string theory that the science-haters dislike and want to humiliate in the context of modern science. This particular science-hater also claims many other things. Modern science is ridiculous and equivalent to the theory of Intelligent Design, he argues, because

• it uses the concept of infinity.

For example, the mathematicians are crackpots, he explains, because they have proved Hilbert's hotel theorem. (I have not heard the original formulation, but trust me that this captures the essence.) I find such a statement incredible. Hilbert's hotel theorem, showing that an infinite-dimensional Hilbert space is isomorphic to the same Hilbert space with an extra one-dimensional space added (an infinite hotel can always accommodate an extra guest), is not only a rigorously proved simple theorem, but it is also a theorem relevant for physics (which is not the case for all theorems in mathematics). You don't even need to talk about spectral flows: the very existence of the creation operator acting on the harmonic oscillator is a physical example that the theorem is relevant in physics. Also, this science-hater mocks the fact that

• "zeta(-1) = -1/12"

and that it can be used to obtain modular-invariant regularized results for various divergent sums. I just personally find it amazing that some professional physicists have problems with various regularization procedures and with the very concept of infinity - roughly 60 years after these concepts became completely essential for doing virtually anything in theoretical physics. (Arguably, the concept of infinity has been crucial in physics for several centuries.) PhD committees should probably insist that anyone who deserves a PhD in theoretical physics should know not only how the symbol of infinity should be manipulated, but also why the Casimir energy in 1+1D leads to an expression proportional to the sum of the integers, and why "-1/12" is the only correct answer one must assign to this sum.

Notes on two papers

Brustein and de Alwis study a thermodynamic description of the early cosmology and argue that the tunnelling tends to end up near extrema of the potential, and close to the "center" of the landscape where the coupling is of order one and the sizes of the internal manifold are close to the self-dual radii, taking KKLT as a moral example. This seems to agree with the observations of physicists such as Alon Faraggi that the self-dual dimensions of order one are preferred. I also believe in this kind of stuff - the vacuum selection mechanism that we will eventually find will favor the most "canonical" vacua, which may be those that are close to the center. This idea is somewhat related to the "beauty is attractive" business, but its focus is not on enhanced symmetries. As far as I understand, John Baez:

• rediscovered that the Standard Model group is SU(3) x SU(2) x U(1) divided by a certain Z_6 group
• rediscovered that the complex spinor 16 of spin(10) is a good representation for a single generation of quarks and leptons - i.e.
rediscovered one reason behind grand unified theories
• realized that SU(5) is actually a subgroup of SO(10), not only of spin(10), that moreover does not include the 2.pi rotation, and therefore the spinor is single-valued
• rediscovered that manifolds with SU(5) holonomy are called Calabi-Yau five-folds
• wants to study, for a rather incomprehensible reason, manifolds whose holonomy coincides with the Standard Model gauge group

The last point seems rather unnatural to me because the holonomy is exactly the symmetry - a part of the tangent group - that is broken by the manifold's curvature, while the gauge group of the Standard Model is a group that must be, on the contrary, completely unbroken to start with.

Tuesday, November 08, 2005

Krauss on science and religion

Lawrence Krauss has an essay in Tuesday's New York Times in which he argues that theoretical physics, as long as it's not just a telephone directory summarizing the experiments that have already been done, is more or less on par with religion and Intelligent Design. What do I think about these comparisons? See Mark Trodden's text for his viewpoint. Science and religion definitely have common roots. Ancient people used to be scared of many natural phenomena they did not understand, and they started to produce various "theories" of how the world works and of what you have to do to save your life and protect yourself and your community from various threats. Some of these "theories" were rather complex. This complexity is what distinguished the early believers and the early scientists from average people who only cared about their Tuesday lunch. The ancient protosciences and protoreligions made people focus on certain questions that transcended their lives. They helped us to transform ourselves (i.e. monkeys) into humans. They taught us to spend a certain amount of time on activities that were not immediately necessary for our survival. They taught us to make big conjectures. Even Newton constructed his mechanics in order to support a more far-reaching concept - namely the holy spirit that fills space. Religion and science have co-existed for millennia. Once again, scientists and believers have always shared certain characteristics, and millions of words have been written about these relationships. Moreover, in the ancient era, it was often difficult to distinguish which activity was science and which activity was religion or unjustifiable superstition. If Lawrence Krauss had written his "essay" 30,000 years ago, or maybe even 500 years ago, it would have been almost correct. But surprising as it may sound, it is 2005 right now and Lawrence Krauss is no longer right. Religion and science were separated several centuries ago. Science has claimed certain questions to belong under its umbrella, and it has pretty well-defined procedures that are used to decide whether a conjecture is correct, or at least convincing, or not. Whether or not warped geometry or a Calabi-Yau manifold reminds Dr. Krauss of Moses is completely irrelevant for science. Science works independently of these beliefs, and only rationally justifiable arguments have the power to influence where science goes. The apparent mathematical inconsistency of all purely four-dimensional theories of gravity is a powerful scientific argument; Krauss' religious or anti-religious feelings and vague articles in the newspapers are not.
Lawrence Krauss clearly misunderstands and underestimates (and maybe even misunderestimates) how serious the UV problems of gravity or the hierarchy problem, among many other examples, are. He may choose not to solve these questions because they may be uninteresting for him. Billions of people in the world do not care about science unless it may directly improve their lives today. But these people are not expected to be the ones who determine the direction of scientific research. Krauss is quite clearly unhappy that physics has become counterintuitive after the 20th century revolutions. It is no longer transparent for most peasants. What a sad development! However, most of us are quite satisfied or even excited, because exactly this feature measures the depth of the scientific progress: how many deeply counterintuitive insights we can establish. Be sure that this process will never be reversed. Science simply is more complex than it used to be 600 years ago. Science and religion share a fascination with unseen things. But they differ in their methods for deciding whether the unseen things exist or not.

Calabi-Yaus from MSSM

Last week or so, James Gray gave a very interesting talk about their work on deducing the details of string theory from low-energy physics. Normally, string theorists start at the top. You pick your favorite compactification (you decide whether it is heterotic or type II, what the shape of the extra dimensions is, which fluxes and which branes you allow), figure out what its low-energy spectrum is, and deduce the moduli spaces, couplings, and the low-energy effective field theory in general. They propose the reverse approach. Take one of the simplest low-energy phenomenological models compatible with your observations - such as the MSSM. Define its gauge-invariant monomials, which parameterize a moduli space once the F-term and D-term constraints are imposed. Determine the dimensionality and topology of the resulting moduli space. And find a string model that exactly matches it. They often mention that they would like to derive that the moduli space is a Calabi-Yau three-fold itself. I find that a bit exaggerated. The Calabi-Yau space can only be a moduli space at low energies if you consider something like a single D3-brane on a Calabi-Yau in type IIB - but there are really no phenomenologically viable models of this kind. Note that the 3-fold in the F-theory flux constructions is not a Calabi-Yau manifold. In reality, the moduli spaces at low energies describe the moduli spaces of shapes of manifolds such as the Calabi-Yaus, or moduli spaces of gauge bundles over them. They can have many different dimensionalities and topologies. In the top-down approach, we know very well what a natural requirement for a phenomenologically appealing model is: we want to get as close to the Standard Model or the MSSM as possible, and remove all exotics. The corresponding criterion in their bottom-up approach is not quite determined, as far as I can see. What properties should the low-energy moduli spaces derived by their algorithm have in order to tell us that a model looks like it comes from string theory? I think that the idea that the moduli space should be a Calabi-Yau manifold is naive, and I don't have any better replacement for this proposed answer. For example, they seem very excited by having obtained a moduli space whose Hodge numbers coincide with those of CP^2:

• h^{0,0} = h^{1,1} = h^{2,2} = 1, and all others zero.
I personally have not understood why this "simple" Hodge diamond is more attractive than other Hodge diamonds that they could have derived.

Monday, November 07, 2005

Great October Revolution

For one half of my readers: Congratulations on the 88th anniversary of the Great October Revolution! Well, it was not great, it was not in October, and it was not a revolution but rather a coup - but nevertheless. It is no longer celebrated in Russia, but a celebration in Ho Chi Minh City in Vietnam is fine, too. As kids, we could always watch some fireworks before November 7th. To a first approximation, one must be extremely grateful that this era is gone. On the other hand, we live in a modern, environmentally friendly society, and therefore all the trash from the past is recycled in one form or another - which is why our expected happiness received negative contributions from second-order perturbation theory, much like in most cases in physics. :-)

Sunday, November 06, 2005

Landscape decay channels

One of the reasons why I think that a megalomaniacal number of metastable de Sitter vacua in string theory should not exist is that they would have a megalomaniacal number of ways in which they can decay. See also Resonance tunneling and landscape percolation (2007). Of course, this particular argument can't eliminate the supersymmetric anti de Sitter vacua of the landscape because they are exactly stable. However, de Sitter vacua - which are what we eventually want to get to match reality - can generically decay to other vacua with smaller vacuum energy. Such a process is described by an instanton whose Minkowski interpretation is nothing else than membrane nucleation. A charged spherical domain wall is spontaneously created in space. The interior of this "bubble" carries a lower energy density (which is why it is allowed by energy conservation) and a different value of one type of the electric or magnetic field (a flux over the internal manifold). If the action of this instanton is "S", the decay amplitude is suppressed roughly by "exp(-S)" and can become negligible if the action is comparable to hundreds or thousands. But there are actually many types of domain walls that one can nucleate. Each of the basic "types" of the domain walls is able to change one type of the flux by a single unit. In principle, you may consider tunnelling to a much more distant vacuum elsewhere in the landscape. These more convoluted decay processes are suppressed by even greater actions; but there are many such decay channels. Who wins? In the simplest models you can imagine - such as Shamit+Nima+Savas' model with many scalar fields with a quartic potential - the parametric answer is actually a tie. If the instanton that induces the flip of a single scalar field from "+v" to "-v" or the other way around has the action "S", then the instanton that flips "k" fields simultaneously may be interpreted as a superposition of "k" copies of decoupled instantons of this type. Recall that the Hamiltonian for these scalar fields is a simple sum of contributions from the individual scalar fields. And the action of the composite instantons is therefore "kS". If you take "k" to be comparable to "N", the total number of the scalar fields, there are "2^N" decay channels and each of them is suppressed by something like "exp(-NS)". If the elementary action "S" is of order one, the product may be close to one, too.
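To make the bookkeeping behind this "tie" explicit (my own back-of-the-envelope restatement, not part of the original argument), one can sum over all composite channels that flip k of the N fields, each with action kS:

$$\Gamma_{\rm tot} \sim \sum_{k=0}^{N} \binom{N}{k}\, e^{-kS} = \left(1 + e^{-S}\right)^{N},$$

so the entropy of the decay channels and the instanton suppression are both exponential in N, and for an elementary action S of order one, neither factor parametrically wins.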
However, it is natural to imagine that there are semi-isolated sectors in which the elementary action is much higher, e.g. 1000, and the instanton suppression wins. In such cases, we can indeed neglect all decay channels except for the fastest one. But note that both factors, the growing factor and the suppression factor, have the same, roughly exponential dependence on "N". However, in reality, I assume that the action of the composite instanton is parametrically smaller than "kS". My guess is something like "sqrt(k)S". One needs to nucleate a composite domain wall, but the tension of the composite domain wall is likely to be smaller than a simple sum of the contributions; a composite domain wall is a bound state of the "elementary" domain walls. If you take the guess "sqrt(k)S" for the action, justifiable by the Pythagorean theorem in the configuration space (landscape) or by the analogy with the (p,q) strings, as your starting point, the total decay rate will go approximately like

• 2^N exp(-sqrt(N)S) ~ (morally) ~ exp(N-sqrt(N)S)

You see that for really large values of "N", being many thousands, the positive term in the exponential dominates and the decay rate becomes fast. The minimum of the exponent appears at

• N = S^2 / 4

If you assume that the elementary action "S" is, for example, 60, the resulting "N" that maximizes the lifetime will be around 1000. That's still large because it gives you 2^{1000}=10^{300} vacua, but still, you see an argument that the number of long-lived de Sitter vacua should not be allowed to grow indefinitely. You may either think that this argument is flawed or irrelevant, or you may think that in reality the numbers are actually much more stringent - and closer to one - and your conclusion will be that the number of stable de Sitter vacua must be reasonable, too. At any rate, the statement that there are googols of nearly stable de Sitter vacua is a rather strong statement - imagine how weird it would be to argue that there are 10^{500} stable states of the hydrogen atom - and I would expect a rather extraordinary body of evidence, including a detailed refinement of the ideas above, before their existence is accepted as a consequence of string theory. Once again, this argument does not affect the anti de Sitter vacua. It may be a bit puzzling to have zillions of anti de Sitter vacua - and their dual conformal field theories - too. But maybe this is how the (supersymmetric) life eventually works.

French Intifada

As you know, France is experiencing the worst days of rioting since the 1968 student protests. I am afraid that the country has allowed more immigration than it can handle. By the "ability to handle", I probably mean a working system and infrastructure that can integrate a vast majority of the immigrants into the society. Charles de Gaulle used the military in 1968, and it was one of the reasons he had to resign in 1969. Nowadays, the government prefers to issue a "warning" that the rioters could spend many years in jail. In my opinion, this is no real warning. Instead, such a statement assures them that they can't be shot and that they can probably always escape as long as they know how to run or drive a scooter. One of the things I could not resist looking at was the attitude of the Muslim countries.
An agency that shares its name with the most influential agency in the region (Suvrat explains below that it is actually a different Aljazeerah than the "regular" Aljazeerah) used the opportunity to summarize all the criticisms and dirt it has against France and its perceived discrimination.

Comments by the readers

As an Irish reader notices, the French are not getting much credit for their opposition to the Iraq war. Well, that's hardly surprising. The second commentator is a French Muslim who thinks that the riots won't help anyone. The third contributor is from Tunisia, and he announces that France will be destroyed for mocking them, among other things, with headscarf bans. The fourth one is an American who conjectures that NATO and the U.S. will be the only ones able to save the French, who were on the wrong side of history a few years ago. The fifth one, Dr. Khan from Holland, tells the "demonstrators" to keep on fighting against "discrimination". Another American advises the French Muslims to work hard or to return home and preserve their habits. Mshakir from Somalia surprisingly says a very similar thing. Abdul Mateen from India recommends following his prophet. A tuna hunter from the Philippine seas explains that the French have always been soft on fascists. And so on, and so on. In Israel, many people think that Paris "deserves it". Europe is already a battleground in the war on terror. And political correctness in France prevents the police from stopping crime. RedState.ORG, a major right-wing blog, argues that blood is necessary on the streets of Paris today to prevent a greater tragedy tomorrow.

A corrected list of authors

Several versions of the anti-Hagedorn paper we discussed here, including some editions submitted to journals, as well as three more papers, temporarily had the name of Joe Polchinski on the list of authors. Euphemistically speaking, it was a typo. Joe Polchinski is not a co-author and does not believe that the paper is (or papers are) based on a rational idea. The editors of journals and others are encouraged to learn that a combination of these two names (SC and JP) on a future paper will likely be another typo. More details here or here. I ask the commentators to avoid comments that could be inappropriate in this slightly sensitive context.

Jágr's smile and jokes

Let me admit that I find most athletes uninteresting. Jaromír Jágr, the NHL superstar, is definitely one of the major exceptions. Lee Jenkins wrote an article about this Czech athlete, who has become a key player for the New York Rangers and who is again the leading scorer and the dominant right wing in the league. What's so special about Jágr? First of all, he likes freedom. Others usually prefer the money and the fame. The communists stole the farm of Jágr's grandfather and imprisoned him for several years. Jágr's grandfather died in 1968, during the Prague Spring. Jágr has been using the number "68" ever since. As a schoolkid, not surprisingly, he kept a picture of Ronald Reagan in one of his schoolbooks or wallets. (So did I.) I am sure that most readers are completely unimpressed by these feelings, and they are wrong. Happy 25th anniversary to all Reaganites! It's been a quarter of a century since Reagan defeated Carter. (Jágr spoke to Reagan in 1992 via telephone, and the conversation may have been difficult because he did not know what "Gipper" meant, etc.) Jágr also enjoyed being himself as a player whose salary was as high as 11 million USD.
Well, such a situation opens new dimensions of freedom, including room for heavy gambling, speeding tickets in fast cars, new arcade games, other symbols of Jágr's never-ending childhood, and occasional debates with the IRS. Two or three years ago, his difficult years were caused partly by the limitations imposed on his freedom and on his jokes, and partly by the breakup with Andrea Verešová, a former Miss Slovakia (2003). Well, yes, another reason for my understanding of Jágr is that I can imagine very well that it is discouraging if your Slovak girlfriend is not functioning properly. She is an attractive woman; but things look very different from the viewpoint of eternity.

Saturday, November 05, 2005

A reader has pointed out that The Guardian, a British left-wing daily, is promoting the theory and industry (power plants) based on the "hydrinos" invented by Randell Mills, a Harvard-trained medical doctor, and investigated by a "hardy band of scientists", as the newspaper calls the undereducated or corrupt humans who believe this stuff. Hydrinos are supposed to be small versions of the hydrogen atom. No muons are involved; indeed, Mills also wants to kill the uncertainty principle. Forget your 143a Quantum Mechanics I as well as billions of experiments that confirm it in detail. Instead of the muons, the theory underlying Mills' activities is based on the assumption that quantum mechanics should be replaced by the so-called "Grand Unified Theory of classical quantum mechanics", and that we should also abandon the Big Bang theory. Randell Mills is no small fish among the crackpots. By 2000, he had collected 25 million USD for his "BlackLight Power Inc." company. He has actually built a factory, and no doubt his bank account is much richer today. How much energy does Mills get, according to his own words? He can get 1000 times more than from conventional fuels, he says. Also, Mills has demonstrated that he is able to multiply "2 times 13.6 electronvolts". He obtained 27.2 electronvolts. He argues that his "catalysts" absorb 27.2 electronvolts from the hydrogen that is becoming a hydrino. In 2000, Mills also promised to fill California with hydrino plasma cars. He claimed that he had made breakthroughs in artificial intelligence, cosmology, medicine, and gravitational jujitsu.

Friday, November 04, 2005

Anthropic Weinberg

Steven Weinberg is an exceptional physicist. He not only gave the name to the Standard Model, but he also discovered it - in the old era when it did not include QCD yet. He has made lots of other discoveries, he is still "in", and he even follows the very technical recent discoveries in string theory. Many of us have read his popular books such as The First Three Minutes and Dreams of a Final Theory, and they influenced us tremendously. One of his controversial but equally successful predictions was the prediction of the rough size of the cosmological constant assuming the galactic principle. If cosmology is to allow galaxy formation, the cosmological constant can't be too large, because space would expand and dilute too quickly, before the clumps could be created by the gravitational pull. But it definitely can't be too negative either, because the Universe would approach the Big Crunch too early.
There is an allowed window - the anthropic or galactic window - and the measurements in 1998 confirmed that our Universe seems to be somewhere in this window, on its preferred, positive side. Although I am sure that Weinberg hates the idea of the anthropic principle deeply in his stomach and mind, he decided to accept it. And because his comment about the cosmological constant is the only at least partially and superficially successful prediction of the anthropic principle among those that can't be explained more accurately, he became a true prophet of the Anthropic Church. In his opening talk, Weinberg presents some novel ideology to support this principle. The biggest revolutions of physics not only answer questions in new ways, but they even change the classification of which questions are important and well-defined. And the anthropic principle and the landscape seem to be doing something similar, he argues. Asking for an explanation of the values of the parameters of the Standard Model is as meaningless as asking how to describe the interior of an electron or how to explain that the aether causes no winds, Weinberg suggests.

Most published findings are false

Via Wolfgang Flamme, a reader of Stephen McIntyre's blog. In his article,

• financial and other interests and prejudices are high
Pseudo-spectral method

Pseudo-spectral methods,[1] also known as discrete variable representation (DVR) methods, are a class of numerical methods used in applied mathematics and scientific computing for the solution of partial differential equations. They are closely related to spectral methods, but complement the basis by an additional pseudo-spectral basis, which makes it possible to represent functions on a quadrature grid. This simplifies the evaluation of certain operators, and can considerably speed up the calculation when using fast algorithms such as the fast Fourier transform.

Motivation with a concrete example

Take the initial-value problem

i \frac{\partial}{\partial t} \psi(x, t) = \Bigl[-\frac{\partial^2}{\partial x^2} + V(x) \Bigr] \psi(x,t), \qquad \psi(t_0) = \psi_0

with periodic boundary conditions \psi(x+2\pi, t) = \psi(x, t). This specific example is the Schrödinger equation for a particle in a potential V(x), but the structure is more general. In many practical partial differential equations, one has a term that involves derivatives (such as a kinetic energy contribution) and a multiplication with a function (for example, a potential).

In the spectral method, the solution \psi is expanded in a suitable set of basis functions, for example plane waves,

\psi(x,t) = \frac{1}{\sqrt{2\pi}} \sum_n c_n(t) e^{i n x} .

Inserting this expansion and equating identical coefficients yields a set of ordinary differential equations for the coefficients,

i \frac{d}{dt} c_n(t) = n^2 c_n + \sum_k V_{nk} c_k ,

where the elements V_{nk} are calculated through the explicit Fourier transform

V_{nk} = \frac{1}{2\pi} \int_0^{2\pi} V(x) \, e^{i (k-n) x} dx .

The solution would then be obtained by truncating the expansion to N basis functions and finding a solution for the c_n(t). In general, this is done by numerical methods, such as Runge–Kutta methods. For the numerical solution, the right-hand side of the ordinary differential equation has to be evaluated repeatedly at different time steps.

At this point, the spectral method has a major problem with the potential term V(x). In the spectral representation, the multiplication with the function V(x) transforms into a matrix-vector multiplication, which scales as N^2. Also, the matrix elements V_{nk} need to be evaluated explicitly before the differential equation for the coefficients can be solved, which requires an additional step.

In the pseudo-spectral method, this term is evaluated differently. Given the coefficients c_n(t), an inverse discrete Fourier transform yields the value of the function \psi at discrete grid points x_j = 2\pi j/N. At these grid points, the function is then multiplied, \psi'(x_j, t) = V(x_j) \psi(x_j, t), and the result is Fourier-transformed back. This yields a new set of coefficients c'_n(t) that are used instead of the matrix product \sum_k V_{nk} c_k(t).

It can be shown that both methods have similar accuracy. However, the pseudo-spectral method allows the use of a fast Fourier transform, which scales as O(N \ln N) and is therefore significantly more efficient than the matrix multiplication. Also, the function V(x) can be used directly without evaluating any additional integrals.
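As a concrete illustration of this procedure, here is a minimal NumPy sketch of the pseudo-spectral evaluation of the right-hand side for the example above (the grid size, potential, and initial state are arbitrary choices made for the illustration):

```python
# Pseudo-spectral evaluation of i dc/dt = n^2 c_n + (V psi)_n for the
# periodic Schroedinger example above (illustrative parameters only).
import numpy as np

N = 64                                 # number of basis functions / grid points
x = 2 * np.pi * np.arange(N) / N       # grid points x_j = 2*pi*j/N
V = np.cos(x)                          # an arbitrary smooth periodic potential
n = np.fft.fftfreq(N, d=1.0 / N)       # integer wave numbers of the plane waves e^{inx}

def rhs(c):
    """Right-hand side of the coefficient ODE, without building the matrix V_nk."""
    psi = np.fft.ifft(c)               # coefficients -> values on the grid
    Vpsi_coeff = np.fft.fft(V * psi)   # multiply on the grid, transform back
    return -1j * (n**2 * c + Vpsi_coeff)

# One explicit Euler step as a usage illustration; a real computation would
# use e.g. a Runge-Kutta method, as mentioned in the text.
c = np.fft.fft(np.exp(-(x - np.pi)**2))   # coefficients of a Gaussian initial state
c = c + 1e-4 * rhs(c)
```

Each evaluation of the right-hand side costs two fast Fourier transforms, i.e. O(N ln N) operations, instead of the O(N^2) matrix-vector product of the pure spectral method.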
Technical discussion

In a more abstract way, the pseudo-spectral method deals with the multiplication of two functions V(x) and f(x) as part of a partial differential equation. To simplify the notation, the time dependence is dropped. Conceptually, the method consists of three steps:

1. The functions f(x) and \tilde{f}(x) = V(x)f(x) are expanded in a finite set of basis functions (this is the spectral method).
2. For a given set of basis functions, a quadrature is sought that converts scalar products of these basis functions into a weighted sum over grid points.
3. The product is calculated by multiplying V and f at each grid point.

Expansion in a basis

The functions f, \tilde f can be expanded in a finite basis \{\phi_n\}_{n = 0,\ldots,N} as

f(x) = \sum_{n=0}^N c_n \phi_n(x), \qquad \tilde f(x) = \sum_{n=0}^N \tilde c_n \phi_n(x).

For simplicity, let the basis be orthogonal and normalized, \langle \phi_n, \phi_m \rangle = \delta_{nm}, using the inner product \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx with appropriate boundaries a, b. The coefficients are then obtained by

c_n = \langle f, \phi_n \rangle, \qquad \tilde c_n = \langle \tilde f, \phi_n \rangle.

A bit of calculus then yields

\tilde c_n = \sum_{m=0}^N V_{nm} c_m, \qquad V_{nm} = \langle V \phi_m, \phi_n \rangle.

This forms the basis of the spectral method. To distinguish the basis of the \phi_n from the quadrature basis, the expansion is sometimes called the Finite Basis Representation (FBR).

For a given basis \{\phi_n\} and a number N+1 of basis functions, one can try to find a quadrature, i.e., a set of N+1 points and weights such that

\langle \phi_n, \phi_m \rangle = \sum_{i=0}^N w_i \phi_n(x_i) \overline{\phi_m(x_i)}, \qquad n,m = 0,\ldots,N.

Special examples are the Gaussian quadrature for polynomials and the discrete Fourier transform for plane waves. It should be stressed that the grid points and weights, x_i, w_i, are a function of the basis and of the number N.

The quadrature allows an alternative numerical representation of the functions f(x), \tilde f(x) through their values at the grid points. This representation is sometimes denoted the Discrete Variable Representation (DVR), and it is completely equivalent to the expansion in the basis:

f(x_i) = \sum_{n=0}^N c_n \phi_n(x_i), \qquad c_n = \langle f, \phi_n \rangle = \sum_{i=0}^{N} w_i f(x_i) \overline{\phi_n(x_i)}.

The multiplication with the function V(x) is then done at each grid point, \tilde f(x_i) = V(x_i) f(x_i). This generally introduces an additional approximation. To see this, we can calculate one of the coefficients \tilde c_n using the quadrature:

\tilde c_n = \langle \tilde f, \phi_n \rangle = \sum_i w_i \tilde f(x_i) \overline{\phi_n(x_i)} = \sum_i w_i V(x_i) f(x_i) \overline{\phi_n(x_i)}.

However, using the spectral method, the same coefficient would be \tilde c_n = \langle Vf, \phi_n \rangle. The pseudo-spectral method thus introduces the additional approximation

\langle Vf, \phi_n \rangle \approx \sum_i w_i V(x_i) f(x_i) \overline{\phi_n(x_i)}.

If the product Vf can be represented within the given finite set of basis functions, the above equation is exact, due to the chosen quadrature.

Special pseudospectral schemes

The Fourier method

If periodic boundary conditions with period [0,L] are imposed on the system, the basis functions can be generated by plane waves,

\phi_n(x) = \frac{1}{\sqrt{L}} e^{-\imath k_n x}

with k_n = (-1)^n \lceil n/2 \rceil 2\pi/L, where \lceil \cdot \rceil is the ceiling function. The quadrature for a cut-off at n_{\text{max}} = N is given by the discrete Fourier transform: the grid points are equally spaced, x_i = i \Delta x, with spacing \Delta x = L/(N+1), and the weights are constant, w_i = \Delta x.

For the discussion of the error, note that the product of two plane waves is again a plane wave, \phi_a \phi_b \propto \phi_c with c \leq a+b. Thus, qualitatively, if the functions f(x), V(x) can be represented sufficiently accurately with N_f, N_V basis functions, the pseudo-spectral method gives accurate results if of the order of N_f + N_V basis functions are used. An expansion in plane waves is often of poor quality and needs many basis functions to converge. However, the transformation between the basis expansion and the grid representation can be done using a fast Fourier transform, which scales favorably as N \ln N. As a consequence, plane waves are one of the most common expansions encountered with pseudo-spectral methods.

Another common expansion is in classical polynomials. Here, Gaussian quadrature is used, which guarantees that one can always find weights w_i and points x_i such that

\int_a^b w(x) p(x) \, dx = \sum_{i=0}^N w_i p(x_i)

holds for any polynomial p(x) of degree 2N+1 or less. Typically, the weight function w(x) and the range a, b are chosen for a specific problem, leading to one of the various forms of the quadrature. To apply this to the pseudo-spectral method, we choose basis functions \phi_n(x) = \sqrt{w(x)} P_n(x), with P_n being a polynomial of degree n with the property

\int_a^b w(x) P_n(x) P_m(x) \, dx = \delta_{mn}.

Under these conditions, the \phi_n form an orthonormal basis with respect to the scalar product \langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx. This basis, together with the quadrature points, can then be used for the pseudo-spectral method.

For the discussion of the error, note that if f is well represented by N_f basis functions and V is well represented by a polynomial of degree N_V, their product can be expanded in the first N_f + N_V basis functions, and the pseudo-spectral method will give accurate results for that many basis functions. Such polynomials occur naturally in several standard problems. For example, the quantum harmonic oscillator is ideally expanded in Hermite polynomials, and Jacobi polynomials can be used to define the associated Legendre functions that typically appear in rotational problems.
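As a small numerical illustration of such a Gaussian quadrature (an illustrative sketch; the basis size and the test polynomial are arbitrary choices), NumPy provides the Gauss-Hermite points and weights for the weight function w(x) = exp(-x^2) that underlies the harmonic-oscillator (Hermite) basis:

```python
# Gauss-Hermite quadrature behind a Hermite-polynomial (oscillator) basis.
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

N = 20
x, w = hermgauss(N)   # sum_i w_i p(x_i) equals integral exp(-x^2) p(x) dx for deg(p) <= 2N-1

# Exactness check on p(x) = x^4: the integral is (3/4) * sqrt(pi).
print(np.dot(w, x**4), 0.75 * math.sqrt(math.pi))

# Orthonormality of the first few normalized Hermite polynomials, evaluated with
# the quadrature (the sqrt(w(x)) factor of phi_n is absorbed into the weights w_i).
norm = lambda k: 1.0 / math.sqrt(math.sqrt(math.pi) * 2.0**k * math.factorial(k))
P = np.array([norm(k) * hermval(x, [0.0] * k + [1.0]) for k in range(4)])
print(np.round(P @ np.diag(w) @ P.T, 10))   # approximately the 4x4 identity matrix
```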
References

1. ^ Orszag, Steven A. (1972). "Comparison of Pseudospectral and Spectral Approximation". Studies in Applied Mathematics 51: 253-259.

• Steven A. Orszag (1969) Numerical Methods for the Simulation of Turbulence, Phys. Fluids Supp. II, 12, 250-257
• D. Gottlieb and S. Orszag (1977) Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, PA
• J. Hesthaven, S. Gottlieb and D. Gottlieb (2007) Spectral Methods for Time-Dependent Problems, Cambridge University Press, Cambridge, UK
• Lloyd N. Trefethen (2000) Spectral Methods in MATLAB, SIAM, Philadelphia, PA
• Bengt Fornberg (1996) A Practical Guide to Pseudospectral Methods, Cambridge University Press, Cambridge, UK
• John P. Boyd, Chebyshev and Fourier Spectral Methods
• Daniele Funaro (1992) Polynomial Approximation of Differential Equations, Lecture Notes in Physics, Volume 8, Springer-Verlag, Heidelberg
• Javier de Frutos, Julia Novo: A Spectral Element Method for the Navier-Stokes Equations with Improved Accuracy
• Canuto C., Hussaini M. Y., Quarteroni A., and Zang T. A. (2006) Spectral Methods. Fundamentals in Single Domains, Springer-Verlag, Berlin Heidelberg
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.7. Spectral Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Deuterium

Name, symbol: Hydrogen-2, 2H or D
Neutrons: 1
Protons: 1

Nuclide data
Natural abundance: 0.0156% (Earth)
Isotope mass: 2.01410178 u
Spin: 1+
Excess energy: 13135.720 ± 0.001 keV
Binding energy: 2224.52 ± 0.20 keV

Deuterium (symbol D or 2H, also known as heavy hydrogen) is one of two stable isotopes of hydrogen. The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the far more common hydrogen isotope, protium, has no neutron in the nucleus. Deuterium has a natural abundance in Earth's oceans of about one atom in 6,420 of hydrogen. Thus deuterium accounts for approximately 0.0156% (or, on a mass basis, 0.0312%) of all the naturally occurring hydrogen in the oceans, while the most common isotope (hydrogen-1 or protium) accounts for more than 99.98%. The abundance of deuterium changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water).

The deuterium isotope's name is formed from the Greek deuteros, meaning "second", to denote the two particles composing the nucleus.[1] Deuterium was discovered and named in 1931 by Harold Urey, earning him a Nobel Prize in 1934. This followed the discovery of the neutron in 1932, which made the nuclear structure of deuterium obvious. Soon after deuterium's discovery, Urey and others produced samples of "heavy water" in which the deuterium had been highly concentrated with respect to protium.

Because deuterium is destroyed in the interiors of stars faster than it is produced, and because other natural processes are thought to produce only an insignificant amount of deuterium, it is thought that nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, and that the basic or primordial ratio of hydrogen-1 (protium) to deuterium (about 26 atoms of deuterium per million hydrogen atoms) has its origin at that time. This is the ratio found in the gas giant planets, such as Jupiter. However, different astronomical bodies are found to have different ratios of deuterium to hydrogen-1, and this is thought to be a result of natural isotope-separation processes that occur from solar heating of ices in comets. Like the water cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. In fact, the discovery of deuterium/protium ratios in a number of comets very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogen atoms) has led to theories that much of Earth's ocean water has a cometary origin.[2][3] Deuterium/protium ratios thus continue to be an active topic of research in both astronomy and climatology.

Differences between deuterium and common hydrogen (protium)

Chemical symbol

[Figure: a deuterium discharge tube]

Deuterium is frequently represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred.[4] A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (1H) (deuterium has a mass of 2.014102 u, compared to the mean hydrogen atomic weight of 1.007947 u, and protium's mass of 1.007825 u) confers non-negligible chemical dissimilarities with protium-containing compounds, whereas the isotope weight ratios within other chemical elements are largely insignificant in this regard.
In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For the hydrogen atom, the role of the reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and the Rydberg equation, but the reduced mass also appears in the Schrödinger equation and the Dirac equation for calculating atomic energy levels. The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of the mass of the electron to that of the atomic nucleus. For hydrogen, this amount is about 1837/1836, or 1.000545, and for deuterium it is even smaller: 3671/3670, or 1.0002725. The energies of spectroscopic lines for deuterium and light hydrogen (hydrogen-1) therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by a factor of 1.000272. In astronomical observation, this corresponds to a blue Doppler shift of 0.000272 times the speed of light, or 81.6 km/s.[5] The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy,[1] and in rotational spectra such as microwave spectroscopy, because the reduced mass of the deuterium is markedly higher than that of protium.
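As a worked check of these numbers (a sketch using only the scaling of the energy levels with the reduced mass \mu = m_e M / (m_e + M)):

$$\frac{E_{\mathrm{D}}}{E_{\mathrm{H}}}=\frac{\mu_{\mathrm{D}}}{\mu_{\mathrm{H}}}
=\frac{1+m_e/M_{\mathrm{H}}}{1+m_e/M_{\mathrm{D}}}
\approx 1+m_e\left(\frac{1}{M_{\mathrm{H}}}-\frac{1}{M_{\mathrm{D}}}\right)
\approx 1+\frac{1}{1836}\cdot\frac{1}{2}\approx 1.000272,$$

so each deuterium line sits at a wavelength shorter by this factor, and reading the shift as a Doppler velocity gives 0.000272 x 299,792 km/s ≈ 81.6 km/s, as quoted above.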
At this point, the elemental abundances were nearly fixed, changing only as some of the radioactive products of BBN (such as tritium) decayed.[6] The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (there are no stable nuclei with mass numbers of five or eight), meant that only an insignificant amount of carbon, or of elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the limited extent of nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as our Sun.

Deuterium occurs in trace amounts naturally as deuterium gas, written 2H2 or D2, but most of its natural occurrence in the universe is bonded with a typical 1H atom, forming a gas called hydrogen deuteride (HD or 1H2H).[7] The existence of deuterium on Earth, elsewhere in the solar system (as confirmed by planetary probes), and in the spectra of stars is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there is no known natural process other than Big Bang nucleosynthesis that could have produced deuterium at anything close to its observed natural abundance (deuterium is produced by the rare cluster decay, and by occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources). There is thought to be little deuterium in the interior of the Sun and other stars, since at the temperatures there the nuclear fusion reactions that consume deuterium happen much faster than the proton-proton reaction that creates it. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of deuterium seems to be a very similar fraction of hydrogen wherever hydrogen is found, unless there are obvious processes at work that concentrate it.

The existence of deuterium at a low but constant primordial fraction in all hydrogen is another of the arguments in favor of the Big Bang theory over the Steady State theory of the universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago.[8] Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP-estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in our galaxy than expected, or perhaps that deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy.[9] In space a few hundred light years from the Sun, the deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space.[10] The abundance of deuterium in the atmosphere of Jupiter has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms.
ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter,[11] and this abundance is thought to represent a value close to the primordial solar-system ratio.[3] This is about 17% of the terrestrial deuterium-to-hydrogen ratio of 156 deuterium atoms per million hydrogen atoms. Cometary bodies such as Comet Hale-Bopp and Halley's Comet have been measured to contain relatively more deuterium (about 200 atoms D per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of a deuterium amount of 161 atoms D per million hydrogens in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that of Earth's oceans, supports the theory that Earth's surface water may be largely comet-derived.[2][3] Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus.

Deuterium is produced for industrial, scientific, and military purposes by starting with ordinary water (a small fraction of which is naturally occurring heavy water) and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods. The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.

Physical properties

The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical differences from the hydrogen analogs; for example, D2O is more viscous than H2O.[12] Chemically, deuterium behaves similarly to ordinary hydrogen, but there are differences in bond energy and length for compounds of heavy hydrogen isotopes that are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in light hydrogen, and these differences are enough to cause significant changes in biological reactions. Deuterium can replace normal hydrogen in water molecules to form heavy water (D2O), which is about 10.6% denser than normal water (enough that ice made from it sinks in ordinary water). Heavy water is slightly toxic in eukaryotic animals: 25% substitution of the body water causes cell division problems and sterility, and 50% substitution causes death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water (though they grow more slowly).[13] Consumption of heavy water does not pose a health threat to humans; it is estimated that a 70 kg person might drink 4.8 liters of heavy water without serious consequences.[14] Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.

Quantum properties

The deuteron has spin 1 and positive parity (a "triplet" state) and is thus a boson. The NMR frequency of deuterium is significantly different from that of common light hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency between a vibrating chemical bond containing deuterium and one containing light hydrogen.
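These spectroscopic distinctions all trace back to reduced-mass scaling. As a check of the electronic line-shift factor of 1.000272 quoted earlier, here is a minimal sketch (Python; the electron-to-nucleon mass ratios are standard approximate values, not taken from the text):

```python
# Mass ratios are approximate standard values (assumptions, not from the text):
m_e = 1.0                  # electron mass as the unit
m_p = 1836.15 * m_e        # proton
m_d = 3670.48 * m_e        # deuteron

mu_H = m_e * m_p / (m_e + m_p)   # reduced mass, hydrogen-1
mu_D = m_e * m_d / (m_e + m_d)   # reduced mass, deuterium

# Line energies scale with the reduced mass; wavelengths scale inversely:
ratio = mu_D / mu_H
print(f"E_D / E_H = {ratio:.6f}")                                     # ~1.000272
print(f"Doppler-equivalent shift: {(ratio - 1) * 2.998e5:.1f} km/s")  # ~81.6
```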
The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.

The triplet deuteron is barely bound, with binding energy EB = 2.23 MeV, and it has no bound excited states. The singlet deuteron is a virtual state, with a negative binding energy of ~60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron-proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.[15]

Nuclear properties (the deuteron)

Deuteron mass and radius

The nucleus of deuterium is called a deuteron. It has a mass of 2.013553212724(78) u.[16] The charge radius of the deuteron is 2.1402(28) fm.[17]

Spin and energy

Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons (2H, 6Li, 10B, 14N, 180mTa; the long-lived radioactive nuclides 40K, 50V, 138La, and 176Lu also occur naturally). Most odd-odd nuclei are unstable with respect to beta decay, because the decay products are even-even and therefore more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle, which would require one of two identical particles with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, primarily because of the increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and the dineutron to be unbound.

The proton and neutron making up deuterium can be dissociated through neutral-current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.

Isospin singlet state of the deuteron

Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has an electric charge, this is often negligible because of the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T). Isospin is an SU(2) symmetry, like ordinary spin, and is completely analogous to it. The proton and neutron form an isospin doublet, with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can be either in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" and "up" states, the singlet is

$$\frac{1}{\sqrt{2}}\Big( |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \Big),$$

which is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet consists of the three states

$$|{\uparrow\uparrow}\rangle, \qquad \frac{1}{\sqrt{2}}\Big( |{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle \Big), \qquad |{\downarrow\downarrow}\rangle,$$

and thus comprises three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons.
The latter two nuclei are not bound at all, and therefore this form of deuterium (which is indeed a highly excited state of the deuteron) is not stable either.

Approximate wavefunction of the deuteron

The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e., has "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e., has "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even, the parity is even (positive); if it is odd, the parity is odd (negative).

The deuteron, being an isospin singlet, is antisymmetric under nucleon exchange due to isospin, and therefore must be symmetric under the double exchange of spin and location. Therefore it can be in either of the following two different states:

• Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons multiplies the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange, and (+1) from parity (location exchange), for a total of (−1), as needed for antisymmetry.

• Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons multiplies the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange, and (−1) from parity (location exchange), again for a total of (−1), as needed for antisymmetry.

In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has even parity and therefore even orbital angular momentum l; the lower its orbital angular momentum, the lower its energy. Therefore the lowest possible energy state has s = 1, l = 0. In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has odd parity and therefore odd orbital angular momentum l. Therefore the lowest possible energy state has s = 0, l = 1. Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s = 1, l = 0 state.

The same considerations lead to the possible states of an isospin triplet having s = 0, l = even, or s = 1, l = odd. Thus the state of lowest energy has s = 1, l = 1, higher in energy than that of the isospin singlet.

The analysis just given is in fact only approximate, both because isospin is not an exact symmetry and, more importantly, because the strong nuclear interaction between the two nucleons contains a noncentral (tensor) component that does not commute with the orbital angular momentum and therefore mixes states of different l. That is, l is not constant in time (it does not commute with the Hamiltonian), and over time a state such as s = 1, l = 0 may become a state of s = 1, l = 2. Parity is still constant in time, so these do not mix with odd-l states (such as s = 0, l = 1). Therefore the quantum state of the deuterium is a superposition (a linear combination) of the s = 1, l = 0 state and the s = 1, l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1. This is the total spin of the deuterium nucleus.
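The exchange-symmetry bookkeeping above can be verified directly: building the two-particle swap operator shows that the singlet combination is antisymmetric (eigenvalue −1) and the triplet combination symmetric (+1). A minimal sketch (Python/NumPy, using the ↑/↓ basis of the previous section):

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def two(a, b):
    """Two-particle product state |a, b> as a length-4 vector."""
    return np.kron(a, b)

# SWAP exchanges the two particles: SWAP |a, b> = |b, a>
SWAP = sum(np.outer(two(b, a), two(a, b))
           for a in (up, dn) for b in (up, dn))

singlet = (two(up, dn) - two(dn, up)) / np.sqrt(2)
triplet0 = (two(up, dn) + two(dn, up)) / np.sqrt(2)

print(np.allclose(SWAP @ singlet, -singlet))    # True: antisymmetric, factor -1
print(np.allclose(SWAP @ triplet0, triplet0))   # True: symmetric, factor +1
```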
To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative orbital angular momentum of its nucleons, l, is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2.

Magnetic and electric multipoles

In order to find the deuterium magnetic dipole moment µ theoretically, one uses the formula for a nuclear magnetic moment

$$\mu = \frac{1}{j+1}\left\langle (l,s),j,m_j=j \left|\, \vec{\mu}\cdot\vec{j} \,\right| (l,s),j,m_j=j \right\rangle$$

with

$$\vec{\mu} = g^{(l)}\vec{l} + g^{(s)}\vec{s},$$

where g(l) and g(s) are g-factors of the nucleons. Since the proton and neutron have different values for g(l) and g(s), one must separate their contributions: each gets half of the deuterium orbital angular momentum $\vec{l}$ and spin $\vec{s}$. One arrives at

$$\mu = \frac{1}{j+1}\left\langle (l,s),j,m_j=j \left|\left(\tfrac{1}{2}\, g^{(l)}_p\, \vec{l} + \tfrac{1}{2}\big(g^{(s)}_p + g^{(s)}_n\big)\vec{s}\right)\cdot\vec{j}\,\right| (l,s),j,m_j=j \right\rangle,$$

where the subscripts p and n stand for the proton and neutron, and g(l)n = 0. Using the same angular-momentum identities as for a single nucleon and the value g(l)p = 1 (in nuclear magnetons), we arrive at the following result, in nuclear magneton units:

$$\mu = \frac{1}{4(j+1)}\left[\big(g^{(s)}_p + g^{(s)}_n\big)\big(j(j+1) - l(l+1) + s(s+1)\big) + \big(j(j+1) + l(l+1) - s(s+1)\big)\right].$$

For the s = 1, l = 0 state (j = 1), we obtain

$$\mu = \tfrac{1}{2}\big(g^{(s)}_p + g^{(s)}_n\big) = 0.879,$$

and for the s = 1, l = 2 state (j = 1), we obtain

$$\mu = -\tfrac{1}{4}\big(g^{(s)}_p + g^{(s)}_n\big) + \tfrac{3}{4} = 0.310.$$

The measured value of the deuterium magnetic dipole moment is 0.857 µN. This suggests that the state of the deuterium is indeed only approximately the s = 1, l = 0 state, and is actually a linear combination of (mostly) this state with the s = 1, l = 2 state (a short numerical check of these values is sketched below).

The electric dipole is zero as usual. The measured electric quadrupole of the deuterium is 0.2859 e·fm2. While the order of magnitude is reasonable, since the deuterium radius is of the order of 1 femtometer (see above) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) but does get a contribution from a term mixing the l = 0 and l = 2 states, because the electric quadrupole operator does not commute with angular momentum. The latter contribution dominates, since the pure l = 0 contribution vanishes, but it cannot be calculated without knowing the exact spatial form of the nucleon wavefunction inside the deuterium. Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.

[Images: ionized deuterium in a fusor reactor giving off its characteristic pinkish-red glow; emission spectrum of an ultraviolet deuterium arc lamp]

Deuterium has a number of commercial and scientific uses. These include:

Nuclear reactors

Deuterium is used in heavy water moderated fission reactors, usually as liquid D2O, to slow neutrons without the high neutron absorption of ordinary hydrogen.[18] This is a common commercial use for larger amounts of deuterium. In research reactors, liquid D2 is used in cold sources to moderate neutrons to the very low energies and long wavelengths appropriate for scattering experiments.
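Returning to the magnetic dipole estimates above, here is a minimal numerical cross-check (the free-nucleon spin g-factors are standard values not quoted in the text, and the two-state mixing formula ignores meson-exchange and relativistic corrections):

```python
# Deuteron magnetic moment: pure-state estimates and implied l=2 admixture.
# Free-nucleon spin g-factors (standard values; assumptions, not from the text):
g_s_p, g_s_n = 5.5857, -3.8261
mu_measured = 0.857          # measured deuteron moment, in nuclear magnetons

mu_S = 0.5 * (g_s_p + g_s_n)             # s=1, l=0 estimate (~0.880)
mu_D = -0.25 * (g_s_p + g_s_n) + 0.75    # s=1, l=2 estimate (~0.310)

# For a superposition a|l=0> + b|l=2>, mu = a^2*mu_S + b^2*mu_D (no cross term,
# since neither l nor s_z changes l), so the measured value fixes b^2:
P_d = (mu_S - mu_measured) / (mu_S - mu_D)
print(f"mu(l=0) = {mu_S:.3f}, mu(l=2) = {mu_D:.3f}, d-state probability ~ {P_d:.1%}")
```

The implied d-state probability of roughly 4% is consistent with the statement that the l = 2 component of the deuteron is small.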
Experimentally, deuterium is the most common nuclide used in nuclear fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the D–T reaction. There is an even higher-yield D–3He fusion reaction, though the breakeven point of D–3He is higher than that of most other fusion reactions; together with the scarcity of 3He, this makes it implausible as a practical power source until at least D–T and D–D fusion reactions have been performed on a commercial scale. However, commercial nuclear fusion is not yet an accomplished technology.

NMR spectroscopy

Deuterium is useful in hydrogen nuclear magnetic resonance spectroscopy (proton NMR). NMR ordinarily requires compounds of interest to be analyzed dissolved in solution. Because the nuclear spin properties of deuterium differ from those of the light hydrogen usually present in organic molecules, the NMR spectra of hydrogen/protium are highly distinguishable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned to light hydrogen. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl3) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference. Deuterium NMR spectra are especially informative in the solid state because of deuterium's relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35.

In chemistry, biochemistry, and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from ordinary hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; deuterium-carbon bond vibrations are found in spectral regions free of other signals.

Measurements of small variations in the natural abundance of deuterium, along with those of the stable heavy oxygen isotopes 17O and 18O, are of importance in hydrology for tracing the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (so-called meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to mean latitude). The relative enrichment of the heavy isotopes in rainwater (referenced to mean ocean water), when plotted against temperature, falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified, along with general information about the climate in which they originated. Evaporative and other processes in bodies of water, and also groundwater processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways.[19] The ratio of concentration of 2H to 1H is usually indicated with a delta as δ2H, and the geographic patterns of these values are plotted in maps termed isoscapes.
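The δ2H delta notation just mentioned is the relative deviation of a sample's D/H ratio from the VSMOW reference, expressed in per mil. A minimal sketch (Python; the VSMOW ratio is the 155.76 ppm figure quoted in the data list later in the article, and the sample value is hypothetical):

```python
R_VSMOW = 155.76e-6   # D/H ratio of Vienna Standard Mean Ocean Water

def delta_2H(R_sample):
    """delta-2H in per mil relative to VSMOW."""
    return (R_sample / R_VSMOW - 1.0) * 1000.0

# Hypothetical deuterium-depleted precipitation sample:
print(f"{delta_2H(140.0e-6):+.1f} per mil")   # about -101 per mil
```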
Stable isotopes are incorporated into plants and animals, and an analysis of the ratios in a migrant bird or insect can suggest a rough guide to their origins.[20][21]

Contrast properties

Neutron scattering techniques particularly profit from the availability of deuterated samples: the H and D cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Furthermore, a nuisance problem with ordinary hydrogen is its large incoherent neutron cross section, which is nil for D. The substitution of deuterium atoms for hydrogen atoms thus reduces scattering noise. Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. As hydrogen (and deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility,[22] fill a niche in many studies of macromolecules in biology and many other areas.

Nuclear weapons

This is discussed below. It is notable that although most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements, such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion that occurs in so-called hydrogen bombs, requires heavy hydrogen (either tritium or deuterium, or both) in order for the process to work.

Drugs

Main article: deuterated drug

Suggested neurological effects of natural abundance variation

The natural deuterium content of water has been suggested, from preliminary correlative epidemiology, to influence the incidence of affective disorder-related pathophysiology and major depression, which might be mediated by serotonergic mechanisms.[23]

History

Suspicion of lighter element isotopes

The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and was proven by mass spectrometry of light elements in 1920. The prevailing theory at the time, however, was that the isotopes were due to the existence of differing numbers of "nuclear electrons" in different atoms of an element. It was expected that hydrogen, with a measured average atomic mass very close to 1 u, the known mass of the proton, always had a nucleus composed of a single proton (a known particle), and therefore could not contain any nuclear electrons without losing its charge entirely. Thus, hydrogen could have no heavy isotopes.

Deuterium detected

[Image: Harold Urey]

Deuterium was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen down to a small residue of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards in Washington, D.C. (now the National Institute of Standards and Technology). The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boil-off technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.[24][25]

Naming of the isotope and Nobel Prize

Urey created the names protium, deuterium, and tritium in an article published in 1934. The name deuterium is based in part on advice from G. N. Lewis, who had proposed the name "deutium". It is derived from the Greek deuteros (second), and the nucleus was to be called "deuteron" or "deuton".
Isotopes and new elements were traditionally given the name that their discoverer decided. Some British chemists, like Ernest Rutherford, wanted the isotope to be called "diplogen", from the Greek diploos (double), and the nucleus to be called "diplon".[1]

The amount inferred for the normal abundance of this heavy isotope of hydrogen was so small (only about 1 atom in 6,400 hydrogen atoms in ocean water, or 156 deuteriums per million hydrogens) that it had not noticeably affected previous measurements of the (average) hydrogen atomic mass. This explained why it had not been experimentally suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis prepared the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explainable, deuterium won Urey the Nobel Prize in Chemistry in 1934. Lewis was embittered by being passed over for this recognition, given to his former student.[1]

"Heavy water" experiments in World War II

Main article: Heavy water

Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to England, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums.[26][27]

During World War II, Nazi Germany was known to be conducting experiments using heavy water as a moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow Germany to produce plutonium for an atomic bomb. Ultimately this led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.

After World War II ended, the Allies discovered that Germany had not been putting as much serious effort into the program as had been previously thought. The Germans had been unable to sustain a chain reaction and had completed only a small, partly built experimental reactor (which had been hidden away). By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even had the Germans succeeded in getting a reactor operational (as the U.S. did with a graphite reactor in late 1942), they would still have been at least several years away from the development of an atomic bomb, even with maximal effort. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and the U.S.S.R., for example.

Deuterium in thermonuclear weapons

[Image: A view of the Sausage device casing of the Ivy Mike hydrogen bomb, with its instrumentation and cryogenic equipment attached. The bomb held a cryogenic Dewar flask with room for as much as 160 kilograms of liquid deuterium; it was 20 feet tall (note the seated man at the right of the photo for scale).]

The 62-ton Ivy Mike device, built by the United States and exploded on 1 November 1952, was the first fully successful "hydrogen bomb" (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb.
The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical insulated vacuum flask (cryostat) held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). A conventional atomic bomb (the "primary") at one end of the device was used to create the conditions of extreme temperature and pressure needed to set off the thermonuclear reaction.

Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and still more free neutrons.

Data for elemental deuterium

• Formula: D2 or 2H2
• Density: 0.180 kg/m3 at STP (0 °C, 101.325 kPa)
• Atomic weight: 2.0141017926 u
• Mean abundance in ocean water (from VSMOW): 155.76 ± 0.1 ppm (a ratio of 1 part per approximately 6,420 parts), that is, about 0.015% of the atoms in a sample (by number, not weight)

Data at approximately 18 K for D2 (triple point):

• Density: liquid 162.4 kg/m3; gas 0.452 kg/m3
• Viscosity: 12.6 µPa·s at 300 K (gas phase)
• Specific heat capacity at constant pressure cp: solid 2,950 J/(kg·K); gas 5,200 J/(kg·K)

Antideuterium

An antideuteron is the antiparticle of the nucleus of deuterium, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN[28] and at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.[29] A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but as of 2005 antideuterium had not yet been created. The proposed symbol for antideuterium is D̄, that is, D with an overbar.[30]

References

1. O'Leary, Dan (2012). "The deeds to deuterium". Nature Chemistry 4: 236. doi:10.1038/nchem.1273. "Science: Deuterium v. Diplogen". Time, 19 February 1934.
2. Hartogh, Paul; Lis, Dariusz C.; Bockelée-Morvan, Dominique; De Val-Borro, Miguel; Biver, Nicolas; Küppers, Michael; Emprechtinger, Martin; Bergin, Edwin A.; et al. (2011). "Ocean-like water in the Jupiter-family comet 103P/Hartley 2". Nature 478 (7368): 218–220. doi:10.1038/nature10519. PMID 21976024.
3. Hersant, Franck; Gautier, Daniel; Huré, Jean-Marc (2001). "A two-dimensional model for the primordial nebula constrained by D/H measurements in the Solar System: implications for the formation of giant planets". The Astrophysical Journal 554: 391. doi:10.1086/321355. (See Fig. 7 for a review of D/H ratios in various astronomical objects.)
4. "§ IR-3.3.2 Provisional Recommendations". Nomenclature of Inorganic Chemistry. Chemical Nomenclature and Structure Representation Division, IUPAC. Retrieved 2007-10-03.
5. Hébrard, G.; Péquignot, D.; Vidal-Madjar, A.; Walsh, J. R.; Ferlet, R. (7 February 2000). "Detection of deuterium Balmer lines in the Orion Nebula".
6. Weiss, Achim.
"Equilibrium and change: The physics behind Big Bang Nucleosynthesis". Einstein Online. Retrieved 2007-02-24.  7. ^ IUPAC Commission on Nomenclature of Inorganic Chemistry (2001). "Names for Muonium and Hydrogen Atoms and their Ions" (PDF). Pure and Applied Chemistry 73 (2): 377–380. doi:10.1351/pac200173020377.  8. ^ "Cosmic Detectives". The European Space Agency (ESA). 2 April 2013. Retrieved 2013-04-15.  9. ^ NASA page on FUSE satellite 10. ^ graph of deuterium with distance in our galactic neighborhood See also "What is the Total Deuterium Abundance in the Local Galactic Disk?" Linsky, J. L., Draine, B. T., Moos, H. W., Jenkins, E. B., Wood, B. E., Oliviera, C., Blair, W. P., Friedman, S. D., Knauth, D., Lehner, N., Redfield, S., Shull, J. M., Sonneborn, G., Williger, G. M., 2006, The Astrophysical Journal, Vol. 647, page 1106. 11. ^ Lellouch, E; Bézard, B.; Fouchet, T.; Feuchtgruber, H.; Encrenaz, T.; De Graauw, T. (2001). "The deuterium abundance in Jupiter and Saturn from ISO-SWS observations". Astronomy & Astrophysics 670 (2): 610–622. Bibcode:2001A&A...370..610L. doi:10.1051/0004-6361:20010259.  12. ^ Lide, D. R., ed. (2005). CRC Handbook of Chemistry and Physics (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5.  13. ^ D. J. Kushner, Alison Baker, and T. G. Dunstall (1999). "Pharmacological uses and perspectives of heavy water and deuterated compounds". Can. J. Physiol. Pharmacol. 77 (2): 79–88. doi:10.1139/cjpp-77-2-79. PMID 10535697.  14. ^ Attila Vertes, ed. (2003). "Physiological effect of heavy water". Elements and isotopes: formation, transformation, distribution. Dordrecht: Kluwer. pp. 111–112. ISBN 978-1-4020-1314-0.  15. ^ Neutron-Proton Scattering. (PDF). Retrieved on 2011-11-23. 16. ^ 2002 CODATA recommended value. Retrieved on 2011-11-23. 17. ^ 2006 CODATA recommended value Mohr, Peter J.; Taylor, Barry N.; Newell, David B. (2008). "CODATA Recommended Values of the Fundamental Physical Constants: 2006". Rev. Mod. Phys. 80 (2): 633–730. arXiv:0801.0028. Bibcode:2008RvMP...80..633M. doi:10.1103/RevModPhys.80.633.  18. ^ See neutron cross section#Typical cross sections 19. ^ "Oxygen – Isotopes and Hydrology". SAHRA. Retrieved 2007-09-10.  20. ^ West, Jason B. (2009). Isoscapes: Understanding movement, pattern, and process on Earth through isotope mapping. Springer.  21. ^ Hobson, KA; Van Wilgenburg, SL; Wassenaar, LI; Larson, K (2012). "Linking Hydrogen (δ2H) Isotopes in Feathers and Precipitation: Sources of Variance and Consequences for Assignment to Isoscapes.". PLoS ONE 7 (4): e35137. Bibcode:2012PLoSO...735137H. doi:10.1371/journal.pone.0035137.  22. ^ "NMI3 - Deuteration". NMI3. Retrieved 2012-01-23.  23. ^ Strekalova T., Evans M. et al (2014). "Deuterium content of water increases depression susceptibility: The potential role of a serotonin-related mechanism.". Behavioural Brain Research. doi:10.1016/j.bbr.2014.07.039. PMID 25092571.  24. ^ Brickwedde, Ferdinand G. (1982). "Harold Urey and the discovery of deuterium". Physics Today 35 (9): 34. Bibcode:1982PhT....35i..34B. doi:10.1063/1.2915259.  25. ^ Urey, Harold; Brickwedde, F.; Murphy, G. (1932). "A Hydrogen Isotope of Mass 2". Physical Review 39: 164. Bibcode:1932PhRv...39..164U. doi:10.1103/PhysRev.39.164.  26. ^ Sherriff, Lucy (1 June 2007). "Royal Society unearths top secret nuclear research". The Register. Situation Publishing Ltd. Retrieved 2007-06-03.  27. ^ "The Battle for Heavy Water Three physicists' heroic exploits". CERN Bulletin. European Organization for Nuclear Research. 1 April 2002. 
28. Massam, T.; Muller, Th.; Righini, B.; Schneegans, M.; Zichichi, A. (1965). "Experimental observation of antideuteron production". Il Nuovo Cimento 39: 10–14. doi:10.1007/BF02814251.
29. Dorfan, D. E.; Eades, J.; Lederman, L. M.; Lee, W.; Ting, C. C. (June 1965). "Observation of antideuterons". Phys. Rev. Lett. 14 (24): 1003–1006. doi:10.1103/PhysRevLett.14.1003.
30. Chardonnet, P.; Orloff, Jean; Salati, Pierre (1997). "The production of anti-matter in our galaxy". Physics Letters B 409: 313. arXiv:astro-ph/9705110. doi:10.1016/S0370-2693(97)00870-8.
A Model System for Dimensional Competition in Nanostructures: A Quantum Wire on a Surface

Rainer Dick
Physics and Engineering Physics, University of Saskatchewan, 116 Science Place, Saskatoon, SK, Canada, S7N 5E2

Nanoscale Research Letters 2008, 3:140–144. doi:10.1007/s11671-008-9126-4
Received: 16 January 2008; Accepted: 13 March 2008; Published: 2 April 2008
© 2008 to the authors

Abstract

The retarded Green's function $(E - H + i\epsilon)^{-1}$ is given for a dimensionally hybrid Hamiltonian which interpolates between one and two dimensions. This is used as a model for dimensional competition in propagation effects in the presence of one-dimensional subsystems on a surface. The presence of a quantum wire generates additional exponential terms in the Green's function. The result shows how the location of the one-dimensional subsystem affects the propagation of particles.

Keywords: fermions in reduced dimensions; nanowires; quantum wires

Introduction

One-dimensional field theory is frequently used for quantum wires or nanowires [1-3] and nanotubes [4-7]. Two-dimensional field theory has become a universally accepted tool for the theoretical modeling of particles and quasi-particles on surfaces, interfaces, and thin films. The success of low-dimensional field theory in applications to the quantum Hall effects [8-10] and to impurity scattering in low-dimensional systems (see e.g. [11-16]), and the confirmation of low-dimensional critical exponents in experimental samples [17-22], confirm that low-dimensional field theories are useful tools for the description of low-dimensional condensed matter systems.

The properties of a physical system depend strongly on the number of dimensions $d$. A straightforward example is provided by the zero-energy Green's function $G(r)|_{E=0}$, which is proportional to $r$ in $d = 1$, proportional to $\ln r$ in $d = 2$, and decays like $r^{2-d}$ in higher dimensions. Green's functions determine correlation functions, two-particle interaction potentials, propagation of initial conditions, scattering off perturbations, susceptibilities, and densities of states in quantum physics. It is therefore of interest to study systems of mixed dimensionality, where competition of dimensions can manifest itself in the properties of particle propagators. To address questions of dimensional competition analytically in the framework of interfaces in a bulk material, dimensionally hybrid Hamiltonians of the form

$$H = \int\! d^3x \left[\frac{\hbar^2}{2m}\,\vec\nabla\psi^+\cdot\vec\nabla\psi + \delta(z-z_0)\,\frac{\hbar^2}{2\mu}\,\vec\nabla_\parallel\psi^+\cdot\vec\nabla_\parallel\psi\right] \tag{1}$$

were introduced in [23]. The corresponding first quantized Hamiltonian is

$$H = -\frac{\hbar^2}{2m}\,\Delta - \delta(z-z_0)\,\frac{\hbar^2}{2\mu}\,\Delta_\parallel. \tag{2}$$

Here the convention is to use vector notation for directions parallel to the interface, while z is orthogonal to the interface. From a practical side, Hamiltonians of the form (2) may be used to investigate propagation effects of weakly coupled particles in the presence of an interface. From a theoretical side, the Hamiltonians (1,2) are of interest for the analytic study of competition between two-dimensional and three-dimensional motion. The two-dimensional mass parameter μ is a mass per length. In simple models it is given in terms of a length scale which, depending on the model, is either a bulk penetration depth of states bound to the interface at z = z0 or a thickness of the interface, see [24].
The zero-energy Green's function for the Hamiltonians (1,2) for perturbations in the interface ($z' = z_0 = 0$, $\vec{x}\,' = 0$) satisfies the corresponding equation (3) and was found in [23] to be

$$G(r)\big|_{E=0} = \frac{1}{8\ell}\left[\mathbf{H}_0\!\left(\frac{r}{\ell}\right) - Y_0\!\left(\frac{r}{\ell}\right)\right]. \tag{4}$$

The Green's function in the interface is thus given in terms of a Struve function $\mathbf{H}_0$ and a Bessel function $Y_0$, and interpolates between two-dimensional and three-dimensional distance laws (see Fig. 1).

Figure 1. The upper dotted line is the three-dimensional Green's function $(4\pi r)^{-1}$ in units of $\ell^{-1}$, the continuous line is the Green's function (4), and the lower dotted line is the two-dimensional logarithmic Green's function; $x = r/\ell$.

The corresponding energy-dependent Green's function was also recently reported [24]. However, another system of great practical and theoretical interest concerns quantum wires or nanowires on surfaces. Preparation techniques for one-dimensional nanostructures were recently reviewed in reference [25]. We examine the corresponding dimensionally hybrid Hamiltonian and its Green's function in this paper.

The Hamiltonian

We wish to discuss effects of the dimensionality of nanostructures on the propagation of weakly coupled particles in the framework of a simple model system. We assume large de Broglie wavelengths h/p compared to the lateral dimensions of the nanostructures, and for our model system we also neglect electromagnetic effects or interactions, bearing in mind that these effects are highly relevant in realistic nanostructures [26,27]. The model system which we have in mind consists of non-relativistic particles or quasi-particles tied to a surface. The surface carries a one-dimensional wire. The x direction is along the wire and the y direction is orthogonal to the wire; the wire is located at y = y0. The particles can move with a mass m on the surface, but motion along the wire may be described by a different effective mass m*. In the case of a weak attraction to substructures, kinetic operators can be split between bulk motion and motion along substructures [24]. Alternatively, for large lateral de Broglie wavelength relative to the lateral extension of a substructure, one can also argue that the lateral integral of the kinetic energy density along a substructure only yields a factor in the kinetic energy for motion along the substructure. In either case we end up with an approximation for the kinetic energy operator of the particles of the form

$$H = \int\! dx\, dy \left[\frac{\hbar^2}{2m}\,\vec\nabla\psi^+\cdot\vec\nabla\psi + \delta(y-y_0)\,\frac{\hbar^2}{2\mu}\,\partial_x\psi^+\,\partial_x\psi\right], \tag{7}$$

where the mass parameter μ is a mass per lateral attenuation length of bound states, or a mass per lateral extension of the substructure. The operator (7) is the second quantized kinetic Hamiltonian for the particles. The corresponding first quantized Hamiltonian is

$$H = -\frac{\hbar^2}{2m}\left(\partial_x^2 + \partial_y^2\right) - \delta(y-y_0)\,\frac{\hbar^2}{2\mu}\,\partial_x^2. \tag{8}$$

The wire corresponds to a channel in which the propagation of a particle comes with a different cost in terms of kinetic energy. It is intuitively clear that the existence of this channel will affect the propagation of particles on the surface, and we discuss this in terms of the resulting Green's function for the Hamiltonians (7,8).

The Green's Function in k Space

The Hamiltonians (7,8) yield the Schrödinger equation (9) and the corresponding equation (10) for the Green's function in the energy representation; in the (x,y) representation, with the Fourier conventions of Eq. (12), the latter becomes a differential equation (11) for the Green's function. Substituting the Fourier transform of the Green's function into this equation yields an algebraic condition, which implies that the k-space Green's function consists of the free two-dimensional propagator plus a separable correction term generated by the wire, with a coefficient function fixed by a consistency condition at the location of the wire. The resulting closed-form expression (13), written in terms of the length scale $\ell = m/2\mu$ defined in (14), is the Green's function which we would use in k-space Feynman rules.
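As an aside, the interpolating behavior of the interface Green's function (4) is easy to inspect numerically; the sketch below (Python/SciPy) checks that it approaches the logarithmic law for r ≪ ℓ and the (4πr)⁻¹ law for r ≫ ℓ. Since Eq. (4) is restored here from the limits described in Fig. 1, its normalization should be treated as an assumption:

```python
import numpy as np
from scipy.special import struve, y0

ell = 1.0                       # interface scale l sets the length unit
r = np.logspace(-2, 2, 9)       # distances from r = 0.01 l to 100 l
x = r / ell

# Eq. (4) as restored above (assumption: normalization 1/(8 l)):
G_hybrid = (struve(0, x) - y0(x)) / (8.0 * ell)
G_3d = 1.0 / (4.0 * np.pi * r)                                    # (4 pi r)^-1
G_2d = -(np.log(x / 2.0) + np.euler_gamma) / (4.0 * np.pi * ell)  # 2D log law

for ri, gh, g3, g2 in zip(r, G_hybrid, G_3d, G_2d):
    print(f"r/l = {ri:7.2f}  hybrid = {gh:8.4f}  3D = {g3:8.4f}  2D = {g2:8.4f}")
# For r << l the hybrid values track the 2D logarithm;
# for r >> l they track 1/(4 pi r), reproducing Fig. 1.
```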
It is also instructive to switch to the y representation for the transverse direction to see the impact of the wire on particle propagation.

The Green's Function in Mixed Representations and Impurity Scattering

It is well known in surface science that Green's functions can also be given in closed form in mixed representations, where momentum coordinates are used along the surface and configuration-space coordinates are used for the transverse directions. The same observation applies here. In particular, replacing one transverse momentum by a transverse coordinate yields the closed-form Green's function (15), and transforming both transverse arguments to configuration-space coordinates yields the Green's function (16). The first-order perturbation of a state ψ0(x,y) due to scattering off an impurity potential V(x,y) then corresponds to the integral expression (17).

The result (16) shows peculiar distance effects between the location of the wire and the perturbation or impurity on the one hand, and between the location of the wire and the y coordinate of the wave function on the other hand. In both cases, the wavelength (for $2mE > \hbar^2 k_x^2$) or attenuation length (for $2mE < \hbar^2 k_x^2$) is the same as in the terms from the unperturbed surface propagator. In the evanescent case, the impact of the wire on impurity scattering is exponentially suppressed if either the impurity is located far from the wire or the wave function is considered far from the wire. In the non-evanescent case, the perturbation of the propagator due to the wire becomes a strongly oscillating function of the distance from the wire. Therefore the impact of the wire will also be small if we consider wave packets far from the wire.

For a simple application of (17), consider a wire at y0 = 0 and a pointlike impurity potential. A plane wave incident orthogonally onto the wire is a solution of the Schrödinger equation (9) which satisfies the conditions for the approximation (7). In this case we get a scattering amplitude which, for ℓ = 0, reduces to the standard result for scattering from a pointlike impurity in mixed representation. The presence of the wire reduces the scattering cross section of the impurity for orthogonal infall.

Equations (15) and (16) also show that the effects of the additional terms should be most noticeable if $k_x\ell \gg 1$. Since $\ell$ increases with decreasing μ, promising samples should have an effective mass m* for motion along the quantum wire or nanowire which is much smaller than the effective mass m for motion along the surface. What comes to mind is an InSb nanowire on a Si surface. Scattering of surface particles off impurities in the presence of the wire should exhibit the additional propagator terms.

Conclusions

A simple model system for dimensional competition in nanostructures has been proposed. The system assumes that motion along a wire on a surface comes with a different cost in terms of kinetic energy, e.g. due to effective-mass effects. The dimensionally hybrid retarded Green's function for the propagation of free particles in the system was found in closed analytic form both in k space and in mixed (kx,y) representations. The wire generates extra exponential terms in the propagator of the particles. The attenuation lengths or wavelengths in the evanescent or oscillating case, respectively, are the same as for the unperturbed propagator, but the extra terms exhibit distance effects between the particles and the wire.

Acknowledgements

This work was supported in part by NSERC Canada. I also gratefully acknowledge the generous hospitality of the Perimeter Institute for Theoretical Physics while this work was completed.

References

1. Wu H, Sprung DWL, Martorell J: Phys. Rev. B 1992, 45:11960.
2. Wan CC, Mozos JL, Taraschi G, Wang J, Guo H: Appl. Phys. Lett. 1997, 71:419.
3. Garcia-Vidal FJ, Flores F, Davison SG: Prog. Surf. Sci. 2003, 74:177.
4. Ando T, Suzuura H: Physica E 2003, 18:202.
5. Ando T: J. Phys. Soc. Jpn. 2005, 74:777.
6. Umegaki T, Ogawa M, Miyoshi T: J. Appl. Phys. 2006, 99:034307.
7. Ando T, Asada Y, Uryu S: Phys. Stat. Sol. A 2007, 204:1882.
8. Laughlin RB: Phys. Rev. Lett. 1983, 50:1395.
9. Chakraborty T, Pietiläinen P: The Quantum Hall Effects. Springer-Verlag, Berlin; 1995.
10. Chakraborty T: Adv. Phys. 2000, 49:959.
11. Lake R, Klimeck G, Bowen RC, Jovanovic D: J. Appl. Phys. 1997, 81:7845.
12. Fu Y, Willander M: Surf. Sci. 1997, 391:81.
13. Mazon KT, Hai GQ, Lee MT, Koenraad PM, van der Stadt AFW: Phys. Rev. B 2004, 70:193312.
14. Shytov AV, Mishchenko EG, Engel HA, Halperin BI: Phys. Rev. B 2006, 73:075316.
15. Grimaldi C, Cappelluti E, Marsiglio F: Phys. Rev. B 2006, 73:081303.
16. Ando T: J. Phys. Soc. Jpn. 2006, 75:074716.
17. Li Y, Baberschke K: Phys. Rev. Lett. 1992, 68:1208.
18. Back CH, Würsch C, Vaterlaus A, Ramsperger U, Maler U, Pescia D: Nature 1995, 378:597.
19. Elmers H-J, Hauschild J, Gradmann U: Phys. Rev. B 1996, 54:15224.
20. Wildes AR, Ronnow HM, Roessli B, Harris MJ, Godfrey KW: Phys. Rev. B 2006, 74:094422.
21. Takekoshi K, Sasaki Y, Ema K, Yao H, Takanishi Y, Takezoe H: Phys. Rev. E 2007, 75:031704.
22. Dick R: Int. J. Theor. Phys. 2003, 42:569.
23. Ruda HE, Polyani JC, Yang J, Wu Z, Philipose U, Xu T, Yang S, Kavanagh KL, Liu JQ, Yang L, Wang Y, Robbie K, Yang J, Kaminska K, Cooke DG, Hegmann FA, Budz AJ, Haugen HK: Nanoscale Res. Lett. 2006, 1:99.
24. Ruda H, Shik A: Physica E 2000, 6:543.
25. Achosyan A, Petrosyan S, Craig W, Ruda HE, Shik A: J. Appl. Phys. 2007, 101:104308.
@proceedings{2288, abstract = {This book constitutes the proceedings of the 11th International Conference on Computational Methods in Systems Biology, CMSB 2013, held in Klosterneuburg, Austria, in September 2013. The 15 regular papers included in this volume were carefully reviewed and selected from 27 submissions. They deal with computational models for all levels, from molecular and cellular, to organs and entire organisms.}, editor = {Gupta, Ashutosh and Henzinger, Thomas A}, isbn = {978-3-642-40707-9}, location = {Klosterneuburg, Austria}, publisher = {Springer}, title = {{Computational Methods in Systems Biology}}, doi = {10.1007/978-3-642-40708-6}, volume = {8130}, year = {2013}, } @article{2289, abstract = {Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.}, author = {Henzinger, Thomas A}, journal = {Computer Science Research and Development}, number = {4}, pages = {331 -- 344}, publisher = {Springer}, title = {{Quantitative reactive modeling and verification}}, doi = {10.1007/s00450-013-0251-7}, volume = {28}, year = {2013}, } @article{2290, abstract = {The plant hormone indole-acetic acid (auxin) is essential for many aspects of plant development. Auxin-mediated growth regulation typically involves the establishment of an auxin concentration gradient mediated by polarly localized auxin transporters. The localization of auxin carriers and their amount at the plasma membrane are controlled by membrane trafficking processes such as secretion, endocytosis, and recycling. In contrast to endocytosis or recycling, how the secretory pathway mediates the localization of auxin carriers is not well understood. In this study we have used the differential cell elongation process during apical hook development to elucidate the mechanisms underlying the post-Golgi trafficking of auxin carriers in Arabidopsis. We show that differential cell elongation during apical hook development is defective in Arabidopsis mutant echidna (ech). ECH protein is required for the trans-Golgi network (TGN)-mediated trafficking of the auxin influx carrier AUX1 to the plasma membrane. 
In contrast, ech mutation only marginally perturbs the trafficking of the highly related auxin influx carrier LIKE-AUX1-3 or the auxin efflux carrier PIN-FORMED-3, both also involved in hook development. Electron tomography reveals that the trafficking defects in ech mutant are associated with the perturbation of secretory vesicle genesis from the TGN. Our results identify differential mechanisms for the post-Golgi trafficking of de novo-synthesized auxin carriers to plasma membrane from the TGN and reveal how trafficking of auxin influx carriers mediates the control of differential cell elongation in apical hook development.}, author = {Boutté, Yohann and Jonsson, Kristoffer and Mcfarlane, Heather and Johnson, Errin and Gendre, Delphine and Swarup, Ranjan and Friml, Jirí and Samuels, Lacey and Robert, Stéphanie and Bhalerao, Rishikesh}, journal = {PNAS}, number = {40}, pages = {16259 -- 16264}, publisher = {National Academy of Sciences}, title = {{ECHIDNA mediated post Golgi trafficking of auxin carriers for differential cell elongation}}, doi = {10.1073/pnas.1309057110}, volume = {110}, year = {2013}, } @inproceedings{2291, abstract = {Cryptographic access control promises to offer easily distributed trust and broader applicability, while reducing reliance on low-level online monitors. Traditional implementations of cryptographic access control rely on simple cryptographic primitives whereas recent endeavors employ primitives with richer functionality and security guarantees. Worryingly, few of the existing cryptographic access-control schemes come with precise guarantees, the gap between the policy specification and the implementation being analyzed only informally, if at all. In this paper we begin addressing this shortcoming. Unlike prior work that targeted ad-hoc policy specification, we look at the well-established Role-Based Access Control (RBAC) model, as used in a typical file system. In short, we provide a precise syntax for a computational version of RBAC, offer rigorous definitions for cryptographic policy enforcement of a large class of RBAC security policies, and demonstrate that an implementation based on attribute-based encryption meets our security notions. We view our main contribution as being at the conceptual level. Although we work with RBAC for concreteness, our general methodology could guide future research for uses of cryptography in other access-control models. }, author = {Ferrara, Anna and Fuchsbauer, Georg and Warinschi, Bogdan}, location = {New Orleans, LA, United States}, pages = {115 -- 129}, publisher = {IEEE}, title = {{Cryptographically enforced RBAC}}, doi = {10.1109/CSF.2013.15}, year = {2013}, } @proceedings{2292, abstract = {This book constitutes the thoroughly refereed conference proceedings of the 38th International Symposium on Mathematical Foundations of Computer Science, MFCS 2013, held in Klosterneuburg, Austria, in August 2013. The 67 revised full papers presented together with six invited talks were carefully selected from 191 submissions. 
Topics covered include algorithmic game theory, algorithmic learning theory, algorithms and data structures, automata, formal languages, bioinformatics, complexity, computational geometry, computer-assisted reasoning, concurrency theory, databases and knowledge-based systems, foundations of computing, logic in computer science, models of computation, semantics and verification of programs, and theoretical issues in artificial intelligence.}, editor = {Chatterjee, Krishnendu and Sgall, Jiri}, isbn = {978-3-642-40312-5}, location = {Klosterneuburg, Austria}, pages = {VI -- 854}, publisher = {Springer}, title = {{Mathematical Foundations of Computer Science 2013}}, doi = {10.1007/978-3-642-40313-2}, volume = {8087}, year = {2013}, } @inproceedings{2293, abstract = {Many computer vision problems have an asymmetric distribution of information between training and test time. In this work, we study the case where we are given additional information about the training data, which however will not be available at test time. This situation is called learning using privileged information (LUPI). We introduce two maximum-margin techniques that are able to make use of this additional source of information, and we show that the framework is applicable to several scenarios that have been studied in computer vision before. Experiments with attributes, bounding boxes, image tags and rationales as additional information in object classification show promising results.}, author = {Sharmanska, Viktoriia and Quadrianto, Novi and Lampert, Christoph}, location = {Sydney, Australia}, pages = {825 -- 832}, publisher = {IEEE}, title = {{Learning to rank using privileged information}}, doi = {10.1109/ICCV.2013.107}, year = {2013}, } @inproceedings{2294, abstract = {In this work we propose a system for automatic classification of Drosophila embryos into developmental stages. While the system is designed to solve an actual problem in biological research, we believe that the principle underlying it is interesting not only for biologists, but also for researchers in computer vision. The main idea is to combine two orthogonal sources of information: one is a classifier trained on strongly invariant features, which makes it applicable to images of very different conditions, but also leads to rather noisy predictions. The other is a label propagation step based on a more powerful similarity measure that however is only consistent within specific subsets of the data at a time. In our biological setup, the information sources are the shape and the staining patterns of embryo images. We show experimentally that while neither of the methods can be used by itself to achieve satisfactory results, their combination achieves prediction quality comparable to human performance.}, author = {Kazmar, Tomas and Kvon, Evgeny and Stark, Alexander and Lampert, Christoph}, location = {Sydney, Australia}, publisher = {IEEE}, title = {{Drosophila Embryo Stage Annotation using Label Propagation}}, doi = {10.1109/ICCV.2013.139}, year = {2013}, } @inproceedings{2295, abstract = {We consider partially observable Markov decision processes (POMDPs) with ω-regular conditions specified as parity objectives. The qualitative analysis problem given a POMDP and a parity objective asks whether there is a strategy to ensure that the objective is satisfied with probability 1 (resp. positive probability). 
While the qualitative analysis problems are known to be undecidable even for very special cases of parity objectives, we establish decidability (with optimal EXPTIME-complete complexity) of the qualitative analysis problems for POMDPs with all parity objectives under finite-memory strategies. We also establish asymptotically optimal (exponential) memory bounds.}, author = {Chatterjee, Krishnendu and Chmelik, Martin and Tracol, Mathieu}, location = {Torino, Italy}, pages = {165 -- 180}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{What is decidable about partially observable Markov decision processes with omega-regular objectives}}, doi = {10.4230/LIPIcs.CSL.2013.165}, volume = {23}, year = {2013}, } @article{2297, abstract = {We present an overview of mathematical results on the low temperature properties of dilute quantum gases, which have been obtained in the past few years. The presentation includes a discussion of Bose-Einstein condensation, the excitation spectrum for trapped gases and its relation to superfluidity, as well as the appearance of quantized vortices in rotating systems. All these properties are intensely being studied in current experiments on cold atomic gases. We will give a description of the mathematics involved in understanding these phenomena, starting from the underlying many-body Schrödinger equation.}, author = {Seiringer, Robert}, journal = {Japanese Journal of Mathematics}, number = {2}, pages = {185 -- 232}, publisher = {Springer}, title = {{Hot topics in cold gases: A mathematical physics perspective}}, doi = {10.1007/s11537-013-1264-5}, volume = {8}, year = {2013}, } @inproceedings{2298, abstract = {We present a shape analysis for programs that manipulate overlaid data structures which share sets of objects. The abstract domain contains Separation Logic formulas that (1) combine a per-object separating conjunction with a per-field separating conjunction and (2) constrain a set of variables interpreted as sets of objects. The definition of the abstract domain operators is based on a notion of homomorphism between formulas, viewed as graphs, used recently to define optimal decision procedures for fragments of the Separation Logic. Based on a Frame Rule that supports the two versions of the separating conjunction, the analysis is able to reason in a modular manner about non-overlaid data structures and then, compose information only at a few program points, e.g., procedure returns. We have implemented this analysis in a prototype tool and applied it on several interesting case studies that manipulate overlaid and nested linked lists. }, author = {Dragoi, Cezara and Enea, Constantin and Sighireanu, Mihaela}, location = {Seattle, WA, United States}, pages = {150 -- 171}, publisher = {Springer}, title = {{Local shape analysis for overlaid data structures}}, doi = {10.1007/978-3-642-38856-9_10}, volume = {7935}, year = {2013}, }
Reconstructions of Quantum Theory

Quantum theory is one of our most successful physical theories, but also one of the most mysterious ones: why are detector click probabilities in nature described by abstract mathematical structures like Hilbert spaces, complex numbers, and operators? This question of “why the quantum” has more than just philosophical significance: there are important reasons for addressing this question in a rigorous mathematical way:

• We need to construct consistent modifications of quantum theory (QT) that we can in principle test against QT in experiments. If we simply modify some of the quantum postulates in an ad hoc way (e.g. by adding nonlinear terms to the Schrödinger equation), then we typically do not obtain a consistent theory. But if we derive quantum theory from principles, then we can weaken or modify the principles, and work out mathematically what set of alternative consistent theories appears as solutions. These alternative theories will also have implications for quantum computing, since they can serve as theoretical models of computation whose computational power can be contrasted with that of quantum mechanics.
• Perhaps a reformulation of QT in terms of simple physical principles can be helpful in the search for a theory of quantum gravity. Or, somewhat less ambitiously, such a reformulation may help to illuminate how the structure of quantum theory is related to the structure of spacetime, by taking a broader perspective beyond operator algebras.
• It has proven tremendously important in the history of physics to derive ad hoc equations from first principles. A paradigmatic example is given by the Lorentz transformations: initially, they were discovered in an ad hoc way as symmetries of the Maxwell equations. But Einstein showed that they can be understood as simple consequences of two principles: the relativity principle and the light postulate. This has enormously improved our understanding, and paved the way to some of the subsequent development of relativity. Perhaps we can profit in a similar way from a principled derivation of quantum theory.

We have contributed to this research program in a number of different ways, and have found several successful reconstructions of the (finite-dimensional) quantum formalism from simple physical, or broadly information-theoretic, principles. In one such work [1], later improved in [2], we have derived quantum theory from the following postulates:

1. Continuous reversibility: In any system, for any pair of pure states, continuous reversible time evolution can bring one state to the other.
2. Tomographic locality: The state of any composite system is completely characterized by measurements on its individual components and their correlations.
3. Existence of an information unit: There is a type of system (“generalized bit”) such that the state of any system can be reversibly encoded in a sufficiently large number of such bits. Moreover, state tomography for the bit is possible, and these bits can interact.
4. No simultaneous encoding (aka “Zeilinger’s Principle”): If a generalized bit is used to perfectly encode one classical bit, it cannot simultaneously encode any further information.

To show that quantum theory follows from these postulates, the first step is to derive that the generalized bit corresponds exactly to the quantum bit, which can be represented as a three-dimensional ball (the Bloch ball).
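As a concrete companion to the Bloch-ball claim, here is a minimal numerical sketch (our illustration, not taken from [1] or [2]; the helper names are ours). A 2×2 density matrix ρ = (I + r·σ)/2 is a valid quantum state exactly when the Bloch vector r lies in the closed unit ball, and is pure exactly when |r| = 1:

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_bloch(r):
    # Density matrix for Bloch vector r = (x, y, z).
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

def is_state(rho):
    # Valid state: unit trace and positive semidefinite.
    return np.isclose(np.trace(rho).real, 1.0) and np.all(np.linalg.eigvalsh(rho) > -1e-12)

for r in [(0, 0, 0), (0.3, 0.4, 0.5), (0, 0, 1), (0.8, 0.8, 0.8)]:
    rho = rho_from_bloch(r)
    purity = np.trace(rho @ rho).real        # equals (1 + |r|^2)/2
    print(r, is_state(rho), round(purity, 3))
# Only |r| <= 1 gives a valid state; |r| = 1 gives purity 1 (a pure state).
# The last vector has |r| > 1, so rho acquires a negative eigenvalue.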
A figure in [2] shows how this is achieved step by step: first, the state space of the generalized bit can be any convex set of any dimension d. No simultaneous encoding shows that there can be no “flat pieces” in its boundary, since these could be used to encode additional information into a state – hence it must have a droplet-like shape. Continuous reversibility shows that the state space must be very symmetric – that is, due to group representation theory, an ellipsoid, which can be reparametrized as a ball. Then, interaction between pairs of d-dimensional ball state spaces turns out to be possible only if d=3, as for the quantum bit, which can be shown with quite some effort [3]. Finally, the only consistent way to combine qubit state spaces is in a way that is equivalent to standard quantum theory’s state space over many qubits, as shown in [4], and we are done. Note that at no point is it assumed that there are wave functions, operators or complex numbers – instead, those arise as consequences of the postulates. And we get all other ingredients and predictions of abstract finite-dimensional quantum theory: unitary transformations, uncertainty relations, the Schrödinger equation (but not the choice of Hamiltonian or Lagrangian), Tsirelson’s bound on Bell correlations, and more.

Once we have postulates that give us quantum theory (QT), we can start to relax those postulates, and in this way try to construct “quantum theory’s closest cousins” – theories that predict measurement outcome probabilities in a way that is not related to operator algebras or Hilbert spaces, but that are still in some sense physically close to QT. This turns out to be very difficult to do for the postulates above; but in [5], we have derived QT from four other principles that may be more suitable for this goal (because they do not mention composite systems, which are much harder to analyze):

1. Classical decomposability: Every state of a physical system can be represented as a probabilistic mixture of perfectly distinguishable pure states.
2. Strong symmetry: Every set of perfectly distinguishable pure states (of a given cardinality) can be reversibly transformed into any other such set (of the same cardinality).
3. No higher-order interference: The interference patterns between mutually exclusive “paths” in an experiment are exactly the sum of the patterns which would be observed in all two-path subexperiments, corrected for overlaps.
4. Observability of energy: There is non-trivial continuous reversible time evolution, and the generator of every such evolution can be associated to an observable (“energy”) which is a conserved quantity.

The insight that QT only allows for “second-order”, but not for “higher-order interference” is due to Sorkin [14], and this has initiated several recent experimental tests of this property (e.g. [15]). It would significantly improve the experimental situation if one could not only test for general violations of the quantum predictions, but actually had a viable alternative theory that predicts a concrete alternative behavior in those experiments. In principle, we can obtain such a theory by using the reconstruction above: simply drop postulate (3), and work out the resulting set of theories that appear as solutions. This is one of the problems that we are currently thinking about, but it seems to be a mathematically extremely difficult problem. Apart from the goals listed above, what does all this tell us about the nature of the quantum world?
In [6], we argue that this tells us that QT can be understood as a “principle theory of information” (referring to Einstein’s distinction between principle and constructive theories). While the information-theoretic reconstructions of QT are to a large extent agnostic regarding the question of how to interpret the quantum state, they do seem to tell us that we can understand the full quantum formalism as a simple “theory of probability” – namely, of quantum states expressing our information, knowledge, or belief about future observations (conditioned on our choices of which observations to make). In other words: taking an informational view on the quantum state as a starting point can lead us more or less directly to a derivation of QT’s formalism. No other interpretation (like many-worlds or Bohmian mechanics) can currently claim something like this. The question now is how far this reasoning can be pushed: can we start with a concrete “informational” interpretation of QT – concretely, QBism – and derive the full quantum formalism from its basic premises? Our group is part of an international team (funded by the Foundational Questions Institute) that is currently working on this question.

References:
[1] Ll. Masanes and M. P. Müller, A derivation of quantum theory from physical requirements, New J. Phys. 13, 063001 (2011). arXiv:1004.1483
[4] G. de la Torre, Ll. Masanes, A. J. Short, and M. P. Müller, Deriving quantum theory from its local structure and reversibility, Phys. Rev. Lett. 109, 090403 (2012). arXiv:1110.5482
[6] A. Koberinski and M. P. Müller, Quantum theory as a principle theory: insights from an information-theoretic construction, to appear in S. Fletcher and M. Cuffaro (eds.), “Physical perspectives on computation, computational perspectives on physics”, Cambridge University Press, 2017. arXiv:1707.05602
[7] M. P. Müller and C. Ududec, Structure of reversible computation determines the self-duality of quantum theory, Phys. Rev. Lett. 108, 130401 (2012). arXiv:1110.3516
[8] D. Gross, M. Müller, R. Colbeck, and O. C. O. Dahlsten, All reversible dynamics in maximally non-local theories are trivial, Phys. Rev. Lett. 104, 080402 (2010). arXiv:0910.1840

Further reading:
[9] L. Hardy, Quantum Theory From Five Reasonable Axioms, arXiv:quant-ph/0101012.
[10] B. Dakić and Č. Brukner, Quantum Theory and beyond: Is entanglement special?, in “Deep Beauty. Understanding the Quantum World through Mathematical Innovation”, edited by H. Halvorson (Cambridge University Press, New York, 2011). arXiv:0911.0695
[11] G. Chiribella, G. M. D’Ariano, and P. Perinotti, Informational derivation of quantum theory, Phys. Rev. A 84, 012311 (2011). arXiv:1011.6451
[12] L. Hardy, Reformulating and reconstructing quantum theory, arXiv:1104.2066.
[13] P. A. Höhn, Quantum theory from rules on information acquisition, Entropy 19(3), 98 (2017). arXiv:1612.06849
[14] R. D. Sorkin, Quantum Mechanics as Quantum Measure Theory, Mod. Phys. Lett. A 9, 3119 (1994). arXiv:gr-qc/9401003
Appendix E Parabolic Potential Well

An example of an extremely important class of one-dimensional bound states in quantum mechanics is the simple harmonic oscillator, whose potential can be written as

V(x) = ½Kx²,    (E.1)

where K is the force constant of the oscillator. The Hamiltonian operator is given by

H = −(ħ²/2m) d²/dx² + ½Kx².    (E.2)

The Schrödinger equation that gives the possible energies of the oscillator is

−(ħ²/2m) d²ψ/dx² + ½Kx²ψ = Eψ.    (E.3)

This equation can be simplified by choosing a new measure of length and a new measure of energy, each of which is dimensionless: ζ = αx with α = (mK/ħ²)^{1/4}, and η = 2E/(ħω) with ω = (K/m)^{1/2}. With these substitutions, Equation E.3 becomes

d²ψ/dζ² + (η − ζ²)ψ = 0.    (E.4)

In looking for bounded solutions, one can notice that as ζ approaches infinity, η becomes too small compared to ζ² to matter, and the resulting differential equation can be easily solved to yield

ψ(ζ) ≈ e^{−ζ²/2}.    (E.5)

This expression for the asymptotic dependence ...
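As a quick numerical check of the reconstructed equations above (our own sketch, not part of the book excerpt): a finite-difference discretization of Equation E.4 reproduces the dimensionless eigenvalues η = 2n + 1, i.e. E = (n + ½)ħω:

import numpy as np

# Finite-difference eigenvalues of -d^2/dzeta^2 + zeta^2 (Equation E.4):
# bounded solutions exist only for eta = 1, 3, 5, ...
N, L = 2000, 10.0                      # grid points, box half-width
zeta = np.linspace(-L, L, N)
h = zeta[1] - zeta[0]

main = 2.0 / h ** 2 + zeta ** 2        # tridiagonal Hamiltonian
off = -np.ones(N - 1) / h ** 2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

print(np.round(np.linalg.eigvalsh(H)[:5], 4))   # ~ [1. 3. 5. 7. 9.]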
Complexity-Theoretic Foundations of Quantum Supremacy Experiments

Scott Aaronson (The University of Texas at Austin). Supported by a Vannevar Bush Faculty Fellowship from the US Department of Defense, and by the Simons Foundation “It from Qubit” Collaboration.
Lijie Chen (Tsinghua University). Supported in part by the National Basic Research Program of China Grant 2011CBA00300, 2011CBA00301, the National Natural Science Foundation of China Grant 61361136003.

In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate “quantum supremacy”: that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible.

First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there’s a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure’s outputs do pass. Compared to previous work—for example, on BosonSampling and IQP—the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled.

Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch’s Theorem, for simulating a general quantum circuit with n qubits and depth d in polynomial space and d^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption.

Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem—of the form “if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses”—must be non-relativizing. This sharply contrasts with the situation for exact sampling.

Fourth, refuting a conjecture by Aaronson and Ambainis, we show that there is a sampling task, namely Fourier Sampling, with a 1 versus linear separation between its quantum and classical query complexities.

Fifth, in search of a “happy medium” between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP = SampBQP and NP ⊆ BPP, then quantum supremacy is impossible relative to oracles with small circuits.

1 Introduction

The Extended Church-Turing Thesis, or ECT, asserts that every physical process can be simulated by a deterministic or probabilistic Turing machine with at most polynomial overhead. Since the 1980s—and certainly since the discovery of Shor’s algorithm [Sho97] in the 1990s—computer scientists have understood that quantum mechanics might refute the ECT in principle. Today, there are actual experiments being planned (e.g., [BIS16]) with the goal of severely challenging the ECT in practice. These experiments don’t yet aim to build full, fault-tolerant, universal quantum computers, but “merely” to demonstrate some quantum speedup over the best known or conjectured classical algorithms, for some possibly-contrived task, as confidently as possible.
In other words, the goal is to answer the skeptics [Kal11, Lev03] who claim that genuine quantum speedups are either impossible in theory, or at any rate, are hopelessly out of reach technologically. Recently, the term “quantum supremacy” has come into vogue for such experiments (as far as we know, the first person to use the term in print was John Preskill [Pre12]), although the basic goal goes back several decades, to the beginning of quantum computing itself.

Before going further, we should address some common misunderstandings about quantum supremacy.

The ECT is an asymptotic claim, which of course means that no finite experiment could render a decisive verdict on it, even in principle. But this hardly makes experiments irrelevant. If

1. a quantum device performed some task orders of magnitude faster than a highly optimized simulation written by “adversaries” and running on a classical computing cluster, with the quantum/classical gap appearing to increase exponentially with the instance size across the whole range tested, and
2. this observed performance closely matched theoretical results that predicted such an exponential quantum speedup for the task in question, and
3. all other consistency checks passed (for example: removing quantum behavior from the experimental device destroyed the observed speedup),

this would obviously “raise the stakes” for anyone who still believed the ECT! Indeed, when some quantum computing researchers have criticized previous claims to have experimentally achieved quantum speedups (see, e.g., [Aar15]), it has typically been on the ground that, in those researchers’ view, the experiments failed to meet one or more of the conditions above.

It’s sometimes claimed that any molecule in Nature or the laboratory, for which chemists find it computationally prohibitive to solve the Schrödinger equation and calculate its ground state, already provides an example of “quantum supremacy.” The idea, in other words, is that such a molecule constitutes a “useful quantum computer, for the task of simulating itself.”

For us, the central problem with this idea is that in theoretical computer science, we care less about individual instances than about solving problems (i.e., infinite collections of instances) in a more-or-less uniform way. For any one molecule, the difficulty in simulating it classically might reflect genuine asymptotic hardness, but it might also reflect other issues (e.g., a failure to exploit special structure in the molecule, or the same issues of modeling error, constant-factor overheads, and so forth that arise even in simulations of classical physics).

Thus, while it’s possible that complex molecules could form the basis for a convincing quantum supremacy demonstration, we believe more work would need to be done. In particular, one would want a device that could synthesize any molecule in some theoretically infinite class—and one would then want complexity-theoretic evidence that the general problem, of simulating a given molecule from that class, is asymptotically hard for a classical computer. And in such a case, it would seem more natural to call the synthesis machine the “quantum computer,” rather than the molecules themselves!

In summary, we regard quantum supremacy as a central milestone for quantum computing that hasn’t been reached yet, but that might be reached in the near future.
This milestone is essentially negative in character: it has no obvious signature of the sort familiar to experimental physics, since it simply amounts to the nonexistence of an efficient classical algorithm to simulate a given quantum process.  For that reason, the tools of theoretical computer science will be essential to understand when quantum supremacy has or hasn’t been achieved.  So in our view, even if it were uninteresting as TCS, there would still be an urgent need for TCS to contribute to the discussion about which quantum supremacy experiments to do, how to verify their results, and what should count as convincing evidence that classical simulation is hard.  Happily, it turns out that there is a great deal here of intrinsic TCS interest as well. 1.1 Supremacy from Sampling In recent years, a realization has crystallized that, if our goal is to demonstrate quantum supremacy (rather than doing anything directly useful), then there are good reasons to shift our attention from decision and function problems to sampling problems: that is, problems where the goal is to sample an -bit string, either exactly or approximately, from a desired probability distribution. A first reason for this is that demonstrating quantum supremacy via a sampling problem doesn’t appear to require the full strength of a universal quantum computer.  Indeed, there are now at least a half-dozen proposals [AA13, BJS10, FH16, TD04, MFF14, JVdN14, ABKM16] for special-purpose devices that could efficiently solve sampling problems believed to be classically intractable, without being able to solve every problem in the class , or for that matter even every problem in .  Besides their intrinsic physical and mathematical interest, these intermediate models might be easier to realize than a universal quantum computer.  In particular, because of their simplicity, they might let us avoid the need for the full machinery of quantum fault-tolerance [ABO97]: something that adds a theoretically polylogarithmic but enormous-in-practice overhead to quantum computation.  Thus, many researchers now expect that the first convincing demonstration of quantum supremacy will come via this route. A second reason to focus on sampling problems is more theoretical: in the present state of complexity theory, we can arguably be more confident that certain quantum sampling problems really are classically hard, than we are that factoring (for example) is classically hard, or even that .  Already in 2002, Terhal and DiVincenzo [TD04] noticed that, while constant-depth quantum circuits can’t solve any classically intractable decision problems,222This is because any qubit output by such a circuit depends on at most a constant number of input qubits. they nevertheless have a curious power: namely, they can sample probability distributions that can’t be sampled in classical polynomial time, unless , which would be a surprising inclusion of complexity classes.  Then, in 2004, Aaronson showed that , where  means  with the ability to postselect on exponentially-unlikely measurement outcomes.  This had the immediate corollary that, if there’s an efficient classical algorithm to sample the output distribution of an arbitrary quantum circuit—or for that matter, any distribution whose probabilities are multiplicatively close to the correct ones—then By Toda’s Theorem [Tod91], this implies that the polynomial hierarchy collapses to the third level. 
Related to that, in 2009, Aaronson [Aar10] showed that, while it was (and remains) a notorious open problem to construct an oracle relative to which BQP ⊄ PH, one can construct oracular sampling and relation problems that are solvable in quantum polynomial time, but that are provably not solvable in randomized polynomial time augmented with a PH oracle.

Then, partly inspired by that oracle separation, Aaronson and Arkhipov [AA13] proposed BosonSampling: a model that uses identical photons traveling through a network of beamsplitters and phaseshifters to solve classically hard sampling problems. Aaronson and Arkhipov proved that a polynomial-time exact classical simulation of BosonSampling would collapse PH. They also gave a plausible conjecture implying that even an approximate simulation would have the same consequence. Around the same time, Bremner, Jozsa, and Shepherd [BJS10] independently proposed the Commuting Hamiltonians or IQP (“Instantaneous Quantum Polynomial-Time”) model, and showed that it had the same property, that exact classical simulation would collapse PH. Later, Bremner, Montanaro, and Shepherd [BMS15, BMS16] showed that, just like for BosonSampling, there are plausible conjectures under which even a fast classical approximate simulation of the IQP model would collapse PH.

Since then, other models have been proposed with similar behavior. To take a few examples: Farhi and Harrow [FH16] showed that the so-called Quantum Approximate Optimization Algorithm, or QAOA, can sample distributions that are classically intractable unless PH collapses. Morimae, Fujii, and Fitzsimons [MFF14] showed the same for the so-called One Clean Qubit or DQC1 model, while Jozsa and Van den Nest [JVdN14] showed it for stabilizer circuits with magic initial states and nonadaptive measurements, and Aaronson et al. [ABKM16] showed it for a model based on integrable particle scattering in 1+1 dimensions. In retrospect, the constant-depth quantum circuits considered by Terhal and DiVincenzo [TD04] also have the property that fast exact classical simulation would collapse PH.

Within the last four years, quantum supremacy via sampling has made the leap from complexity theory to a serious experimental prospect. For example, there have by now been many small-scale demonstrations of BosonSampling in linear-optical systems, with the current record being a 6-photon experiment by Carolan et al. [CHS15]. To scale up to several dozen photons—as would be needed to make a classical simulation of the experiment suitably difficult—seems to require more reliable single-photon sources than exist today. But some experts (e.g., [Rud16, PGHAG15]) are optimistic that optical multiplexing, superconducting resonators, or other technologies currently under development will lead to such photon sources. In the meantime, as we mentioned earlier, Boixo et al. [BIS16] have publicly detailed a plan, currently underway at Google, to perform a quantum supremacy experiment involving random circuits applied to a 2D array of 40-50 coupled superconducting qubits. So far, the group at Google has demonstrated the preparation and measurement of entangled states on a linear array of 9 superconducting qubits [KBF15].

1.2 Theoretical Challenges

Despite the exciting recent progress in both theory and experiment, some huge conceptual problems have remained about sampling-based quantum supremacy. These problems are not specific to any one quantum supremacy proposal (such as BosonSampling, IQP, or random quantum circuits), but apply with minor variations to all of them.
Verification of Quantum Supremacy Experiments. From the beginning, there was the problem of how to verify the results of a sampling-based quantum supremacy experiment. In contrast to (say) factoring and discrete log, for sampling tasks such as BosonSampling, it seems unlikely that there’s any NP witness certifying the quantum experiment’s output, let alone an NP witness that’s also the experimental output itself. Rather, for the sampling tasks, not only simulation but even verification might need classical exponential time. Yet, while no one has yet discovered a general way around this (in principle, one could use so-called authenticated quantum computing [ABOE08, BFK09], but the known schemes for that might be much harder to realize technologically than a basic quantum supremacy experiment, and in any case, they all presuppose the validity of quantum mechanics), it’s far from the fatal problem that some have imagined. The reason is simply that experiments can and will target a “sweet spot,” of (say) 40-50 qubits, for which classical simulation and verification of the results is difficult but not impossible.

Still, the existing verification methods have a second drawback. Namely, once we’ve fixed a specific verification test for sampling from a probability distribution D, we ought to consider, not merely all classical algorithms that sample exactly or approximately from D, but all classical algorithms that output anything that passes the verification test. To put it differently, we ought to talk not about the sampling problem itself, but about an associated relation problem: that is, a problem where the goal is to produce any output that satisfies a given condition.

As it happens, in 2011, Aaronson [Aar14] proved an extremely general connection between sampling problems and relation problems. Namely, given any approximate sampling problem S, he showed how to define a relation problem R_S such that, for every “reasonable” model of computation (classical, quantum, etc.), R_S is efficiently solvable in that model if and only if S is. This had the corollary that

SampBPP = SampBQP if and only if FBPP = FBQP,

where SampBPP and SampBQP are the classes of approximate sampling problems solvable in polynomial time by randomized and quantum algorithms respectively, and FBPP and FBQP are the corresponding classes of relation problems. Unfortunately, Aaronson’s construction of R_S involved Kolmogorov complexity: basically, one asks for a tuple of strings, ⟨x₁, …, x_k⟩, whose Kolmogorov complexity is large compared to Σᵢ log₂(1/p_{xᵢ}), where p_{xᵢ} is the desired probability of outputting xᵢ in the sampling problem. And of course, verifying such a condition is extraordinarily difficult, even more so than calculating the probabilities p_{xᵢ} (furthermore, this is true even if we substitute a resource-bounded Kolmogorov complexity, as Aaronson’s result allows). For this reason, it’s strongly preferable to have a condition that talks only about the largeness of the p_{xᵢ}’s, and not about the algorithmic randomness of the xᵢ’s. But then hardness for the sampling problem no longer necessarily implies hardness for the relation problem, so a new argument is needed.

Supremacy Theorems for Approximate Sampling. A second difficulty is that any quantum sampling device is subject to noise and decoherence. Ultimately, of course, we’d like hardness results for quantum sampling that apply even in the presence of experimentally realistic errors. Very recently, Bremner, Montanaro, and Shepherd [BMS16] and Fujii [Fuj16] have taken some promising initial steps in that direction.
But even if we care only about the smallest “experimentally reasonable” error—namely, an error that corrupts the output distribution D to some other distribution D′ that’s ε-close to D in variation distance—Aaronson and Arkhipov [AA13] found that we already encounter substantial new open problems in complexity theory, if we want evidence for classical hardness. So for example, their hardness argument for approximate BosonSampling depended on the conjecture that there’s no BPP^NP algorithm to estimate the permanent of an i.i.d. Gaussian matrix A, with high probability over the choice of A.

Of course, one could try to close that loophole by proving that this Gaussian permanent estimation problem is #P-hard, which is indeed a major challenge that Aaronson and Arkhipov left open. But this situation also raises more general questions. For example, is there an implication of the form “if SampBPP = SampBQP, then PH collapses,” where again SampBPP and SampBQP are the approximate sampling versions of BPP and BQP respectively? Are there oracles relative to which such an implication does not hold?

Quantum Supremacy Relative to Oracles. A third problem goes to perhaps the core issue of complexity theory (both quantum and classical): namely, we don’t at present have a proof of P ≠ PSPACE, much less of BPP ≠ BQP, less still of the hardness of specific problems like factoring or estimating Gaussian permanents. So what reason do we have to believe that any of these problems are hard? Part of the evidence has always come from oracle results, which we often can prove unconditionally. Particularly in quantum complexity theory, oracle separations can already be highly nontrivial, and give us a deep intuition for why all the “standard” algorithmic approaches fail for some problem.

On the other hand, we also know, from results like [Sha92], that oracle separations can badly mislead us about what happens in the unrelativized world. Generally speaking, we might say, relying on an oracle separation is more dangerous, the less the oracle function resembles what would actually be available in an explicit problem. (Indeed, the algebrization barrier of Aaronson and Wigderson [AW09] was based on precisely this insight: namely, if we force oracles to be “more realistic,” by demanding in that case that they come equipped with algebraic extensions of whichever Boolean functions they represent, then many previously non-relativizing results become relativizing.)

In the case of sampling-based quantum supremacy, we’ve known strong oracle separations since early in the subject. Indeed, in 2009, Aaronson [Aar10] showed that Fourier Sampling—a quantumly easy sampling problem that involves only a random oracle—requires classical exponential time, and for that matter, sits outside the entire polynomial hierarchy. But of course, in real life random oracles are unavailable. So a question arises: can we say anything about the classical hardness of Fourier Sampling with a pseudorandom oracle? More broadly, what hardness results can we prove for quantum sampling, relative to oracles that are efficiently computable? Here, we imagine that an algorithm doesn’t have access to a succinct representation of the oracle function f, but it does know that a succinct representation exists (i.e., that f ∈ P/poly). Under that assumption, is there any hope of proving an unconditional separation between quantum and classical sampling? If not, then can we at least prove quantum supremacy under weaker (or more “generic”) assumptions than would be needed in the purely computational setting?
1.3 Our Contributions

In this paper, we address all three of the above challenges. Our results might look wide-ranging, but they’re held together by a single thread: namely, the quest to understand the classical hardness of quantum approximate sampling problems, and especially the meta-question of under which computational assumptions such hardness can be proven. We’ll be interested in both “positive” results, of the form “quantum sampling problem X is classically hard under assumption Y,” and “negative” results, of the form “proving the classical hardness of X requires assumption Y.” Also, we’ll be less concerned with specific proposals such as BosonSampling, than simply with the general task of approximately sampling the output distribution of a given quantum circuit C. Fortuitously, though, our focus on quantum circuit sampling will make some of our results an excellent fit to currently planned experiments—most notably, those at Google [BIS16], which will involve random quantum circuits on a 2D square lattice of 40 to 50 superconducting qubits. Even though we won’t address the details of those or other experiments, our results (together with other recent work [BIS16, BMS16]) can help to inform the experiments—for example, by showing how the circuit depth, the verification test applied to the outputs, and other design choices affect the strength of the computational assumptions that are necessary and sufficient to conclude that quantum supremacy has been achieved.

We have five main results.

The Hardness of Quantum Circuit Sampling. Our first result, in Section 3, is about the hardness of sampling the output distribution of a random quantum circuit, along the general lines of the planned Google experiment. Specifically, we propose a simple verification test to apply to the outputs of a random quantum circuit. We then analyze the classical hardness of generating any outputs that pass that test. More concretely, we study the following basic problem:

Problem 1 (HOG, or Heavy Output Generation). Given as input a random quantum circuit C (drawn from some suitable ensemble), generate output strings x₁, …, x_k, at least a 2/3 fraction of which have greater than the median probability in C’s output distribution.

HOG is a relation problem, for which we can verify a claimed solution in classical exponential time, by calculating the ideal probabilities p_{xᵢ} for each xᵢ to be generated by C, and then checking whether enough of the p_{xᵢ}’s are greater than the median value (which we can estimate analytically to extremely high confidence). Furthermore, HOG is easy to solve on a quantum computer, with overwhelming success probability, by the obvious strategy of just running C over and over and collecting k of its outputs. (Heuristically, one expects the output probabilities to behave like exponentially-distributed random variables, which one can calculate implies that roughly a (1 + ln 2)/2 ≈ 0.85 fraction of the outputs will have probabilities exceeding the median value.)

It certainly seems plausible that HOG is exponentially hard for a classical computer. But we ask: under what assumption could that hardness be proven? To address that question, we propose a new hardness assumption:

Assumption 1 (QUATH, or the QUAntum THreshold assumption). There is no polynomial-time classical algorithm that takes as input a description of a random quantum circuit C, and that guesses whether |⟨0^n|C|0^n⟩|² is greater or less than the median of all 2^n of the |⟨x|C|0^n⟩|² values, with success probability at least 1/2 + Ω(2^{-n}) over the choice of C.

Our first result says that if QUATH is true, then HOG is hard.
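For intuition, the HOG verification test can be carried out by brute force at toy sizes (a sketch of ours; the gate ensemble, parameters, and helper names below are illustrative choices, not the paper’s exact proposal):

import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # Haar-random d x d unitary via QR of a complex Ginibre matrix.
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(psi, U, q1, q2, n):
    # Apply the 4x4 unitary U to qubits q1, q2 of an n-qubit state vector.
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, (q1, q2), (0, 1)).reshape(4, -1)
    psi = (U @ psi).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, (0, 1), (q1, q2)).reshape(-1)

n, m, k = 6, 40, 1000            # toy sizes; the real proposal has n ~ 40-50
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
for _ in range(m):
    a, b = rng.choice(n, size=2, replace=False)
    psi = apply_gate(psi, haar_unitary(4), a, b, n)

p = np.abs(psi) ** 2             # ideal output probabilities of the circuit
samples = rng.choice(2 ** n, size=k, p=p)
print((p[samples] > np.median(p)).mean())   # heavy fraction, ~0.85 > 2/3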
While this might seem nearly tautological, the important point here is that QUATH makes no reference to sampling or relation problems. Thus, we can now shift our focus from sampling algorithms to algorithms that simply estimate amplitudes, with a minuscule advantage over random guessing.

New Algorithms to Simulate Quantum Circuits. But given what a tiny advantage Ω(2^{-n}) is, why would anyone even conjecture that QUATH might be true? This brings us to our second result, in Section 4, which is motivated by the attempt to refute QUATH. We ask: what are the best classical algorithms to simulate an arbitrary quantum circuit? For special quantum circuits (e.g., those with mostly Clifford gates and a few T gates [BG16]), there’s been exciting recent progress on improved exponential-time simulation algorithms, but for arbitrary quantum circuits, one might think there isn’t much to say. Nevertheless, we do find something basic to say that, to our knowledge, had been overlooked earlier.

For a quantum circuit with n qubits and m gates, there are two obvious simulation algorithms. The first, which we could call the “Schrödinger” algorithm, stores the entire state vector in memory, using roughly 2^n · m time and 2^n space. The second, which we could call the “Feynman” algorithm, calculates an amplitude as a sum of 4^m terms, using roughly 4^m time and only polynomial space, as in the proof of BQP ⊆ P^{#P} [BV97].

Now typically m ≫ n, and the difference between 2^n and 4^m could matter enormously in practice. For example, in the planned Google setup, n will be roughly 40 or 50, while m will ideally be in the thousands. Thus, 2^n time is reasonable whereas 4^m time is not. So a question arises:

• When m ≫ n, is there a classical algorithm to simulate an n-qubit, m-gate quantum circuit using both polynomial space and much less than exp(m) time—ideally, more like 2^{O(n)}?

We show an affirmative answer. In particular, inspired by the proof of Savitch’s Theorem [Sav70], we give a recursive, sum-of-products algorithm that uses polynomial space and m^{O(n)} time—or better yet, d^{O(n)} time, where d is the circuit depth. We also show how to improve the running time further for quantum circuits subject to nearest-neighbor constraints, such as the superconducting systems currently under development. Finally, we show the existence of a “smooth tradeoff” between our algorithm and the 2^n-memory Schrödinger algorithm. Namely, starting with the Schrödinger algorithm, every desired halving of the memory usage can be bought with a corresponding multiplicative increase in the running time.

We hope our algorithm finds some applications in quantum simulation. In the meantime, though, the key point for this paper is that neither the Feynman algorithm, nor the Schrödinger algorithm, nor our new recursive algorithm come close to refuting QUATH. The Feynman algorithm fails to refute QUATH because it yields only a 2^{-O(m)} advantage over random guessing, rather than a 2^{-O(n)} advantage. The Schrödinger and recursive algorithms have much closer to the “correct” 2^{O(n)} running time, but they also fail to refute QUATH because they don’t calculate amplitudes as straightforward sums, so don’t lead to polynomial-time guessing algorithms at all. Thus, in asking whether we can falsify QUATH, in some sense we’re asking how far we can go in combining the advantages of all these algorithms. This might, in turn, connect to longstanding open problems about the optimality of Savitch’s Theorem itself (e.g., L versus NL).
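The layer-halving recursion itself fits in a few lines. The following toy sketch (ours; it illustrates the idea rather than the paper’s optimized algorithm) computes one transition amplitude in polynomial space by summing over intermediate basis states:

import numpy as np

def layer_amp(layer, x, y, n):
    # <y|L|x> for one layer of disjoint 2-qubit gates: a product of single
    # 4x4 matrix entries, with identity enforced on untouched qubits.
    amp, acted = 1.0 + 0j, set()
    for U, (a, b) in layer:
        row = 2 * ((y >> a) & 1) + ((y >> b) & 1)
        col = 2 * ((x >> a) & 1) + ((x >> b) & 1)
        amp *= U[row, col]
        acted |= {a, b}
    for q in range(n):
        if q not in acted and ((x >> q) & 1) != ((y >> q) & 1):
            return 0.0
    return amp

def amp(layers, x, y, n):
    # <y|C|x> by recursive halving: sum_z <y|C2|z><z|C1|x>.
    # Polynomial space and roughly d^O(n) time, instead of 2^n space.
    if len(layers) == 1:
        return layer_amp(layers[0], x, y, n)
    half = len(layers) // 2
    return sum(amp(layers[half:], z, y, n) * amp(layers[:half], x, z, n)
               for z in range(2 ** n))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
layers = [[(CNOT, (0, 1))], [(CNOT, (1, 2))]]   # 3 qubits, depth 2
print(amp(layers, 0b001, 0b111, 3))             # 1: |001> -> |011> -> |111>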
Interestingly, our analysis of quantum circuit simulation algorithms explains why this paper’s hardness argument for quantum circuit sampling, based on QUATH, would not have worked for quantum supremacy proposals such as BosonSampling or IQP. It works only for the more general problem of quantum circuit sampling. The reason is that for the latter, unlike for BosonSampling or IQP, there exists a parameter (namely m, the number of gates) that controls the advantage that a polynomial-time classical algorithm can achieve over random guessing, even while n controls the number of possible outputs. Our analysis also underscores the importance of taking m ≫ n in experiments meant to show quantum supremacy, and it provides some guidance to experimenters about the crucial question of what circuit depth they need for a convincing quantum supremacy demonstration. Note that, the greater the required depth, the more protected against decoherence the qubits need to be. But the tradeoff is that the depth must be high enough that simulation algorithms that exploit limited entanglement, such as those based on tensor networks, are ruled out. Beyond that requirement, our simulation algorithm gives some information about how much additional hardness one can purchase for a given increase in depth.

Strong Quantum Supremacy Theorems Must Be Non-Relativizing. Next, in Section 5, we switch our attention to a meta-question. Namely, what sorts of complexity-theoretic evidence could we possibly hope to offer for SampBPP ≠ SampBQP: in other words, for quantum computers being able to solve approximate sampling problems that are hard classically? By Aaronson’s sampling/searching equivalence theorem [Aar14], any such evidence would also be evidence for FBPP ≠ FBQP (where FBPP and FBQP are the corresponding classes of relation problems), and vice versa.

Of course, an unconditional proof of these separations is out of the question right now, since it would imply P ≠ PSPACE. Perhaps the next best thing would be to show that, if SampBPP = SampBQP, then the polynomial hierarchy collapses. This latter is not out of the question: as we said earlier, we already know, by a simple relativizing argument, that an equivalence between quantum and classical exact sampling implies the collapse of PH to the third level. Furthermore, in their work on BosonSampling, Aaronson and Arkhipov [AA13] formulated a #P-hardness conjecture—namely, their so-called Permanent of Gaussians Conjecture, or PGC—that if true, would imply a generalization of that collapse to the physically relevant case of approximate sampling. More explicitly, Aaronson and Arkhipov showed that if the PGC holds, then

SampBPP = SampBQP implies that PH collapses.    (1)

They went on to propose a program for proving the PGC, by exploiting the random self-reducibility of the permanent. On the other hand, Aaronson and Arkhipov also explained in detail why new ideas would be needed to complete that program, and the challenge remains open.

Subsequently, Bremner, Montanaro, and Shepherd [BMS15, BMS16] gave analogous #P-hardness conjectures that, if true, would also imply the implication (1), by going through the IQP model rather than through BosonSampling.

Meanwhile, nearly two decades ago, Fortnow and Rogers [FR99] exhibited an oracle relative to which P = BQP and yet the polynomial hierarchy is infinite. In other words, they showed that any proof of the implication “if P = BQP, then PH collapses” would have to be non-relativizing. Unfortunately, their construction was extremely specific to languages (i.e., total Boolean functions), and didn’t even rule out the possibility that the sampling implication (1) could be proven in a relativizing way.
Thus, Aaronson and Arkhipov [AA13, see Section 10] raised the question of which quantum supremacy theorems hold relative to all oracles. In Section 5, we fill in the final piece needed to resolve their question, by constructing an oracle A relative to which SampBPP = SampBQP and yet PH is infinite. In other words, we show that any strong supremacy theorem for quantum sampling, along the lines of what Aaronson and Arkhipov [AA13] and Bremner, Montanaro, and Shepherd [BMS15, BMS16] were seeking, must use non-relativizing techniques. In that respect, the situation with approximate sampling is extremely different from that with exact sampling.

Perhaps it’s no surprise that one would need non-relativizing techniques to prove a strong quantum supremacy theorem. In fact, Aaronson and Arkhipov [AA13] were originally led to study BosonSampling precisely because of the connection between bosons and the permanent function, and the hope that one could therefore exploit the famous non-relativizing properties of the permanent to prove hardness. All the same, this is the first time we have explicit confirmation that non-relativizing techniques will be needed.

Maximal Quantum Supremacy for Black-Box Sampling and Relation Problems. In Section 6, we turn our attention to the black-box model, and specifically to the question: what are the largest possible separations between randomized and quantum query complexities for any approximate sampling or relation problem? Here we settle another open question. In 2015, Aaronson and Ambainis [AA15] studied Fourier Sampling, in which we’re given access to a Boolean function f : {0,1}^n → {−1, 1}, and the goal is to sample a string z with probability f̂(z)², where f̂ is the Boolean Fourier transform of f, normalized so that Σ_z f̂(z)² = 1. This problem is trivially solvable by a quantum algorithm with only 1 query to f. By contrast, Aaronson and Ambainis showed that there exists a constant ε > 0 such that any classical algorithm that solves Fourier Sampling, to accuracy ε in variation distance, requires Ω(2^n / n) queries to f. They conjectured that this lower bound was tight.

Here we refute that conjecture, by proving an Ω(2^n) lower bound on the randomized query complexity of Fourier Sampling, as long as the error ε is a sufficiently small constant. This implies that, for approximate sampling problems, the gap between quantum and randomized query complexities can be as large as imaginable: namely, 1 versus linear (!). (We have learned, via personal communication, that recently, and independently of us, Ashley Montanaro has obtained a communication complexity result that implies this result as a corollary.) This sharply contrasts with the case of partial Boolean functions, for which Aaronson and Ambainis [AA15] showed that any problem on N bits solvable with k quantum queries is also solvable with O(N^{1−1/2k}) randomized queries, and hence a constant versus linear separation is impossible. Thus, our result helps once again to underscore the advantage of sampling problems over decision problems for quantum supremacy experiments. Given the extremely close connection between Fourier Sampling and the IQP model [BJS10], our result also provides some evidence that classically simulating an n-qubit IQP circuit, to within constant error in variation distance, is about as hard as can be: it might literally require Ω(2^n) time.

Aaronson and Ambainis [AA15] didn’t directly address the natural relational version of Fourier Sampling, which Aaronson [Aar10] had called Fourier Fishing in 2009. In Fourier Fishing, the goal is to output any string z whose squared Fourier coefficient f̂(z)² is at least the mean value 2^{-n}, with nontrivial success probability.
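At toy sizes, both tasks can be solved by brute force with a fast Walsh-Hadamard transform (a sketch of ours; note that normalization conventions for f̂ vary between papers):

import numpy as np

rng = np.random.default_rng(1)
n = 10
N = 2 ** n
f = rng.choice([-1.0, 1.0], size=N)     # truth table of f : {0,1}^n -> {-1,1}

# Fast Walsh-Hadamard transform of f, then normalize so sum_z fhat(z)^2 = 1.
fhat = f.copy()
h = 1
while h < N:
    for i in range(0, N, 2 * h):
        a, b = fhat[i:i + h].copy(), fhat[i + h:i + 2 * h].copy()
        fhat[i:i + h], fhat[i + h:i + 2 * h] = a + b, a - b
    h *= 2
fhat /= N

probs = fhat ** 2
probs /= probs.sum()                    # guard against float rounding

# Fourier Sampling by brute force (2^n work for a classical machine):
samples = rng.choice(N, size=5, p=probs)
print(samples)
# Fourier Fishing only asks for z with an above-average squared coefficient:
print(probs[samples] >= 1.0 / N)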
Unfortunately, the best lower bound on the randomized query complexity of Fourier Fishing that follows from [Aar10] is considerably weaker. As a further contribution, in Section 6 we give a lower bound of Ω(2^n / n) on the randomized query complexity of Fourier Fishing, which both simplifies and subsumes the Ω(2^n / n) lower bound for Fourier Sampling by Aaronson and Ambainis [AA15] (which, of course, we also improve to Ω(2^n) in this paper).

Quantum Supremacy Relative to Efficiently-Computable Oracles. In Section 7, we ask a new question: when proving quantum supremacy theorems, can we “interpolate” between the black-box setting of Sections 5 and 6, and the non-black-box setting of Sections 3 and 4? In particular, what happens if we consider quantum sampling algorithms that can access an oracle, but we impose a constraint that the oracle has to be “physically realistic”? One natural requirement here is that the oracle function f be computable in the class P/poly (more broadly, we could let f be computable in a somewhat larger class, but this doesn’t change the story too much): in other words, that there are polynomial-size circuits for f, which we imagine that our sampling algorithms (both quantum and classical) can call as subroutines. If the sampling algorithms also had access to explicit descriptions of the circuits, then we’d be back in the computational setting, where we already know that there’s no hope at present of proving quantum supremacy unconditionally. But what if our sampling algorithms know only that small circuits for f exist, without knowing what they are? Could quantum supremacy be proven unconditionally then?

We give a satisfying answer to this question. First, by adapting constructions due to Zhandry [Zha12] and (independently) Servedio and Gortler [SG04], we show that if one-way functions exist, then there are oracles A ∈ P/poly such that BPP^A ≠ BQP^A, and indeed even stronger separations hold. (Here and later, the one-way functions only need to be hard to invert classically, not quantumly.) Note that, in the unrelativized world, there seems to be no hope at present of proving BPP ≠ BQP under any hypothesis nearly as weak as the existence of one-way functions. Instead one has to assume the one-wayness of extremely specific functions, for example those based on factoring or discrete log.

Second, and more relevant to near-term experiments, we show that if there exist one-way functions that take at least subexponential time to invert, then there are Boolean functions f ∈ P/poly such that approximate Fourier Sampling on those f’s requires classical exponential time. In other words: within our “physically realistic oracle” model, there are feasible-looking quantum supremacy experiments, along the lines of the IQP proposal [BJS10], such that a very standard and minimal cryptographic assumption is enough to prove the hardness of simulating those experiments classically.

Third, we show that the above two results are essentially optimal, by proving a converse result: that even in our oracle model, some computational assumption is still needed to prove quantum supremacy. The precise statement is this: if SampBPP = SampBQP and NP ⊆ BPP, then SampBPP^A = SampBQP^A for all oracles A ∈ P/poly. Or equivalently: if we want to separate quantum from classical approximate sampling relative to efficiently computable oracles, then we need to assume something about the unrelativized world: either SampBPP ≠ SampBQP (in which case we wouldn’t even need an oracle), or else NP ⊄ BPP (which is closely related to the assumption we do make, namely that one-way functions exist).
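To convey the flavor of these period-hiding constructions, here is a toy sketch of ours, with a hash function standing in for a genuine pseudorandom permutation; the form f(x) = σ(x mod s) follows the description in Section 1.4 below:

import hashlib

def sigma(key, x):
    # Stand-in "pseudorandom" map (NOT a real pseudorandom permutation;
    # for illustration only).
    return hashlib.sha256(key + x.to_bytes(8, "big")).hexdigest()[:8]

def make_oracle(key, s):
    # Periodic oracle f(x) = sigma(x mod s): a quantum computer could
    # recover the hidden period s by Shor-style period finding, while a
    # classical algorithm that noticed anything nonrandom about f would
    # thereby break the pseudorandomness of sigma.
    return lambda x: sigma(key, x % s)

f = make_oracle(b"secret", 314159)
print(f(12345) == f(12345 + 314159))    # True: f hides the period s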
So to summarize, we’ve uncovered a “smooth tradeoff” between the model of computation and the hypothesis needed for quantum supremacy. Relative to some oracle (and even a random oracle), we can prove SampBPP ≠ SampBQP unconditionally. Relative to some efficiently computable oracle, we can prove SampBPP ≠ SampBQP, but only under a weak computational assumption, like the existence of one-way functions. Finally, with no oracle, we can currently prove SampBPP ≠ SampBQP only under special assumptions, such as factoring being hard, or the permanents of Gaussian matrices being hard to approximate, or our QUATH assumption. Perhaps eventually, we’ll be able to prove SampBPP ≠ SampBQP under the sole assumption that PH is infinite, which would be a huge step forward—but at any rate we’ll need some separation of classical complexity classes (unless, of course, someone were to separate SampBPP from SampBQP unconditionally!).

One last remark: the idea of comparing complexity classes relative to P/poly oracles seems quite natural even apart from its applications to quantum supremacy. So in Appendix A, we take an initial stab at exploring the implications of that idea for other central questions in complexity theory. In particular, we prove the surprising result there that P^A = BPP^A for all oracles A ∈ P/poly, if and only if the derandomization hypothesis of Impagliazzo and Wigderson [IW97] holds (i.e., there exists a function in E with 2^{Ω(n)} circuit complexity). In our view, this helps to clarify Impagliazzo and Wigderson’s theorem itself, by showing precisely in what way their circuit lower bound hypothesis is stronger than the desired conclusion P = BPP. We also show that, if there are quantumly-secure one-way functions, then there exists an oracle A ∈ P/poly such that NP^A ⊄ BQP^A.

1.4 Techniques

In our view, the central contributions of this work lie in the creation of new questions, models, and hardness assumptions (such as QUATH and quantum supremacy relative to P/poly oracles), as well as in basic observations that somehow weren’t made before (such as the sum-of-products algorithm for simulating quantum circuits)—all of it motivated by the goal of using complexity theory to inform ongoing efforts in experimental physics to test the Extended Church-Turing Thesis. While some of our proofs are quite involved, by and large the proof techniques are ones that will be familiar to complexity theorists. Even so, it seems appropriate to say a few words about techniques here.

To prove, in Section 3, that “if QUATH is true, then HOG is hard,” we give a fairly straightforward reduction: namely, we assume the existence of a polynomial-time classical algorithm to find high-probability outputs of a given quantum circuit C. We then use that algorithm (together with a random self-reduction trick) to guess the magnitude of a particular transition amplitude, such as ⟨0^n|C|0^n⟩, with probability slightly better than chance, which is enough to refute QUATH.

One technical step is to show that, with constant probability, the distribution over n-bit strings sampled by a random quantum circuit is far from the uniform distribution. But not only can this be done, we show that it can be done by examining only the very last gate of C, and ignoring all other gates! A challenge that we leave open is to improve this, to show that the distribution sampled by C is far from uniform, not merely with constant probability, but with probability 1 − o(1). In Appendix E, we present numerical evidence for this conjecture, and indeed for a stronger conjecture, that the probabilities appearing in the output distribution of a random quantum circuit behave like independent, exponentially-distributed random variables.
(We note that Brandao, Harrow and Horodecki [BHH16] recently proved a closely-related result, which unfortunately is not quite strong enough for our purposes.)

In Section 4, to give our polynomial-space, d^O(n)-time classical algorithm for simulating an n-qubit, depth-d quantum circuit C, we use a simple recursive strategy, reminiscent of Savitch’s Theorem. Namely, we slice the circuit into two layers, C₁ and C₂, of depth d/2 each, and then express a transition amplitude ⟨y|C|x⟩ of interest to us as

⟨y|C|x⟩ = Σ_{z ∈ {0,1}^n} ⟨y|C₂|z⟩⟨z|C₁|x⟩.

We then compute each ⟨y|C₂|z⟩ and ⟨z|C₁|x⟩ by recursively slicing C₁ and C₂ into layers of depth d/4 each, and so on. What takes more work is to obtain a further improvement if C has only nearest-neighbor interactions on a grid graph—for that, we use a more sophisticated divide-and-conquer approach—and also to interpolate our recursive algorithm with the 2^n-space Schrödinger simulation, in order to make the best possible use of whatever memory is available.

Our construction, in Section 5, of an oracle relative to which SampBPP = SampBQP and yet PH is infinite involves significant technical difficulty. As a first step, we can use a PSPACE oracle to collapse SampBPP with SampBQP, and then use one of many known oracles (or, by the recent breakthrough of Rossman, Servedio, and Tan [RST15], even a random oracle) to make PH infinite. The problem is that, if we do this in any naïve way, then the oracle that makes PH infinite will also re-separate SampBPP and SampBQP, for example because of the approximate Fourier Sampling problem. Thus, we need to hide the oracle that makes PH infinite, in such a way that a PH algorithm can still find the oracle (and hence, PH is still infinite), but a SampBQP algorithm can’t find it with any non-negligible probability—crucially, not even if the SampBQP algorithm’s input provides a clue about the oracle’s location. Once one realizes that these are the challenges, one then has about seven pages of work to ensure that SampBPP and SampBQP remain equal, relative to the oracle that one has constructed. Incidentally, we know that this equivalence can’t possibly hold for exact sampling, so something must force small errors to arise when the SampBPP algorithm simulates the SampBQP one. That something is basically the tiny probability that the quantum algorithm will succeed at finding the hidden oracle, which however can be upper-bounded using quantum-mechanical linearity.

In Section 6, to prove an Ω(2^n) lower bound on the classical query complexity of approximate Fourier Sampling, we use the same basic strategy that Aaronson and Ambainis [AA15] used to prove an Ω(2^n / n) lower bound, but with a much more careful analysis. Specifically, we observe that any Fourier Sampling algorithm would also yield an algorithm whose probability of accepting, while always small, is extremely sensitive to some specific Fourier coefficient, say f̂(0^n). We then lower-bound the randomized query complexity of accepting with the required sensitivity to f̂(0^n), taking advantage of the fact that f̂(0^n) is simply proportional to Σ_x f(x), so that all inputs x can be treated symmetrically. Interestingly, we also give a different, much simpler argument that yields an Ω(2^n / n) lower bound on the randomized query complexity of Fourier Fishing, which then immediately implies an Ω(2^n / n) lower bound for Fourier Sampling as well. However, if we want to improve the bound to Ω(2^n), then the original argument that Aaronson and Ambainis [AA15] used seems to be needed.

In Section 7, to prove that one-way functions imply the existence of an oracle A ∈ P/poly such that SampBPP^A ≠ SampBQP^A, we adapt a construction that was independently proposed by Zhandry [Zha12] and by Servedio and Gortler [SG04].
In this construction, we first use known reductions [HILL99, GGM86] to convert a one-way function into a classically-secure pseudorandom permutation, say $\sigma$.  We then define a new function by $f(x)=\sigma(x\bmod s)$, where $x$ is interpreted as an integer written in binary, and $s$ is a hidden period.  Finally, we argue that either Shor's algorithm [Sho97] leads to a quantum advantage over classical algorithms in finding the period of $f$, or else $\sigma$ was not pseudorandom, contrary to assumption.  To show that subexponentially-secure one-way functions imply the existence of an oracle $A\in\mathsf{P/poly}$ relative to which Fourier Sampling is classically hard, we use similar reasoning.  The main difference is that now, to construct a distinguisher against a pseudorandom function, we need classical exponential time just to verify the outputs of a claimed polynomial-time classical algorithm for Fourier Sampling, and that's why we need to assume subexponential security.

Finally, to prove that $\mathsf{SampBPP}=\mathsf{SampBQP}$ and $\mathsf{NP}\subseteq\mathsf{BPP}$ imply $\mathsf{SampBPP}^A=\mathsf{SampBQP}^A$ for all $A\in\mathsf{P/poly}$, we design a step-by-step classical simulation of a quantum algorithm, call it $Q$, that queries an oracle $A\in\mathsf{P/poly}$.  We use the assumption $\mathsf{SampBPP}=\mathsf{SampBQP}$ to sample from the probability distribution over queries to $A$ that $Q$ makes at any given time step.  Then we use the assumption $\mathsf{NP}\subseteq\mathsf{BPP}$ to guess a function $f$ that's consistent with sampled classical queries to $A$.  Because of the limited number of functions in $\mathsf{P/poly}$, standard sample complexity bounds for PAC-learning imply that any such $f$ that we guess will probably agree with the "true" oracle $A$ on most inputs.  Quantum-mechanical linearity then implies that the rare disagreements between $f$ and $A$ will have at most a small effect on the future behavior of $Q$.

2 Preliminaries

For a positive integer $n$, we use $[n]$ to denote the integers from $1$ to $n$. Logarithms are base $2$.

2.1 Quantum Circuits

We now introduce some notations for quantum circuits, which will be used throughout this paper. In a quantum circuit, without loss of generality, we assume all gates are unitary and acting on exactly two qubits each.  (Footnote: Except for oracle gates, which may act on any number of qubits.)  Given a quantum circuit $C$, slightly abusing notation, we also use $C$ to denote the unitary operator induced by $C$. Suppose there are $n$ qubits and $m$ gates in $C$; then we index the qubits from $1$ to $n$. We also index gates from $1$ to $m$ in chronological order for convenience. For each subset $S$ of the qubits, let $\mathcal{H}_S$ be the Hilbert space corresponding to the qubits in $S$, and $I_S$ be the identity operator on $\mathcal{H}_S$. Then the unitary operator for the $i$-th gate can be written as $U_i\otimes I_{\bar S_i}$, in which $U_i$ is a unitary operator on $\mathcal{H}_{S_i}$ (the Hilbert space spanned by the pair of qubits $S_i$ on which the gate acts), and $I_{\bar S_i}$ is the identity operator on the other qubits. We say that a quantum circuit has depth $d$, if its gates can be partitioned into $d$ layers (in chronological order), such that the gates in each layer act on disjoint pairs of qubits. We define $C_{i\to j}$ to be the product of the layers $i$ through $j$, that is, the sub-circuit between the $i$-th layer and the $j$-th layer.

Base Graphs and Grids

In Sections 3 and 4, we will sometimes assume locality of a given quantum circuit.  To formalize this notion, we define the base graph of a quantum circuit.

Definition 2.1. Given a quantum circuit $C$ on $n$ qubits, its base graph is an undirected graph $G_C=(V,E)$ defined by $V=[n]$, and $E=\{(a,b):\text{some gate in }C\text{ acts on qubits }a\text{ and }b\}$.

We will consider a specific kind of base graph, the grids.

Definition 2.2. The grid of size $H\times W$ is a graph with vertices $\{(i,j):i\in[H],\,j\in[W]\}$ and edges between vertices at unit distance, and we say that the grid has $H$ rows and $W$ columns.

2.2 Complexity Classes for Sampling Problems

Definitions for SampBPP and SampBQP

We adopt the following definition for sampling problems from [Aar14].
Definition 2.3 (Sampling Problems, SampBPP, and SampBQP). A sampling problem $S$ is a collection of probability distributions $(\mathcal{D}_x)_x$, one for each input string $x\in\{0,1\}^n$, where $\mathcal{D}_x$ is a distribution over $\{0,1\}^{p(n)}$, for some fixed polynomial $p$.  Then SampBPP is the class of sampling problems $S=(\mathcal{D}_x)_x$ for which there exists a probabilistic polynomial-time algorithm $B$ that, given $\langle x,0^{\lceil 1/\varepsilon\rceil}\rangle$ as input, samples from a probability distribution $\mathcal{C}_x$ such that $\|\mathcal{C}_x-\mathcal{D}_x\|\le\varepsilon$.  SampBQP is defined the same way, except that $B$ is a quantum algorithm rather than a classical one.

Oracle versions of these classes can also be defined in the natural way.

A Canonical Form of SampBQP Oracle Algorithms

To ease our discussion about SampBQP oracle algorithms, we describe a canonical form for them. Any other reasonable definition of SampBQP oracle algorithms (like one based on quantum oracle Turing machines) can be transformed into this form easily. Without loss of generality, we can assume a SampBQP oracle algorithm $M$ with oracle access to $O_1,\dots,O_k$ ($k$ is a universal constant) acts in three stages, as follows.

1. Given an input $\langle x,0^{\lceil 1/\varepsilon\rceil}\rangle$, $M$ first uses a classical routine (which does not use the oracles) to output a quantum circuit $C$ with $n$ qubits and $m$ gates in polynomial time, where $n$ and $m$ are bounded by a fixed polynomial in the input length.  Note that $C$ can use the oracle gates $O_1,\dots,O_k$ in addition to a universal set of quantum gates.

2. Then $M$ runs the outputted quantum circuit with the initial state $|0^n\rangle$, and measures all the qubits to get an outcome $z$ in $\{0,1\}^n$.

3. Finally, $M$ uses another classical routine (which does not use the oracles) on the input $z$, to output its final sample.

Clearly, $M$ solves different sampling problems (or does not solve any sampling problem at all) given different oracles.  Therefore, we use $M^{O_1,\dots,O_k}$ to indicate the particular algorithm when the oracles are $O_1,\dots,O_k$.

2.3 Distinguishing Two Pure Quantum States

We also need a standard result for distinguishing two pure quantum states.

Theorem 2.4 (Helstrom's decoder for two pure states). The maximum success probability for distinguishing two pure quantum states $|\psi\rangle$ and $|\varphi\rangle$, given with prior probabilities $p$ and $1-p$, is $$P_{\rm opt}=\frac{1+\sqrt{1-4p(1-p)F^2}}{2},$$ where $F=|\langle\psi|\varphi\rangle|$ is the fidelity between the two states.

We'll also need that for two similar quantum states, the distributions induced by measuring them are close.

Corollary 2.5. Let $|\psi\rangle$ and $|\varphi\rangle$ be two pure quantum states such that $F=|\langle\psi|\varphi\rangle|\ge 1-\varepsilon$. For a quantum state $|\chi\rangle$, let $\mathcal{D}_\chi$ be the distribution on $\{0,1\}^n$ induced by some fixed quantum sampling procedure applied to $|\chi\rangle$. Then $\|\mathcal{D}_\psi-\mathcal{D}_\varphi\|\le\sqrt{2\varepsilon}$.

Proof. Fix prior probabilities $1/2$ and $1/2$. Note that we have a distinguisher of $|\psi\rangle$ and $|\varphi\rangle$ with success probability $\frac{1}{2}(1+\|\mathcal{D}_\psi-\mathcal{D}_\varphi\|)$ by invoking that quantum sampling procedure. By the assumption, $F\ge 1-\varepsilon$, hence $1-F^2\le 2\varepsilon$. So we have $$\frac{1+\|\mathcal{D}_\psi-\mathcal{D}_\varphi\|}{2}\le\frac{1+\sqrt{1-F^2}}{2}.$$ This implies $\|\mathcal{D}_\psi-\mathcal{D}_\varphi\|\le\sqrt{2\varepsilon}$. ∎

2.4 A Multiplicative Chernoff Bound

Lemma 2.6. Suppose $X_1,\dots,X_k$ are independent random variables taking values in $[0,1]$. Let $X$ denote their sum and let $\mu=\mathbb{E}[X]$. Then for any $0<\delta<1$, we have $$\Pr[X\le(1-\delta)\mu]\le e^{-\delta^2\mu/2}.$$

Corollary 2.7. For any $N>0$, suppose $X_1,\dots,X_k$ are independent random variables taking values in $[0,N]$. Let $X$ denote their sum and let $\mu=\mathbb{E}[X]$. Then for any $0<\delta<1$, we have $$\Pr[X\le(1-\delta)\mu]\le e^{-\delta^2\mu/(2N)}.$$

Proof. Replace each $X_i$ by $X_i/N$ and apply the previous lemma. ∎

3 The Hardness of Quantum Circuit Sampling

We now discuss our random quantum circuit proposal for demonstrating quantum supremacy.

3.1 Preliminaries

We first introduce some notations.  We use $\mathcal{U}(N)$ to denote the group of $N\times N$ unitary matrices, and $\mu_{\mathcal{U}(N)}$ for the Haar measure on $\mathcal{U}(N)$; the Haar measure on pure states is induced in the natural way.  For a pure state $|\psi\rangle$ on $n$ qubits, we define $\mathrm{probList}(|\psi\rangle)$ to be the list consisting of $2^n$ numbers, $|\langle x|\psi\rangle|^2$ for each $x\in\{0,1\}^n$.  Given $N$ real numbers $a_1,\dots,a_N$, we use $\mathrm{uphalf}(a_1,\dots,a_N)$ to denote the sum of the largest $N/2$ numbers among them, and we let $\mathrm{adv}(|\psi\rangle)=\mathrm{uphalf}(\mathrm{probList}(|\psi\rangle))$.  Finally, we say that an output $z$ is heavy for a quantum circuit $C$, if $|\langle z|C|0^n\rangle|^2$ is greater than the median of $\mathrm{probList}(C|0^n\rangle)$.
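Theorem 2.4 can be sanity-checked numerically: the optimal success probability also equals $\frac{1}{2}(1+\|p\,\rho_\psi-(1-p)\,\rho_\varphi\|_1)$, which for pure states reduces to the closed form above.  A small sketch of ours, with arbitrary test states:

```python
import numpy as np

def helstrom_success(psi, phi, p):
    # Optimal distinguishing probability via the trace-norm formula.
    rho = p * np.outer(psi, psi.conj()) - (1 - p) * np.outer(phi, phi.conj())
    trace_norm = np.abs(np.linalg.eigvalsh(rho)).sum()
    return 0.5 * (1 + trace_norm)

def helstrom_closed_form(psi, phi, p):
    # Closed form for pure states, with fidelity F = |<psi|phi>|.
    F = abs(np.vdot(psi, phi))
    return 0.5 * (1 + np.sqrt(1 - 4 * p * (1 - p) * F**2))

rng = np.random.default_rng(1)
v, w = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
psi, phi = v / np.linalg.norm(v), w / np.linalg.norm(w)
print(helstrom_success(psi, phi, 0.3))      # the two values agree
print(helstrom_closed_form(psi, phi, 0.3))
```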
3.2 Random quantum circuits on grids

Recall that we assume a quantum circuit consists of only 2-qubit gates.  Our random quantum circuit on grids, with $n$ qubits and $m$ gates (assuming $m\ge n$), is generated as follows (though the basic structure of our hardness argument will not be very sensitive to details, and would also work for many other circuit ensembles):

• All the qubits are arranged as a grid (see Definition 2.2), and a gate can only act on two adjacent qubits.

• For each $i$ with $1\le i\le n$, we pick the $i$-th qubit and a random neighbor of it.  (Footnote: The purpose here is to make sure that there is a gate on every qubit.)

• For each $i$ with $n<i\le m$, we pick a uniform random pair of adjacent qubits in the grid.

• Then, in either case, we set the $i$-th gate to be a unitary drawn from $\mu_{\mathcal{U}(4)}$ acting on these two qubits.

Slightly abusing notation, we use the same symbol for both the above distribution on quantum circuits and the distribution on $\mathcal{U}(2^n)$ induced by it.  (A minimal generator for this ensemble is sketched after this subsection.)

Conditional distribution.  For convenience, for a quantum circuit $C$, we abbreviate $\mathrm{adv}(C|0^n\rangle)$ as $\mathrm{adv}(C)$.  Consider a simple quantum algorithm which measures $C|0^n\rangle$ in the computational basis to get an output $z$.  Then by definition, $\mathrm{adv}(C)$ is simply the probability that $z$ is heavy for $C$.  We want that, when a quantum circuit $C$ is drawn, $\mathrm{adv}(C)$ is large (that is, bounded above $1/2$), and therefore the simple quantum algorithm has a substantial advantage in generating a heavy output, compared with the trivial algorithm of guessing a random string.  For convenience, we also consider the following conditional distribution: it keeps drawing a circuit until the sampled circuit $C$ satisfies $\mathrm{adv}(C)\ge 0.7$.

Lower bound on the expected advantage.  We need to show that a randomly drawn circuit has a large probability of having large $\mathrm{adv}(C)$.  In order to show that, we give a cute and simple lemma, which states that the expectation of $\mathrm{adv}(C)$ is large.  Surprisingly, its proof only makes use of the randomness introduced by the very last gate!

Lemma 3.1. For $n\ge 2$ and $m\ge n$, $$\mathbb{E}_C[\mathrm{adv}(C)]\ge\frac{1}{2}+c$$ for some universal constant $c>0$.

In fact, we conjecture that $\mathrm{adv}(C)$ is large with an overwhelming probability.

Conjecture 1. For $n\ge 2$ and $m\ge n^2$, and for all constants $\varepsilon>0$, $$\Pr_C\!\left[\mathrm{adv}(C)\ge\frac{1+\ln 2}{2}-\varepsilon\right]=1-o(1).$$

We give some numerical simulation evidence for Conjecture 1 in Appendix E.

Remark 3.2. Assuming Conjecture 1, in practice, one can sample from the conditional distribution by simply sampling from the unconditioned ensemble; doing so only introduces an error probability of $o(1)$.

3.3 The HOG Problem

Now we formally define the task in our quantum algorithm proposal.

Problem 1 (HOG, or Heavy Output Generation). Given a random quantum circuit $C$ drawn from the conditional distribution above, with $m\ge n^2$, generate binary strings $z_1,\dots,z_k$ in $\{0,1\}^n$ such that at least a $2/3$ fraction of the $z_i$'s are heavy for $C$.

The following proposition states that there is a simple quantum algorithm which solves the above problem with overwhelming probability.

Proposition 3.3. There is a quantum algorithm that succeeds at HOG with probability $1-e^{-\Omega(k)}$.

Proof. The algorithm just simulates the circuit with initial state $|0^n\rangle$, then measures in the computational basis $k$ times independently to output $k$ binary strings.  From the definition of the conditional distribution, we have $\mathrm{adv}(C)\ge 0.7$.  So by a Chernoff bound, with probability $1-e^{-\Omega(k)}$, at least a $2/3$ fraction of the $z_i$'s are heavy for $C$, in which case the algorithm solves HOG. ∎

3.4 Classical Hardness Assuming QUATH

We now state our classical hardness assumption.

Assumption 1 (QUATH, or the Quantum Threshold assumption). There is no polynomial-time classical algorithm that takes as input a random quantum circuit $C$ (with $m\ge n^2$) and decides whether $0^n$ is heavy for $C$ with success probability at least $\frac{1}{2}+\Omega(2^{-n})$.

Remark 3.4. Note that $\frac{1}{2}$ is the success probability obtained by always outputting either "heavy" or "not heavy".  Therefore, the above assumption means that no efficient algorithm can beat the trivial algorithm even by $\Omega(2^{-n})$.
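For concreteness, here is a minimal generator for the circuit ensemble of Section 3.2.  This is our own sketch (the helper names `haar_unitary`, `grid_edges`, `neighbors` are ours); gate placement follows the bullet rules above, and Haar-random $4\times 4$ unitaries are drawn via the standard QR trick.

```python
import numpy as np

def haar_unitary(dim, rng):
    # QR of a complex Gaussian matrix, with the phases of R's diagonal
    # fixed, gives a Haar-distributed unitary.
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def neighbors(v, H, W):
    i, j = v
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < H and 0 <= b < W]

def grid_edges(H, W):
    edges = []
    for i in range(H):
        for j in range(W):
            if i + 1 < H:
                edges.append(((i, j), (i + 1, j)))
            if j + 1 < W:
                edges.append(((i, j), (i, j + 1)))
    return edges

def random_grid_circuit(H, W, m, rng):
    # First n = H*W gates: the i-th qubit paired with a random neighbor
    # (so every qubit is touched); remaining gates: uniform random edges.
    qubits = [(i, j) for i in range(H) for j in range(W)]
    edges = grid_edges(H, W)
    gates = []
    for k in range(m):
        if k < len(qubits):
            a = qubits[k]
            nb = neighbors(a, H, W)
            pair = (a, nb[rng.integers(len(nb))])
        else:
            pair = edges[rng.integers(len(edges))]
        gates.append((pair, haar_unitary(4, rng)))
    return gates

rng = np.random.default_rng(0)
circuit = random_grid_circuit(3, 3, m=20, rng=rng)
```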
Next, we show that QUATH implies that no efficient classical algorithm can solve HOG.

Theorem 3.5. Assuming QUATH, no polynomial-time classical algorithm can solve HOG with probability at least $0.99$.

Proof. Suppose by contradiction that there is such a classical polynomial-time algorithm $A$.  Using $A$, we will construct an algorithm to violate QUATH.

The algorithm is quite simple.  Given a quantum circuit $C$, we first draw a uniform random string $y\in\{0,1\}^n$.  Then for each $i$ such that $y_i=1$, we apply an $X$ gate on the $i$-th qubit at the end of the circuit.  Note that this gate can be "absorbed" into the last gate acting on the $i$-th qubit in $C$.  Hence, we still get a circuit $C_y$ with $m$ gates.  Moreover, it is easy to see that $C_y$ is distributed exactly the same as $C$, even conditioning on a particular $y$, and we have $\langle y|C_y|0^n\rangle=\langle 0^n|C|0^n\rangle$, which means that $y$ is heavy for $C_y$ if and only if $0^n$ is heavy for $C$.

Next our algorithm runs $A$ on the circuit $C_y$ to get outputs $z_1,\dots,z_k$, and picks an output $z$ among these $k$ outputs uniformly at random.  If $z=y$, then the algorithm outputs "heavy"; otherwise it outputs a uniform random guess.

Since $A$ solves HOG with probability at least $0.99$, we have that each $z_i$ is heavy for $C_y$ with probability at least $\frac{2}{3}\cdot 0.99>0.6$.  Now, since $y$ is a uniform random string, a direct calculation shows that the probability that our algorithm decides correctly whether $0^n$ is heavy for $C$ is at least $\frac{1}{2}+\Omega(2^{-n})$.  But this contradicts QUATH, so we are done. ∎

3.5 Proof of Lemma 3.1

We first need a simple lemma which helps us to lower-bound $\mathrm{adv}(|\psi\rangle)$.  For a pure quantum state $|\psi\rangle$, define $\Lambda(|\psi\rangle)$ to be the total variation distance between $\mathrm{probList}(|\psi\rangle)$ and the uniform distribution.  In other words, $\Lambda(|\psi\rangle)$ measures the non-uniformity of the distribution obtained by measuring $|\psi\rangle$ in the computational basis.  The next lemma shows that, when $\Lambda(|\psi\rangle)$ is large, so is $\mathrm{adv}(|\psi\rangle)$.  Therefore, in order to establish Lemma 3.1, it suffices to lower-bound the expectation of $\Lambda$.

Lemma 3.6. For a pure quantum state $|\psi\rangle$, we have $$\mathrm{adv}(|\psi\rangle)\ge\frac{1}{2}+\frac{\Lambda(|\psi\rangle)}{2}.$$

We will also need the following technical lemma.

Lemma 3.7. Let $|\psi\rangle$ be a Haar-random pure state on one qubit.  Then $\mathbb{E}[\Lambda(|\psi\rangle)]$ is lower-bounded by a universal constant.

The proofs of Lemma 3.6 and Lemma 3.7 are based on simple but tedious calculations, so we defer them to Appendix B.  Now we are ready to prove Lemma 3.1.

Proof of Lemma 3.1.  Surprisingly, our proof only uses the randomness introduced by the very last gate.  That is, the claim holds even if there is an adversary who fixes all the gates except for the last one.

From Lemma 3.6, it suffices to show that $\mathbb{E}_C[\Lambda(C|0^n\rangle)]$ is lower-bounded by a universal constant.  Suppose the last gate acts on qubits $a$ and $b$.  Let the unitary corresponding to the circuit before applying the last gate be $V$, and $|\varphi\rangle=V|0^n\rangle$.  Now, suppose we apply another unitary $U$ drawn from $\mu_{\mathcal{U}(2)}$ on the qubit $a$.  It is not hard to see that the circuit with this extra gate and the original circuit are identically distributed, since the extra gate can be absorbed into the Haar-random last gate.  So it suffices to show that the expected non-uniformity after the extra gate is lower-bounded by a universal constant.  We are going to show that the above holds even for a fixed last gate; let $|\psi\rangle$ denote the state just before the extra gate $U$ is applied.

Without loss of generality, we can assume that $a$ is the last qubit.  Then we write $$|\psi\rangle=\sum_{w\in\{0,1\}^{n-1}}|w\rangle\otimes\big(\alpha_w|0\rangle+\beta_w|1\rangle\big).$$

Now we partition the $2^n$ basis states into $2^{n-1}$ buckets, one for each string $w$ in $\{0,1\}^{n-1}$.  That is, for each $w$, there is a bucket that consists of the basis states $|w\rangle|0\rangle$ and $|w\rangle|1\rangle$.  Note that since $U$ acts on the last qubit, only amplitudes of basis states in the same bucket can affect each other.

For a given $w$, if both $\alpha_w$ and $\beta_w$ are zero, we simply ignore this bucket.  Otherwise, we can define a quantum state $$|\psi_w\rangle=\frac{\alpha_w|0\rangle+\beta_w|1\rangle}{\sqrt{|\alpha_w|^2+|\beta_w|^2}}.$$ Clearly, we have $\sum_w p_w=1$ with $p_w=|\alpha_w|^2+|\beta_w|^2$, and the state restricted to bucket $w$ is $\sqrt{p_w}\,U|\psi_w\rangle$.  Plugging in, and using the triangle inequality, the non-uniformity of the full state is lower-bounded by a weighted sum, over the buckets, of the non-uniformity of the two-outcome distributions induced by the states $U|\psi_w\rangle$.  Now, since $|\psi_w\rangle$ is a pure state, and $U$ is drawn from $\mu_{\mathcal{U}(2)}$, we see that $U|\psi_w\rangle$ is distributed as a Haar-random pure state on one qubit.  So from Lemma 3.7, each bucket contributes a constant in expectation.  Summing up over the $w$'s, we obtain a universal constant lower bound, which completes the proof.

4 New Algorithms to Simulate Quantum Circuits

In this section, we present two algorithms for simulating a quantum circuit with $n$ qubits and $m$ gates: one algorithm for arbitrary circuits, and another for circuits that act locally on grids.
What's new about these algorithms is that they use both polynomial space and close to $2^n$ time (but despite that, they don't violate the QUATH assumption from Section 3, for the reason pointed out in Section 1.3).  Previously, it was known how to simulate a quantum circuit in polynomial space and roughly $4^m$ time (as in the usual proof of $\mathsf{BQP}\subseteq\mathsf{PSPACE}$), or in $2^{O(n)}$ space and time (Schrödinger-style simulation).  In addition, we provide a time-space trade-off scheme, which enables even faster simulation at the cost of more space usage.  See Section 2.1 for the quantum circuit notations that are used throughout this section.

4.1 Polynomial-Space Simulation Algorithms for General Quantum Circuits

We first present a simple recursive algorithm for general circuits.

Theorem 4.1. Given a quantum circuit $C$ on $n$ qubits with depth $d$, and two computational basis states $|x\rangle,|y\rangle$, we can compute $\langle y|C|x\rangle$ in $d^{O(n)}$ time and $O(n\log d)$ space.

Proof. In the base case $d=1$, the answer can be trivially computed in $\mathrm{poly}(n)$ time.  When $d>1$, we have $$\langle y|C|x\rangle=\sum_{z\in\{0,1\}^n}\langle y|C_2|z\rangle\langle z|C_1|x\rangle,\qquad(2)$$ where $C_1$ consists of the first $\lceil d/2\rceil$ layers of $C$ and $C_2$ of the remaining layers.  Then, for each $z$, we calculate $\langle y|C_2|z\rangle$ and $\langle z|C_1|x\rangle$ by recursively calling the algorithm on the two sub-circuits $C_1$ and $C_2$ respectively, and sum the products up to calculate (2).

It is easy to see the above algorithm is correct, and its running time can be analyzed as follows: let $T(d)$ be its running time on a circuit of $d$ layers; then we have $T(1)=\mathrm{poly}(n)$, and by the above discussion $$T(d)=2^{n+1}\,T(d/2),$$ which proves our running time bound.  Finally, we can see that at each recursion level, we need $O(n)$ space to save the index of the intermediate state $z$, and $O(n)$ space to store an intermediate answer.  Since there are at most $O(\log d)$ recursion levels, the total space is bounded by $O(n\log d)$.  (A short implementation of this recursion is sketched at the end of this section.) ∎

4.2 Faster Polynomial-Space Simulation Algorithms for Grid Quantum Circuits

When a quantum circuit is spatially local, i.e., its base graph can be embedded on a grid, we can further speed up the simulation with a more sophisticated algorithm.

We first introduce a simple lemma which shows that we can find a small balanced cut in a two-dimensional grid.

Lemma 4.2. Given a grid of size $H\times W$ such that $H\ge W$, we can find a subset $S$ of its edges such that

• $|S|\le W$, and

• after $S$ is removed, the grid becomes a union of two disconnected grids, each of size smaller than $\frac{2}{3}HW$.

Proof. We can assume $H\ge W$ without loss of generality, and simply set $S$ to be the set of all the edges between the $\lceil H/2\rceil$-th row and the $(\lceil H/2\rceil+1)$-th row; then both claims are easy to verify. ∎

We now present a faster algorithm for simulating quantum circuits on grids.

Theorem 4.3. Given a quantum circuit $C$ on $n$ qubits with depth $d$, and two computational basis states $|x\rangle,|y\rangle$, assuming that the base graph of $C$ can be embedded into a two-dimensional grid of size $H\times W$ with $H\ge W$ (with the embedding explicitly specified), we can compute $\langle y|C|x\rangle$ in time exponential in $d\cdot W$ (up to polynomial factors) and in polynomial space.

Proof. For ease of presentation, we slightly generalize the definition of quantum circuits: now each gate can be a 2-qubit gate, a 1-qubit gate, or simply a scalar (a 0-qubit gate, which is introduced just for convenience).

The algorithm works by trying to break the current large instance into many small instances which we then solve recursively.  But unlike the algorithm in Theorem 4.1, which reduces an instance to many sub-instances with fewer gates, our algorithm here reduces an instance to many sub-instances with fewer qubits.

The base case, $n=1$ qubit. In this case, all the gates are either 1-qubit or 0-qubit; hence the answer can be calculated straightforwardly in $O(m)$ time and constant space.

Cutting the grid by a small set. When $n>1$, by Lemma 4.2, we can find a subset $S$ of edges with $|S|\le W$.  After $S$ is removed, the grid becomes a union of two disconnected grids.
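The following is a compact implementation of the recursion behind Theorem 4.1.  It is our own sketch, not the paper's code: the circuit is represented as a list of depth-1 layers, each a list of `((a, b), U)` gates on $n$ qubits, and memory usage is dominated by the recursion stack, at the price of $d^{O(n)}$ time.

```python
import numpy as np
from itertools import product

def layer_amplitude(layer, n, x, z):
    # <z| L |x> for a single layer L of disjoint two-qubit gates.
    amp, touched = 1.0 + 0j, set()
    for (a, b), U in layer:
        row = 2 * z[a] + z[b]     # output bits on qubits (a, b)
        col = 2 * x[a] + x[b]     # input bits on qubits (a, b)
        amp *= U[row, col]
        touched.update((a, b))
    for q in range(n):
        if q not in touched and x[q] != z[q]:
            return 0.0            # untouched qubits must keep their bit
    return amp

def amplitude(layers, n, x, y):
    # <y| C |x> by recursively splitting C into halves (Savitch-style).
    if len(layers) == 1:
        return layer_amplitude(layers[0], n, x, y)
    half = len(layers) // 2
    total = 0.0 + 0j
    for z in product((0, 1), repeat=n):   # 2^n intermediate basis states
        left = amplitude(layers[:half], n, x, z)
        if left != 0.0:
            total += amplitude(layers[half:], n, z, y) * left
    return total
```

The running-time recurrence $T(d)=2^{n+1}\,T(d/2)$ is visible in the loop structure: $2^n$ intermediate states, two recursive calls each.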
Read about equation | 128 Discussions | Page 1

1. Y — Engineering equation for a DC motor driving an arm

2. B — Topological insulators and their optical properties
I have tried to write down the boundary conditions in this case and looked into them. As conditions i) and ii) were trivial, I looked into iii) and iv) for information that I could use. But all I got was that for the transmitted wave to have an angle, the reflective wave should also have an...

3. Boltzman Oscillation — How can I create an equation in MATLAB for image processing?
Here is the documentation for the 2D FFT: how would I go about creating this formula in MATLAB to apply it to an image? My guess is that I need to create the equation and then multiply it by the image I need, such as: $$U = VI$$ where V is my...

4. S R Wilder — Is it the Thévenin theorem? I just need the meaning of In.

5. ArcHorizon — Physics modeling of a gas
This was the equation that they showed me. I thought P was for pressure, V for volume, T for temperature, R for the gas constant, and n for the number of moles. Was I correct about the initials?

6. Z — Time needed for a pressurized N2O cylinder to reach the apex of its travel as a projectile...
Hello, I am trying to understand the maths/physics/chemistry behind this situation. Here is the scenario. I have 8 grams of pressurized N2O in a cylinder at 60 bar / 900 psi. If the temperature stays constant (let's say 50-70°C, or at a temperature where the N2O can stay as pressurized as...

7. N — Python: Convert an equation to Python
I have several equations and need to convert them into Python. The problem is that I tried to plot a graph according to the equation. However, the graph that I get is not the same as the original one. In the paper, the equation of error probability for the MIM attack is given by: First Image Second...

8. komarxian — Multivariable calculus problem
Homework Statement: Find the points on the surface xy^2z^3 = 2 that are closest to the origin.
The Attempt at a Solution: x, y, z ≠ 0, as when x, y, z = 0 it is untrue. Right?? Otherwise, I am very unsure as to how to approach this problem. Should I be taking partial derivatives...

9. M — How do I find a plane that contains two given lines?
Homework Statement: a. Find the point where these lines intersect. b. Find the equation of a plane that contains the two lines.
Homework Equations: r = <1,3,0> + t<3,-3,2>; r = <4,0,2> + s<-3,3,0>.
The Attempt at a Solution: I correctly found the point of intersection to be...

10. José Ricardo — Ellipse graph
Homework Statement: Graph the ellipse 4x² + 2y² = 1.
The Attempt at a Solution: 2x² + y²/2 = 1/2. I searched for exercises on Google, and I didn't find an equation like that. I watched video lessons too, but they didn't teach this type of equation.

11. G — B: Equation for the resolving power of a microscope?
Hi, I'm reading through a quantum mechanics textbook, Quantum Mechanics by Alastair I. M. Rae, and in the opening chapter it talks about the Heisenberg uncertainty principle and about how a measurement of the position of a particle causes an uncertainty in the momentum due to the...

12. L — Range calculation for an Airbus A320 aircraft
Homework Statement: Hi all, the task I struggle with is the range calculation for an Airbus A320 with the Breguet range equation, which is defined as:
Homework Equations: R = (cl/cd) * (V/(g*SFC)) * ln(w0/w1), with V = velocity, g = gravity, SFC = specific fuel consumption, w0 = ...

13. concernedhuman — B: Is there such a thing as Gm/r?
Since F_g = Gm₁m₂/r² and Coulomb's law is similar to that, F_e = kQq/r², and we also have E = kQ/r² and g = Gm/r² being alike, I was wondering if there's anything that corresponds to the potential equation kQ/r. I converted it myself and figured that it's going to be Gm/r, and I'm not sure if a...

14. hugo_faurand — B: Equation with modulus
Hello everyone! I'd like to know if it's possible to solve an equation with a modulus like this one: $$\left(\frac{200}{15x}\right) \bmod 2 = 0$$ Thanks in advance. Regards!

15. Subrahmanyan — To find the nature of the roots of a quintic equation
The question asks us to find the nature of the roots of the following equation, i.e., the rational or irrational nature of the roots. The equation is: x^5 + x = 5. I have been able to prove that this equation has one positive real root through the use of calculus (it is an increasing function) and the fact that...

16. pairofstrings — B: What is the connection between x^2 and a square shape?
Hello. The curve y = x² is a parabola that looks like this: I have a square shape that looks like this: What I am noticing is that if I consider the equation y = x² and also the square shape, I find that there is no connection between them, but the equation y = x² is pronounced "x squared"...

17. S — Equation construction (need help please)
Hello everyone, I need some help (or guidance). I have an equation f(x) = A/x². I need to construct the equation g(f(x)) with the following conditions:
- when f(x) → 0, then g(f(x)) → 1;
- when f(x) → Fmax, then g(f(x)) → 0 (this is important: there is some fixed Fmax value at which g(f(x))...

18. B — Two pencils of planes have a common plane
Homework Statement: Find the value of the parameter α for which the pencil of planes through the straight line AB has a common plane with the pencil of planes through the straight line CD, where A(1, 2α, α), B(3, 2, 1), C(−α, 0, α) and D(−1, 3, −3).
Homework Equations: Let Δ be a line given by...

19. shintashi — B: Equations vs. functions, quadratic and cubic?
So if I take the rule that a vertical line drawn through the graph with more than one intersection implies it is not a function, this means that the quadratic equation for a circle is not a function. Furthermore, it also implies that a cubic equation, such as x³, can be a function, because...

20. D — B: Showing/proving a physical relationship
I derived a relationship between the frequency and tension of a string, accounting for tension's effect on the linear density of the string. So in a nutshell, the equation is more complicated and is of the form f² = aT² + bT (f is frequency, T is tension, and a, b are constants involving the control...

21. R — Creating a system of equations from a word problem (optimization)
I have this word problem and was wondering how I would go about creating a system of equations. Here is the question. Problem: You are a small forest landowner and decide you want to sustainably harvest some of the timber on your property. There are costs related to the infrastructure needed to...

22. A — I: Empirical equation from two variables (1 input and 1 output)
Hi, I have empirical data from my experiments. There are 2 columns of data (2 interdependent variables: temperature and viscosity). One column (temperature) is the input variable (the temperature of the tested material; once it was melted, it was gradually increased during the experiment). One column (viscosity)...

23. SemM — A: How does one "design" a PDE from a physical phenomenon?
Hi, I have read some on the PDEs for fluids, and particularly for rogue waves, where for instance the extended Dysthe equation and the NLSE look rather intimidating. Take for instance the non-linear Schrödinger equation: \begin{equation} \frac{\partial^2 u}{\partial x^2}-i\frac{\partial d...

24. G — Potential energy change
Homework Statement: A 95-kg mountain climber hikes up a mountain to an elevation of 5000 m. What is the change in the climber's potential energy?
Homework Equations: I might be missing something, but here's everything that might be relevant: W = Fx, P = W/t, P = F·v, KE = ½mv², PE_g = mgh, PE_e = ½kx², W_net = ΔE...

25. G — B: What is the relation of mass and power?
How should power be calculated in a situation where distance and time are both given, as well as mass? At first, I was thinking of just using W = Fx (force as mass × 9.8) to solve for work, and then I would take the solution for work and put it into P = W/t. Does this make sense? That's my best...

26. S — A: Solving the Schrödinger equation for free electrons
Dear all, sorry I made a new post similar to the previous post "Initial conditions..."; however, a critical point was missed in the previous discussion: the initial conditions y(0) = 1 and y'(0) = 0 are fine and help in solving the Schrödinger equation; however, studying free electrons, the equation...

27. S — LaTeX: Split up a horrible equation
Hi, how do I split up this horrible equation into several lines? Given in LaTeX code: \Psi_2 = ...

28. A — Solution of an exponential equation
Homework Statement: I am trying to solve an equation: 128^b − 127^b = 147.058.
The Attempt at a Solution: I have tried numerical methods like the bisection method and Newton-Raphson, but I need an analytical solution. Thank you.

29. Alexander350 — B: Solving a differential equation with a unit vector in it
I need to solve: $$\dot{\mathbf{r}} = -kv\hat{r} - \dot{\mathbf{r}}_s$$ However, I do not know how to deal with the fact that there is a unit vector. How can this be done? $\dot{\mathbf{r}}_s$ is a constant vector.
Slave-rotor mean field theories of strongly correlated systems and the Mott transition in finite dimensions

Serge Florens — Institut für Theorie der Kondensierten Materie, Universität Karlsruhe, 76128 Karlsruhe, Germany; Laboratoire de Physique Théorique, Ecole Normale Supérieure, 24 rue Lhomond, 75231 Paris Cedex 05, France.    Antoine Georges — Centre de Physique Théorique, Ecole Polytechnique, 91128 Palaiseau Cedex; Laboratoire de Physique Théorique, Ecole Normale Supérieure, 24 rue Lhomond, 75231 Paris Cedex 05, France.

The multiorbital Hubbard model is expressed in terms of quantum phase variables ("slave rotors") conjugate to the local charge, and of auxiliary fermions, providing an economical representation of the Hilbert space of strongly correlated systems. When the phase variables are treated in a local mean-field manner, results similar to those of the dynamical mean-field theory are obtained, namely a Brinkman-Rice transition at commensurate fillings together with a "preformed" Mott gap in the single-particle density of states. The slave-rotor formalism makes it possible to go beyond the local description and take into account spatial correlations, following an analogy to the superfluid-insulator transition of bosonic systems. We find that the divergence of the effective mass at the metal-insulator transition is suppressed by short-range magnetic correlations in finite-dimensional systems. Furthermore, the strict separation of energy scales between the Fermi-liquid coherence scale and the Mott gap, found in the local picture, holds only approximately in finite dimensions, due to the existence of low-energy collective modes related to zero-sound.

I Introduction

Strongly correlated fermion systems constitute a challenge, both from a fundamental point of view (with phenomena such as the Mott transition Imada et al. (1998) and high-temperature superconductivity), and on a more quantitative level, with the need for reliable tools to handle intermediate and strong coupling regimes (even for simplified models such as the Hubbard model). In recent years, the dynamical mean-field theory (DMFT) has allowed for significant progress in this respect Georges et al. (1996). In particular, this approach has led to a detailed theory of the Mott transition, and to a quantitative description of the physics of strongly correlated metals. Despite these successes, the limitations of this approach have been emphasized on many occasions. The main one has to do with the effect of spatial correlations (e.g. magnetic short-range correlations), and more precisely with the effect of these correlations on the properties of quasiparticles. For example, the tendency to form singlet bonds due to superexchange is widely believed to be a key physical effect in weakly doped Mott insulators. Also, at the technical level, the application of DMFT to materials with a large orbital degeneracy (e.g. in combination with ab-initio methods Held et al. (2001); Lichtenstein et al. (2002); Georges (2004)), as well as cluster extensions of DMFT Georges et al. (1996); Maier et al. (2004), are computationally challenging because they involve the solution of a multi-orbital quantum impurity model. For these reasons, there is still a strong need for approximate, simpler treatments of strongly correlated fermion models. Those treatments should incorporate some of the DMFT successes (e.g. regarding the description of the metal-insulator transition), but they should also pave the road for describing physical effects beyond DMFT, at least at a qualitative level.
The purpose of this paper is to present a simple mean-field description of correlated systems which fulfills some of these goals. Our main idea is to focus on the degrees of freedom associated with the physical variable relevant to the Mott transition, namely a slave quantum rotor field, dual to the local electronic charge. This slave-rotor representation was introduced previously by us for the description of quantum impurity models and mesoscopic devices Florens and Georges (2002); Florens et al. (2003), and is applied here in the context of lattice models. This allows for a simple reformulation of the orbitally degenerate Hubbard model, which, when the interaction has full orbital symmetry, is quite superior to previously developed slave-boson representations Kotliar and Ruckenstein (1986); Hasegawa (1997); Florens et al. (2002). When the simplest possible (single-site) mean-field approximation is used in conjunction with this slave-rotor representation, a description of the Mott transition very similar to that of DMFT is found. The metallic phase disappears through a Brinkman-Rice transition, at which the quasiparticle weight vanishes and the effective mass diverges. The slave-rotor approach does preserve Hubbard bands in the insulator, and a "preformed" Mott spectral gap opening up discontinuously at the transition is found, as in DMFT. The most interesting aspect of our approach lies however in the possibility of going beyond this purely local mean-field description. By decoupling the spinon and slave-rotor degrees of freedom, the Hubbard model is mapped onto a free spinon Hamiltonian self-consistently coupled to a quantum XY lattice model. The (dis)ordering transition of the latter corresponds to the Mott transition, in analogy with the superfluid-insulator transition of the bosonic Hubbard model. Because spatial correlations are now included, we find important modifications to the DMFT picture. In particular, the effective mass remains finite at the transition, due to the quenching of the macroscopic entropy by magnetic correlations in the Mott phase. Importantly, low-energy charge collective modes are shown to affect the opening of the Mott gap, which now develops in a continuous manner, so that the separation of energy scales found in DMFT only holds in an approximate manner. These simple results can be considered as deviations from the DMFT predictions that could possibly be observed in photoemission experiments Perfetti et al. (2003). However, restoration of the local gauge symmetry should occur due to fluctuations beyond the mean-field approximation, possibly modifying the latter result in a qualitative way. The paper is organised as follows. In Section II we introduce the exact slave-rotor description of a simple atomic level with orbital degeneracy, and show that an approximate treatment of the local constraint is sufficient to describe correctly the full Coulomb staircase, as well as one-particle spectra. Then, in Section III, we develop the simplest (local) mean-field treatment of both the Anderson and Hubbard models, and in the latter case, study the multiorbital Mott transition. Finally, spatial fluctuations beyond DMFT are included in Section IV, with an emphasis on the behavior of the effective mass and the excitation spectrum. The conclusion presents several possible applications and extensions of our formalism, and also discusses some of the open issues raised by our results.
II Rotor representation of interacting fermions

II.1 Slave-rotor representation

In Ref. Florens and Georges (2002) (see also Florens (2003)), we introduced a representation of the Hilbert space of fermions in terms of a collective phase degree of freedom $\theta$, conjugate to the total charge, and of auxiliary fermions $f_\sigma$. The spin/orbital index $\sigma$ runs over $N$ values (e.g. $N=2$ for spin-$1/2$ electrons). In the following, we consider only interactions which have the full spin/orbital symmetry. Let us consider the Hamiltonian corresponding to a single "atomic level", in the presence of a local Hubbard repulsion:
$$H_{\rm at}=\epsilon_0\sum_\sigma d^\dagger_\sigma d_\sigma+\frac{U}{2}\Big(\sum_\sigma d^\dagger_\sigma d_\sigma-\frac{N}{2}\Big)^2.\qquad(1)$$
The crucial point is that the spectrum of the atomic Hamiltonian (1) depends only on the total fermionic charge $Q$ and has a simple quadratic dependence on $Q$:
$$E_Q=\epsilon_0\,Q+\frac{U}{2}\Big(Q-\frac{N}{2}\Big)^2.\qquad(2)$$
There are $2^N$ states, but only $N+1$ different energy levels, with degeneracies $\binom{N}{Q}$. In conventional slave-boson methods Kotliar and Ruckenstein (1986); Florens et al. (2002), a bosonic field is introduced for each atomic state (along with spin-carrying auxiliary fermions). Hence, these methods are not describing the atomic spectrum in a very economical manner, and lead to very tedious calculations when the orbital degeneracy becomes large, even at the mean-field level Hasegawa (1997); Fresard and Kotliar (1997); Bünemann et al. (1998); Florens et al. (2002). However, we stress that the extensive number of degrees of freedom necessary in those other approaches can become useful when the symmetry is broken, either by magnetic order or by crystal fields.

The spectrum of (1) can actually be reproduced by introducing, besides the set of $N$ auxiliary fermions $f_\sigma$, a single additional variable, namely the angular momentum $\hat{L}=-i\,\partial/\partial\theta$ associated with a quantum rotor $\theta$, an angular variable in $[0,2\pi]$. Indeed, the energy levels (2) can be obtained using the following Hamiltonian:
$$H=\epsilon_0\sum_\sigma f^\dagger_\sigma f_\sigma+\frac{U}{2}\hat{L}^2.\qquad(3)$$
A constraint must be imposed, which ensures that the total number of fermions is equal to the angular momentum (up to a shift, in our conventions):
$$\hat{L}=\sum_\sigma f^\dagger_\sigma f_\sigma-\frac{N}{2}.\qquad(4)$$
This restricts the allowed values of the angular momentum to be $l=-N/2,\dots,N/2$, while in the absence of any constraint $l$ can be an arbitrary (positive or negative) integer. The spectrum of (3) is $\epsilon_0 Q+\frac{U}{2}l^2$, with $l=Q-N/2$ thanks to (4), so that it coincides with (2). It is easily checked that the full Hilbert space is correctly described as:
$$|\sigma_1\cdots\sigma_Q\rangle_d\;\leftrightarrow\;|\sigma_1\cdots\sigma_Q\rangle_f\otimes|l=Q-N/2\rangle_\theta,\qquad(5)$$
in which $|\sigma_1\cdots\sigma_Q\rangle_f$ denotes the antisymmetric fermion state built out of the occupied $f$-orbitals, and $|l\rangle_\theta$ denotes the quantum rotor eigenstate with angular momentum $l$, i.e. $\langle\theta|l\rangle=e^{il\theta}/\sqrt{2\pi}$. For $N=2$, this corresponds to: $|0\rangle_d\leftrightarrow|0\rangle_f\,|l=-1\rangle_\theta$, $|\sigma\rangle_d\leftrightarrow|\sigma\rangle_f\,|l=0\rangle_\theta$, and $|\!\uparrow\downarrow\rangle_d\leftrightarrow|\!\uparrow\downarrow\rangle_f\,|l=+1\rangle_\theta$. The creation of a physical electron with spin $\sigma$ is associated with the action of $f^\dagger_\sigma$ on such a state, as well as raising the total charge (angular momentum) by one unit. Since the raising operator is $e^{i\theta}$, this leads to the representation:
$$d^\dagger_\sigma=f^\dagger_\sigma\,e^{i\theta}.\qquad(6)$$
The key advantage of the quantum rotor representation is that the original quartic interaction between fermions has been replaced in (3) by a simple kinetic term for the phase field, $\frac{U}{2}\hat{L}^2=-\frac{U}{2}\,\partial^2/\partial\theta^2$. We point out here that a similar phase representation was developed before in the context of Coulomb blockade in mesoscopic systems, see e.g. grabert (1994); Schoeller and Schön (1994); Lebanon et al. (2003). However, the present work and our previous paper Florens and Georges (2002) present the first applications of the rotor technique in the context of strongly correlated lattice models. In particular, the question of quasiparticle coherence, which is crucial to the description of a Fermi liquid, cannot be investigated seriously with a phase-only description Herrero et al. (1999), as shown in Florens et al. (2003).
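In the angular-momentum basis $|l\rangle$, with $\langle\theta|l\rangle=e^{il\theta}/\sqrt{2\pi}$, the rotor operators have very simple matrix representations: $\hat{L}$ is diagonal and $e^{i\theta}$ is the shift (raising) matrix. The following small sketch of ours, using a truncated basis $|l|\le l_{\rm max}$, verifies the commutator $[\hat{L},e^{i\theta}]=e^{i\theta}$ that makes $e^{i\theta}$ a charge-raising operator; the same matrices are the building blocks for the numerics further below.

```python
import numpy as np

lmax = 6
ls = np.arange(-lmax, lmax + 1)

L = np.diag(ls.astype(float))                  # angular momentum, diagonal in |l>
raise_op = np.diag(np.ones(2 * lmax), k=-1)    # e^{i theta}: maps |l> to |l+1>

comm = L @ raise_op - raise_op @ L
assert np.allclose(comm, raise_op)             # [L, e^{i theta}] = e^{i theta}
```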
In this perspective, the slave rotor should be seen as a natural extension (and simplification) of the usual slave-boson techniques Coleman (1984); Kotliar (1991) in the context of a finite but orbitally symmetric Coulomb repulsion. In principle, it can also be applied to systems with long-range interactions Florens et al. (2004); Florens (2003).

II.2 Treating the constraint on average: atomic limit

In the following, we will study different kinds of mean-field approximations based on this slave-rotor representation. A common trait of these mean-field approximations is that the number constraint (4) will be treated on average. This is equivalent to treating the constraint in a "grand-canonical" ensemble, which would of course be exact in the limit of a large spin/orbital degeneracy $N$. In this section, we investigate the accuracy of this approximation for the atomic Hamiltonian (1), for finite values of $N$.

II.2.1 Coulomb staircase: occupancy vs. $\epsilon_0$

Let us first consider the dependence of the average occupancy $\langle Q\rangle$ on the position of the atomic level $\epsilon_0$, which reads:
$$\langle Q\rangle=\frac{1}{Z_{\rm at}}\sum_{Q=0}^{N}\binom{N}{Q}\,Q\,e^{-\beta E_Q},\qquad(7)$$
with $Z_{\rm at}=\sum_Q\binom{N}{Q}e^{-\beta E_Q}$. In the limit of zero temperature, the dependence of $\langle Q\rangle$ on $\epsilon_0$ is the "Coulomb staircase" displayed in Fig. 1.

Figure 1: Coulomb staircase in the atomic limit for the case of two orbitals, $N=4$.

When treating the constraint on average, a Lagrange multiplier $h$ is introduced which is conjugate to (4), and one optimizes over $h$ instead of fully integrating over it. This amounts to considering the following effective Hamiltonians:
$$H_f=(\epsilon_0-h)\sum_\sigma f^\dagger_\sigma f_\sigma,\qquad(8)$$
$$H_\theta=\frac{U}{2}\hat{L}^2+h\hat{L}.\qquad(9)$$
The Lagrange multiplier is determined by the average constraint equation:
$$\langle\hat{L}\rangle_\theta=\sum_\sigma\langle f^\dagger_\sigma f_\sigma\rangle_f-\frac{N}{2},\qquad(10)$$
in which $\langle\hat{L}\rangle_\theta$ is the average of $\hat{L}$ in the Hamiltonian (9), and the fermionic average involves a simple Fermi factor $n_F(\epsilon_0-h)$. Solving (10) for $h$ as a function of $\epsilon_0$ and temperature yields the dependence of the total charge within this approximation:
$$\langle Q\rangle=N\,n_F(\epsilon_0-h).\qquad(12)$$
We need to compare this approximation to the exact result (7) in the atomic limit. A graphical representation (Fig. 2) is useful in order to understand the solution of (10).

Figure 2: Graphical solution of the average constraint equation (10). The intersect (cross) moves exactly along the Coulomb staircase shown in Fig. 1.

At $T=0$, one finds that the exact dependence of the average charge upon $\epsilon_0$ is correctly reproduced by our approximation, corresponding to the "Coulomb staircase":
$$\langle Q\rangle=Q\quad\text{as long as}\quad E_{Q-1}>E_Q<E_{Q+1}.\qquad(13)$$
Note that $h$ vanishes linearly with temperature: this is why the full Coulomb staircase can be reproduced with a single Fermi factor in (12). At finite temperature, our approximation does not coincide with the exact result for $\langle Q\rangle$, but deviations are only sizeable for temperatures comparable to $U$, which is not a severe limitation in practice.

II.2.2 Spectral functions

We now study the consequences of the approximate treatment of the constraint for the Green's function and spectral function. Following (8,9), the quantum rotor and auxiliary fermion degrees of freedom are described by two independent Hamiltonians, so that the Green's function of the physical electron factorizes into:
$$G_d(\tau)=G_f(\tau)\,G_\theta(\tau),\qquad(14)$$
with $G_\theta(\tau)=\langle e^{i\theta(\tau)}e^{-i\theta(0)}\rangle$. Equivalently, the physical electron spectral function is given by a convolution of the auxiliary-fermion and rotor spectral functions (15). Let us consider $T=0$, and $\epsilon_0$ in the range corresponding to the plateau of charge $Q$ in the Coulomb staircase. The ground-state energy is $E_Q$ and its degeneracy is $\binom{N}{Q}$. The two excited states obtained by adding or removing a particle correspond to transition energies $E_{Q+1}-E_Q$ and $E_Q-E_{Q-1}$. When acting with $d^\dagger_\sigma$ on the ground state, only those ground-state components which do not already contain $\sigma$ contribute, and there are $\binom{N-1}{Q}$ such components.
Similarly, when acting with $d_\sigma$, only the components in which $\sigma$ is occupied contribute, and there are $\binom{N-1}{Q-1}$ of them. From these considerations, we see that the exact spectral function reads, at $T=0$:
$$\rho_d(\omega)=\Big(1-\frac{Q}{N}\Big)\,\delta\big(\omega-(E_{Q+1}-E_Q)\big)+\frac{Q}{N}\,\delta\big(\omega-(E_Q-E_{Q-1})\big).\qquad(16)$$
These two atomic transitions are the precursors of the Hubbard bands in the solid. Note that they have unequal weights, except at half-filling $Q=N/2$. At finite temperature, additional peaks appear in general, corresponding to transitions between two excited states (with exponentially small weight for $T\ll U$).

Remarkably, the expressions (14,15), in which the quantum rotor and auxiliary fermions are treated as decoupled, do reproduce this exact result at $T=0$. The easiest way to see this is to notice that, at $T=0$, the auxiliary-fermion Green's function has a single pole at the shifted level position, while the rotor Green's function takes one constant value for $\tau>0$ and another for $\tau<0$. Substituting into (14), this corresponds to the exact expression (16). Alternatively, one can use the expressions of the spectral functions in (15), keeping in mind the limiting values of the Fermi factors as $T\to 0$. Again, deviations between the approximate treatment and the exact results are found at finite temperature, but remain small for $T\ll U$. Let us emphasize that, because the rotor Green's function is a continuous function at $\tau=0$, with $G_\theta(0)=1$, the factorized approximation (14) ensures that the physical ($d$-electron) spectral function is correctly normalized, with total spectral weight equal to unity.

To summarize, we have found that treating the constraint on average reproduces accurately the atomic limit at $T=0$, both regarding the Coulomb staircase dependence of $\langle Q\rangle$ vs. $\epsilon_0$, and regarding the spectral function. This is a key point for the methods introduced in this article, which allows them to describe reasonably the high-energy features of strongly correlated systems.

II.2.3 Functional integral formulation

We briefly introduce here a functional integral formalism for the $\theta$ and $f_\sigma$ degrees of freedom, and derive the action associated with (1). This is simply done by switching from phase and angular-momentum operators to fields depending on imaginary time $\tau\in[0,\beta]$, with $\beta=1/T$. The action is constructed from (3), and an integration over the angular-momentum field is performed. It is also necessary to introduce a Lagrange multiplier $h$ in order to implement the constraint (4). We note that, because of charge conservation on the local impurity, $h$ can be chosen to be independent of time. This leads to the following expression of the action:
$$S=\int_0^\beta d\tau\,\Big[\sum_\sigma\bar f_\sigma(\partial_\tau+\epsilon_0-h)f_\sigma+\frac{(\partial_\tau\theta)^2}{2U}+\frac{ih}{U}\,\partial_\tau\theta\Big]+\beta\Big(\frac{hN}{2}-\frac{h^2}{2U}\Big).\qquad(17)$$
The constraint is implemented exactly provided $h$ is integrated over. The above approximation amounts to evaluating this integral by a saddle-point approximation over $h$, and the saddle point is found to be on the real axis, with a static, real value of $h$. Finally, let us mention that, in a previous publication Florens and Georges (2002), we have explained in detail the connection between the rotor construction and the Hubbard-Stratonovich decoupling of the interaction in the charge channel.

III The simplest mean-field approximation

In this section, we introduce a very simple mean-field approximation based on the slave-rotor variables. This approximation is similar in spirit to the condensation of slave bosons in conventional slave-boson mean-field theories. We illustrate this approximation on two examples: the Anderson impurity model and the Hubbard model.

III.1 Anderson impurity model

The Anderson impurity model describes a local orbital hybridized with a conduction electron bath:
$$H=\sum_{k\sigma}\epsilon_k c^\dagger_{k\sigma}c_{k\sigma}+V\sum_{k\sigma}\big(c^\dagger_{k\sigma}d_\sigma+{\rm h.c.}\big)+\epsilon_0\sum_\sigma d^\dagger_\sigma d_\sigma+\frac{U}{2}\Big(\sum_\sigma d^\dagger_\sigma d_\sigma-\frac{N}{2}\Big)^2.\qquad(18)$$
This Hamiltonian can be rewritten in terms of the slave-rotor and auxiliary fermion variables:
$$H=\sum_{k\sigma}\epsilon_k c^\dagger_{k\sigma}c_{k\sigma}+V\sum_{k\sigma}\big(c^\dagger_{k\sigma}f_\sigma e^{-i\theta}+{\rm h.c.}\big)+\epsilon_0\sum_\sigma f^\dagger_\sigma f_\sigma+\frac{U}{2}\hat{L}^2,\qquad(19)$$
submitted to the constraint (4).
The simplest possible approximation is to decouple the rotor and fermion variables, leading to two effective Hamiltonians:
$$H_\theta=\frac{U}{2}\hat{L}^2+h\hat{L}-2K\cos\theta,\qquad(20)$$
$$H_f=\sum_{k\sigma}\epsilon_k c^\dagger_{k\sigma}c_{k\sigma}+\tilde V\sum_{k\sigma}\big(c^\dagger_{k\sigma}f_\sigma+{\rm h.c.}\big)+(\epsilon_0-h)\sum_\sigma f^\dagger_\sigma f_\sigma.$$
The parameters $\tilde V$, $K$ and $h$ in these expressions are determined by the coupled self-consistent equations:
$$\tilde V=V\,\langle e^{i\theta}\rangle_\theta,\qquad(21)$$
$$K=-V\sum_{k\sigma}\langle c^\dagger_{k\sigma}f_\sigma\rangle_f,\qquad(22)$$
$$\langle\hat{L}\rangle_\theta=\sum_\sigma\langle f^\dagger_\sigma f_\sigma\rangle_f-\frac{N}{2},\qquad(23)$$
in which the averages are calculated with the effective Hamiltonians above.

Let us first examine the particle-hole symmetric case ($\epsilon_0=0$), in which the solution of (23) is $h=0$. The rotor sector is described by the effective Hamiltonian (20), corresponding to the Schrödinger equation:
$$\Big[-\frac{U}{2}\frac{\partial^2}{\partial\theta^2}-2K\cos\theta\Big]\psi(\theta)=E\,\psi(\theta).\qquad(24)$$
For $K=0$, the ground-state wave function is the state $l=0$, uniform on $[0,2\pi]$, corresponding to maximal phase fluctuations and thus to the absence of charge fluctuations. This is associated with the atomic limit, as explained above. As soon as the hybridization is non-zero, we shall see that $K\neq 0$. The wave function is then maximum for $\theta=0$, and $e^{i\theta}$ acquires a non-zero expectation value. This corresponds to a non-zero effective hybridization $\tilde V$, so that the auxiliary-fermion effective Hamiltonian is that of a resonant level model. This captures the physics of the Kondo effect, and the corresponding Kondo resonance at the Fermi level.

Even though $V$ is a singular perturbation on the atomic limit, its effect can be easily understood analytically in the present framework by treating the potential energy $-2K\cos\theta$ perturbatively. To first order in $K/U$, the ground-state wave function reads:
$$\psi_0(\theta)=\frac{1}{\sqrt{2\pi}}\Big(1+\frac{4K}{U}\cos\theta\Big).\qquad(25)$$
This yields:
$$\langle e^{i\theta}\rangle=\frac{4K}{U}.\qquad(26)$$
Hence, using (21), one obtains $\tilde V=4KV/U$, which yields the following self-consistent equation for the effective hybridization when substituted into (22):
$$\tilde V=-\frac{4V^2}{U}\sum_{k\sigma}\langle c^\dagger_{k\sigma}f_\sigma\rangle_f.\qquad(27)$$
The right-hand side of this equation is easily evaluated for the resonant level model. For simplicity, we consider a flat conduction band of half-width $D$ and d.o.s. $\rho_0$, and focus on the universal regime $\tilde\Gamma\ll D$, with $\tilde\Gamma=\pi\rho_0\tilde V^2$. To dominant order, (27) yields a Kondo scale, i.e. a width of the Kondo resonance, which is exponentially small in $U/\Gamma$ when $U\gg\Gamma$ (here $\Gamma=\pi\rho_0V^2$ is the bare hybridization width). This coincides with the exact expression Hewson (1996).

The local orbital spectral function obtained from (14) reads:
$$\rho_d(\omega)=Z\,\frac{\tilde\Gamma/\pi}{(\omega-\tilde\epsilon_0)^2+\tilde\Gamma^2}+\rho_d^{\rm inc}(\omega),\qquad(30)$$
with $Z=\langle e^{i\theta}\rangle^2$. The first term in this expression is the Kondo resonance, and carries a spectral weight $Z$. It satisfies the Friedel sum rule. Away from the particle-hole symmetric case ($\epsilon_0\neq 0$), the location of the resonance is set by $\tilde\epsilon_0=\epsilon_0-h$, which is the renormalized impurity level familiar from conventional slave-boson theories. The rotor approximation does conserve total spectral weight, and therefore yields an incoherent contribution $\rho_d^{\rm inc}$ to the spectral function, with a weight $1-Z$. This incoherent contribution is correctly centered around the atomic transitions, as explained above. However, the width of these Hubbard bands is incorrectly described by the simple approximation presented here, in which phase fluctuations are underestimated at short times: the Hubbard bands come out much too narrow. We note however that conventional slave-boson approximations with a condensed boson neglect the Hubbard bands altogether at the saddle-point level, and therefore the present approximation, simplified as it may be, is preferable in this respect. An improved method for the treatment of the phase degrees of freedom, leading to a much more accurate description of the Hubbard bands, has been discussed in previous publications Florens and Georges (2002); Florens et al. (2003). This method consists in a set of coupled integral equations for the Green's functions of the auxiliary fermion and of the slave rotor, in the spirit of the non-crossing approximation.
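The rotor Schrödinger equation (24) is a Mathieu-type problem that is trivial to solve numerically in the truncated $|l\rangle$ basis introduced earlier. The sketch below is ours ($K$ and $U$ are free parameters here, not the self-consistent values); it diagonalizes $H_\theta=\frac{U}{2}\hat{L}^2-K(e^{i\theta}+e^{-i\theta})$ and compares $\langle e^{i\theta}\rangle$ with the first-order result $4K/U$ of Eq. (26).

```python
import numpy as np

def rotor_order_parameter(U, K, lmax=20):
    # H = (U/2) L^2 - K (e^{i theta} + e^{-i theta}) in the |l| <= lmax basis.
    ls = np.arange(-lmax, lmax + 1)
    shift = np.diag(np.ones(2 * lmax), k=-1)   # e^{i theta}
    H = 0.5 * U * np.diag(ls.astype(float)**2) - K * (shift + shift.T)
    _, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    return gs @ shift @ gs                     # <e^{i theta}> in the ground state

U = 5.0
for K in (0.05, 0.2, 0.5):
    print(K, rotor_order_parameter(U, K), 4 * K / U)   # agreement at small K
```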
III.2 Hubbard model

III.2.1 Slave-rotor formulation

In this section, we consider the Hubbard model:
$$H=-t\sum_{\langle ij\rangle\sigma}\big(d^\dagger_{i\sigma}d_{j\sigma}+{\rm h.c.}\big)-\mu\sum_{i\sigma}d^\dagger_{i\sigma}d_{i\sigma}+\frac{U}{2}\sum_i\Big(\sum_\sigma d^\dagger_{i\sigma}d_{i\sigma}-\frac{N}{2}\Big)^2,\qquad(31)$$
which can be rewritten in terms of the rotor and auxiliary fermion variables as:
$$H=-t\sum_{\langle ij\rangle\sigma}\big(f^\dagger_{i\sigma}f_{j\sigma}e^{i(\theta_i-\theta_j)}+{\rm h.c.}\big)-\mu\sum_{i\sigma}f^\dagger_{i\sigma}f_{i\sigma}+\frac{U}{2}\sum_i\hat{L}_i^2.\qquad(32)$$
Note that, in this context, $\mu$ is the chemical potential controlling the average density per site. Let us make a first approximation, which consists in decoupling the rotor and fermion variables on links (besides treating the constraint on average, as above); see Kotliar and Liu (1988) for a similar approach in the case of the t-J model. We then obtain two effective Hamiltonians:
$$H_f=-t_{\rm eff}\sum_{\langle ij\rangle\sigma}\big(f^\dagger_{i\sigma}f_{j\sigma}+{\rm h.c.}\big)-(\mu+h)\sum_{i\sigma}f^\dagger_{i\sigma}f_{i\sigma},\qquad(33)$$
$$H_\theta=\frac{U}{2}\sum_i\hat{L}_i^2+h\sum_i\hat{L}_i-J_{\rm eff}\sum_{\langle ij\rangle}\cos(\theta_i-\theta_j),\qquad(34)$$
corresponding respectively to free fermionic spinons with an effective hopping $t_{\rm eff}$ and to a quantum XY model for the phase variables with effective exchange constant $J_{\rm eff}$. These effective parameters are determined by coupled self-consistent equations:
$$t_{\rm eff}=t\,\langle e^{i(\theta_i-\theta_j)}\rangle_\theta,\qquad J_{\rm eff}=2t\sum_\sigma\langle f^\dagger_{i\sigma}f_{j\sigma}\rangle_f,\qquad(35)$$
in which the average values are calculated with the effective Hamiltonians above. In addition, the Lagrange multiplier $h$ is determined from the constraint equation:
$$\langle\hat{L}_i\rangle_\theta=\sum_\sigma\langle f^\dagger_{i\sigma}f_{i\sigma}\rangle_f-\frac{N}{2}.\qquad(36)$$
Let us emphasize that, in the decoupling leading to (33,34), we have assumed that the average values $\langle f^\dagger_{i\sigma}f_{j\sigma}\rangle$ and $\langle e^{i(\theta_i-\theta_j)}\rangle$ on a given bond are both real. In fact, one could look for more general classes of solutions in which both averages are complex. This would correspond to solutions with orbital currents around a plaquette, as proposed by several authors Affleck and Marston (1989). Spontaneous orbital currents are very naturally described using the slave-rotor method, but will not be considered further in this paper, which aims at the general formalism.

III.2.2 Simplest mean-field

In the next section, we shall investigate some physical consequences of equations (33,34,35), which approximate the Hubbard model by free spinons coupled self-consistently to an XY model for the phase degrees of freedom. We point out that the decoupling between fermion and rotor degrees of freedom can be viewed as a controlled approximation corresponding to a large-N limit of a multichannel model, as detailed in Appendix A. Here, in the same spirit as above, we consider a further simplification, which consists in treating the quantum XY model at the mean-field level. In this framework, the phase degrees of freedom are described by a mean-field Hamiltonian of independent sites:
$$H_\theta^{\rm MF}=\sum_i\Big[\frac{U}{2}\hat{L}_i^2+h\hat{L}_i-K\big(e^{i\theta_i}+e^{-i\theta_i}\big)\Big],\qquad(37)$$
with $K=\frac{z}{2}J_{\rm eff}\langle\cos\theta\rangle$, $z$ being the lattice coordination number. Combining this with (35), and calculating the average values with the free-fermion Hamiltonian (33), we finally obtain the following self-consistency equations for the variational parameters $Z$ and $K$:
$$Z=\langle e^{i\theta}\rangle^2,\qquad K=z\,t\,\langle e^{i\theta}\rangle\sum_\sigma\langle f^\dagger_{i\sigma}f_{j\sigma}\rangle_f.\qquad(38,39)$$
Finally, the relation between the chemical potential and the average number of particles per site and color is given by:
$$n=\int d\epsilon\,D(\epsilon)\,n_F\big(Z\epsilon-\mu-h\big).\qquad(40)$$
In these expressions, $D(\epsilon)$ is the density of states (d.o.s.) of the band in the absence of interactions. The auxiliary fermion (quasiparticle) Green's function reads:
$$G_f(\mathbf{k},i\omega)=\frac{1}{i\omega-Z\epsilon_{\mathbf{k}}+\mu+h}.\qquad(41)$$
We recognize $Z$ as the quasiparticle weight, which also determines the quasiparticle mass enhancement $m^*/m=1/Z$. These two quantities are related because of the simple single-site approximation made here. At zero temperature, the number equation (40) implies that:
$$\mu+h=Z\,\mu_0,\qquad(42)$$
in which $\mu_0$ is the chemical potential of the non-interacting system such that $n=\int d\epsilon\,D(\epsilon)\,n_F(\epsilon-\mu_0)$. From (41), it is seen that the Fermi surface is located at $\epsilon_{\mathbf{k}}=\mu_0$, and thus (42) implies that the Luttinger theorem is satisfied. In fact, within this simple approximation, in which the self-energy is independent of momentum, the Fermi surface is unchanged by interactions altogether. The equations for $Z$ and $K$, at $T=0$ and for a given density, simplify into:
$$Z=\langle e^{i\theta}\rangle^2,\qquad K=N\,|\bar\epsilon|\,\langle e^{i\theta}\rangle,\qquad(43,44)$$
with $\bar\epsilon$ the average kinetic energy per electronic degree of freedom in the non-interacting model.
III.2.3 Mott transition and orbital degeneracy

We expect a Mott transition to occur at each commensurate filling $\langle Q\rangle=Q$ ($Q$ being an integer). This is associated with the vanishing of $\langle e^{i\theta}\rangle$, and therefore the above equations can be analyzed analytically close to the transition (where $K$ is small) from a perturbative analysis in $K$, similar to the one performed in Section III.1 for the Anderson model in the Kondo regime. The ground-state wave function of $H_\theta^{\rm MF}$ in the insulating phase ($K=0$) is $|l=Q-N/2\rangle$, with $E_l=\frac{U}{2}l^2+hl$. First-order perturbation theory in $K$ yields:
$$\langle e^{i\theta}\rangle=K\Big[\frac{1}{E_{l+1}-E_l}+\frac{1}{E_{l-1}-E_l}\Big].\qquad(45)$$
Since $\langle e^{i\theta}\rangle$ vanishes at the transition while the spinon bond average stays finite, it follows from (42) that $\mu+h\to 0$ there. For vanishing $Z$, the relation between $\mu$ and $Q$ is identical to that of the atomic limit, Eq. (13) established in the previous section, with $\mu$ playing the role of $-\epsilon_0$. Finally, combining (45) and (44), we obtain:
$$\frac{1}{N|\bar\epsilon|}=\frac{1}{E_{l+1}-E_l}+\frac{1}{E_{l-1}-E_l}.\qquad(46)$$
In this expression, the level differences $E_{l\pm 1}-E_l$ should be viewed as depending on the chemical potential according to the relations just given. This expression determines the boundary between the metallic and Mott insulating phases in the $(\mu,U)$ plane. It is depicted for the case $N=4$ (two orbitals with spin) in Fig. 3.

Figure 3: Phase diagram for $N=4$ (two orbitals) at $T=0$, as a function of the chemical potential $\mu$ and the interaction strength $U$. The three lobes correspond to the Mott insulator phases associated with half-filling ($Q=2$) and quarter- and three-quarter-filling ($Q=1,3$), respectively.

The condition of stationarity with respect to $\mu$ determines the tip of each insulating lobe, i.e. the critical coupling $U_c(Q)$ at which the insulating phase is entered as one increases $U$ for a fixed commensurate density. Differentiating (46), it is seen that this happens for $E_{l+1}-E_l=E_{l-1}-E_l$, i.e. precisely at the center of each step of the Coulomb staircase. The critical coupling thus reads:
$$U_c(Q)=4N\,|\bar\epsilon(Q)|.\qquad(47)$$
The phase diagram in the $(U,n)$ plane is depicted in Fig. 4 for a flat d.o.s. of half-width $D$.

Figure 4: Phase diagram in the $(U,n)$ plane. The Mott insulator lobes collapse to lines at commensurate fillings, when $U$ is larger than $U_c(Q)$ (shown as dots).

We see that the critical coupling is biggest at half-filling, which is expected since orbital fluctuations are largest in this case. This conclusion may depend on the precise shape of the d.o.s. however. The critical coupling increases linearly with the orbital degeneracy $N$. In fact, an analysis of the DMFT equations for large orbital degeneracy was made in Ref. Florens et al. (2002), and the exact behavior of the critical coupling at leading order in $N$ found there is correctly reproduced by the simple mean-field detailed here. It is also instructive to compare the present results with those of the multi-orbital Gutzwiller approximation Lu (1994). Our expression has the same behavior at large $N$, but yields in general a smaller critical coupling than the Gutzwiller one. For small orbital degeneracies, we believe (on the basis of, e.g., DMFT results) the Gutzwiller expression of $U_c$ to be quantitatively more accurate.

The slave-rotor mean-field equations are easily solved numerically by determining iteratively the parameters $Z$ and $K$. At each iteration, the spectrum of the single-rotor Schrödinger equation is computed (using e.g. a decomposition on the angular-momentum basis states $|l\rangle$); a sketch of such a solver is given below. In Fig. 5, the ground-state wave function $\psi_0(\theta)$ is displayed for several values of $U$ at half-filling.

Figure 5: Rotor ground-state wave function $\psi_0(\theta)$, with values of the local interaction $U$ ranging from $U=0$ (peaked curve) to $U=U_c$ at the Mott transition (flat curve).
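The iterative procedure just described can be condensed into a few lines, reusing `rotor_order_parameter` from the earlier sketch. The following is our own illustration for a flat band of half-width $D$ at half-filling, with the normalization $K=N|\bar\epsilon|\,\langle e^{i\theta}\rangle$ of Eq. (44) and $\bar\epsilon=-D/4$; only the qualitative behavior (a quasiparticle weight $Z$ decreasing with $U$ and vanishing at a finite $U_c$) should be read off.

```python
def hubbard_Z(U, D=1.0, N=2, tol=1e-10):
    # Fixed-point iteration for phi = <e^{i theta}>, with rotor field
    # K = N * |ebar| * phi; ebar = -D/4 is the kinetic energy per degree
    # of freedom of a flat band at half-filling.
    ebar = -D / 4
    phi = 1.0
    for _ in range(5000):
        new = rotor_order_parameter(U, N * abs(ebar) * phi)
        if abs(new - phi) < tol:
            break
        phi = new
    return phi**2   # quasiparticle weight Z

for U in (0.5, 1.0, 1.5, 2.5):
    print(U, hubbard_Z(U))   # Z drops and vanishes near U_c = 4 N |ebar| = 2 D
```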
The curves nicely illustrate how one goes from the insulator (in which case there are little charge fluctuations, and maximal phase fluctuations, so that the wave function is delocalized over all values of $\theta$) to the metal (in which case charge fluctuations become large at small $U$, and the wave function is peaked so as to limit phase fluctuations). The corresponding quasiparticle weight $Z$ is displayed in Fig. 6 as a function of $U$. The simple slave-rotor mean-field is compared to the DMFT result and to the Gutzwiller approximation (GA). It is seen that, close to the transition, the slave-rotor mean-field reproduces the DMFT answer more accurately than the GA. It is not very accurate at weak coupling however (even though $Z$ correctly goes to $1$ at $U=0$, it has an incorrect small-$U$ expansion). In fact, it is a quite general feature of this slave-rotor mean-field that the method is more accurate in strongly correlated regimes.

Figure 6: Quasiparticle weight $Z$ as a function of $U$ at half-filling: DMFT calculation (thin line), rotor mean-field theory (thick line) and Gutzwiller approximation (broken line).

In Fig. 7, we plot the number of particles as a function of the chemical potential for $N=4$. The value of $U$ has been chosen to be bigger than the critical couplings yielding an insulating state, for any commensurate filling. The curve illustrates the plateaus found at each commensurate filling, the central one (half-filling) being narrower (compare to Fig. 3). The effective mass enhancement $m^*/m=1/Z$ is also plotted in Fig. 8 as a function of the chemical potential, for a smaller value of $U$ such that a metallic phase is found at any filling. The curves illustrate how the largest effective mass enhancement is found at low and high fillings, with a comparatively smaller one close to half-filling (again, this conclusion depends on the shape of the d.o.s.).

Figure 7: Total occupancy $\langle Q\rangle$ as a function of $\mu$, for a value of $U$ larger than all critical interactions $U_c(Q)$, in the two-orbital case ($N=4$). The Mott insulators are seen here as charge plateaus.

Figure 8: Effective mass $m^*/m$ for $U$ below all Mott transitions, in the two-orbital case ($N=4$).

The description of the Mott transition obtained within this simplest mean-field has many common features with the Brinkman-Rice (BR) one Brinkman and Rice (1970). Indeed, the effective mass diverges at the transition and the quasiparticle residue vanishes, as in BR. There is one significant difference however, which is that in the present description, the optical gap of the insulator does not coincide with the chemical potential jump for infinitesimal doping away from a commensurate filling. Indeed, within this simple mean-field, the spectral function of the insulator is identical to that of the atomic limit (not surprisingly, the simple mean-field with only two variational parameters describes the charge fluctuations in the insulator in an oversimplified manner). As a result, the optical gap is simply the atomic one in our approach, and is therefore not critical at the Mott transition. In contrast, the chemical potential jump vanishes continuously at $U_c$: solving (46) for $\mu$ yields a jump that closes as $U\to U_c^+$. These features are very similar to those obtained within dynamical mean-field theory Georges et al. (1996). This is not surprising, since the single-site mean-field approximation to the XY model indeed becomes exact in the limit of infinite coordination of the lattice. Note however that this is not the case of the approximation (33)-(35), which consists in decoupling the rotor and fermion variables (see Section IV).
Within DMFT, the quasiparticle weight vanishes at a Brinkman-Rice-like critical point U_{c2}, while the optical gap of the insulator vanishes at a Hubbard-like critical point U_{c1} < U_{c2}. As a result, the strongly correlated metal close to the transition displays a clear separation of energy scales: the quasiparticle coherence scale is much smaller than the ("preformed") gap of the insulator. The simple mean field of this section is in a sense a somewhat extreme simplification of this picture, in which the gap retains its atomic-limit value (this is consistent with the known fact, Florens et al. (2002), that U_{c1} grows more slowly with the orbital degeneracy N than U_{c2}, and that the simple mean field becomes more accurate at large N).

IV Including spatial correlations and phase fluctuations

In this section, we go beyond the single-site mean-field approximation, and investigate the physical consequences of the approximate description of the Hubbard model introduced in Sec. III.2.1. This description, summarized by Eqs. (33)-(35), consists in a free-fermion model coupled self-consistently to a quantum XY model for the phase degrees of freedom.

IV.1 General considerations

Let us first emphasize some general aspects of this description, before turning to explicit calculations. The Hamiltonian for the phase degrees of freedom has two possible phases: a disordered phase without long-range phase order, and a long-range-ordered phase. At zero temperature, one expects a quantum phase transition from the ordered phase to the disordered phase as the ratio of the charging energy to the XY coupling is increased. Since the Green's function of the physical electrons is, within this approximation, the product of the spinon Green's function and the rotor correlation function, the quasiparticle weight Z, associated with the limit of large-distance and large-time separation (low frequency), is given by the rotor condensate fraction (summarized in symbols at the end of this subsection). Thus, the phase with long-range order for the rotors corresponds to the metal (Z ≠ 0), while the disordered phase corresponds to the Mott insulator (Z = 0). Obviously, the description of the Mott metal-insulator transition that follows is closely analogous to that of the superfluid-Mott insulator transition in the bosonic Hubbard model, Fisher et al. (1989); Sachdev (2000).

Two remarks about this description of the metal and of the insulator are in order. First, it is of course unphysical to think of a metal as having long-range phase coherence. Naturally, this is only true of the saddle-point approximation, in which the rotor and spinon degrees of freedom are entirely decoupled. Fluctuations will induce interactions between these degrees of freedom, restore inelastic scattering and thus destroy phase coherence. The absence of inelastic scattering at the saddle-point level is a well-known feature of slave-boson theories. Note furthermore that, despite the ordering of the rotors, the metallic phase becomes a superconductor only when the spinon pairing amplitude is also non-zero. Second, the insulator envisioned here is a non-magnetic insulator without any spin or translational symmetry breaking, i.e. a spin liquid. Even in the disordered phase, the rotor correlation on a given bond (e.g. nearest-neighbour) is non-zero (it corresponds to the energy density of the XY model). Therefore the effective spinon hopping remains non-zero in the insulating phase, so that the spinons have a Fermi surface (with Luttinger volume). This also implies that the spinon bandwidth remains finite through the Mott transition, and therefore that the effective mass does not diverge, despite the fact that Z vanishes. These last remarks apply to any finite dimension, but of course not to infinite dimensions. In this limit, the single-site mean field of the previous section applies, and the effective mass does diverge at the transition.
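To summarize the structure invoked in this subsection in generic slave-rotor notation (ours; the paper's symbols may differ):

```latex
c^{\dagger}_{i\sigma} = f^{\dagger}_{i\sigma}\, e^{i\theta_i},
\qquad
G_{c}(\mathbf r,\tau) \simeq G_{f}(\mathbf r,\tau)\,
\big\langle e^{i\theta_i(\tau)}\, e^{-i\theta_j(0)} \big\rangle,
\qquad
Z = \big|\langle e^{i\theta}\rangle\big|^{2}.
```

The long-distance, long-time limit of the rotor correlator is the condensate fraction, so Z is non-zero exactly when the rotors order, which is the metal/insulator criterion used in the text.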
Finally, we emphasize that the non-magnetic nature of the insulator is of course associated with the fact that the rotor degrees of freedom are associated with the charge and are not appropriate to properly describe spin ordering. Therefore, they are better suited to lattices with strong frustration (or models with large orbital degeneracy) in which a spin-liquid insulator is a realistic possibility. Also, because long-range order for the rotors corresponds to breaking a continuous O(2) symmetry, a Goldstone mode will be present in the ordered (metallic) phase. This mode is present in any finite dimension, but disappears in the infinite-dimensional limit. It corresponds to the zero-sound mode of the metal. As we shall see, these long-wavelength modes play an important role: they change the low-energy description of the transition as compared to the infinite-dimensional (DMFT) limit. As a result, the separation of energy scales does not apply in a strict sense (the "preformed" gap found within DMFT is filled up with spectral weight coming from these low-energy modes). As we shall see however, this spectral weight remains small in high dimensions (including three), so that an approximate separation of scales still applies.

IV.2 Sigma-model representation: saddle-point equations in the spherical limit

In order to perform explicit calculations with the quantum rotor Hamiltonian (34), we shall use an approximation that has proven successful in the context of quantum impurity models with slave rotors, Florens and Georges (2002); Florens et al. (2003). It consists in replacing the quantum rotor e^{iθ} by a complex bosonic field X and in treating the constraint |X|² = 1 on average. Alternatively, this can be viewed as extending the O(2) symmetry to O(M) and taking the large-M (spherical) limit. This is a well-known approximation to non-linear sigma models, Sachdev (2000), which preserves many qualitative features of the quantum phase transition. For details of the formalism in the slave-rotor context, see Ref. Florens and Georges (2002). In the following, we focus on the half-filled case (since we are mainly interested in the Mott transition), with a particle-hole symmetric d.o.s., so that the chemical potential can be set to zero. The spinon and rotor (now X-field) Green's functions read as Eqs. (52) and (53) [Footnote 1: A rescaling of the Coulomb repulsion was used in order to preserve the exact atomic limit, as discussed in Florens and Georges (2002).]. In these expressions, the frequencies are, respectively, fermionic and bosonic Matsubara frequencies, λ is a Lagrange multiplier associated with the averaged constraint, while two self-consistent parameters enter the effective spinon hopping and XY coupling constants. The self-consistent equations which determine these parameters and λ read as Eqs. (54)-(56). These expressions have been written here for a simple tight-binding band with nearest-neighbor hopping on a d-dimensional cubic lattice. As above, ρ(ε) denotes the band d.o.s., and D is the half-bandwidth. For simplicity, we have fixed the orbital degeneracy in these equations.

IV.3 The Mott transition: Mott-Hubbard meets Brinkman-Rice

In this section, we investigate the solution of these equations at zero temperature. This leads to a description of the finite-dimensional Mott transition that we analyze in detail.

IV.3.1 The insulating phase

Let us note first that Eq. (56) readily determines λ at zero temperature. From the form (53) of the X-field Green's function, one sees that the bosonic spectrum has a gap as long as the boson is not condensed. In this case, there is no long-range order for the phase degree of freedom, and this corresponds to the insulating phase.
The insulating gap follows, and we can rewrite Eqs. (54,55) as self-consistent equations for the gap and for the renormalization of the spinon hopping; at zero temperature these read Eqs. (59,60). These equations are valid in the insulating phase, where the gap is non-zero. The gap vanishes at a critical coupling U_c, obtained by setting it to zero in (59), Eq. (61). In this expression, the critical coupling corresponding to the infinite-dimensional limit appears, in agreement with expression (47) of the previous section. Note that, in the infinite-dimensional limit, one must scale the hopping with dimension in the usual way, so that the r.h.s. of (61) goes to unity. The integral in (61) is smaller than unity in general, so that U_c decreases as dimensionality is reduced. We also note that in one dimension this integral has a logarithmic singularity at the band edge, so that Eq. (61) yields a vanishing U_c, which is indeed the exact result for a half-filled Hubbard model in one dimension, Lieb and Wu (1968) [Footnote 2: In the spherical X-field approximation, a finite critical coupling is not possible in one dimension because of the Mermin-Wagner theorem. Had we kept the O(2) rotor variable, we would be able to correctly describe the Berezinskii-Kosterlitz-Thouless nature of the Mott transition, even in the approximation of decoupled spinons and rotors. A finite critical coupling is possible in one dimension for larger orbital degeneracy, see e.g. the analysis in Assaraf et al. (1999).]. Subtracting Eq. (59) from the same equation with a vanishing gap (which defines U_c), one obtains a relation between the gap and the distance to the critical coupling. The expansion of this expression for a small gap depends on dimensionality: above the upper critical dimension the integral is convergent at the band edge (recalling the behavior of ρ(ε) near the bottom of the band), whereas below it the small-gap expansion is singular. This analysis finally leads to the behavior of the gap close to the critical point. Hence we find that the exponent changes from its mean-field value above the upper critical dimension (as found e.g. in the single-site mean field of the previous section and the Gutzwiller approximation) to a non-mean-field exponent below it; d = 3 corresponds to the upper critical dimension in this description of the Mott transition (logarithmic corrections are found in that case). Below the upper critical dimension, the exponent corresponds to that of the large-M limit of the quantum O(M) model in d dimensions, i.e. to that of the spherical model.
Do many-particle neutrino interactions cause a novel coherent effect?

Alexander Friedland, Theoretical Division, T-8, MS B285, Los Alamos National Laboratory, Los Alamos, NM 87545; Cecilia Lunardini

We investigate whether coherent flavor conversion of neutrinos in a neutrino background is substantially modified by many-body effects, with respect to the conventional one-particle effective description. We study the evolution of a system of interacting neutrino plane waves in a box. Using its equivalence to a system of spins, we determine the character of its behavior completely analytically. We find that, if the neutrinos are initially in flavor eigenstates, no coherent flavor conversion is realized, in agreement with the effective one-particle description. This result does not depend on the size of the neutrino wavepackets and therefore has a general character. The validity of the several important applications of the one-particle formalism is thus confirmed.

preprint: hep-ph/0307140, LA-UR-03-3847

1 Introduction

The flavor composition of a neutrino system may be modified as a result of the interactions between the neutrinos and a background medium. The efficiency of this process depends, in addition to the properties of the background itself (composition, density, temperature, etc.), on the coherent (or incoherent) character of the interaction. Here we study the case of coherent neutrino conversion in a background of neutrinos. The interaction of a neutrino beam with a neutrino background has the peculiar feature that, due to momentum exchange, a neutrino from the background may be scattered into the beam and vice versa. This differs from the case of scattering on electrons and/or nucleons, and implies that the effect of coherent scattering on neutrinos cannot be treated in analogy with that on ordinary matter.

In the literature, the effects of neutrino-neutrino coherent scattering on neutrino conversion have traditionally been treated following the pioneering work by Pantaleone [1, 2], Sigl & Raffelt [3], and McKellar & Thomson [4]. In these papers, a single-particle evolution equation for each neutrino is derived, which has the form of the usual oscillation equation for neutrinos in matter with an extra term designed to capture the effect of neutrino-neutrino scattering. According to this equation, the effect of the interaction with the neutrino background is significant if the neutrino-neutrino potential is at least comparable to the potential due to matter (zero-temperature and thermal terms) and/or to the vacuum oscillation terms, Eq. (1). Here n_ν is the (anti)neutrino number density, n_e the electron (positron) number density, E the neutrino energy, m_Z the Z-boson mass and G_F the Fermi constant.

The condition (1) may be satisfied in the early Universe close to the Big Bang Nucleosynthesis epoch (at temperatures of order an MeV) or just outside a supernova core. Indeed, in these environments the number density of neutrinos is comparable to, or larger than, that of electrons and nucleons. At the same time, incoherent scattering is negligible and neutrinos stream freely in the medium, affected by coherent scattering (refraction) only. The evolution of the neutrino flavor composition in both cases has been extensively studied using the one-particle equation. An important result [5, 6, 7, 8, 9] is the strong bound on the chemical potential of non-electron relic neutrinos that follows from studying neutrino oscillations in the early Universe, with important implications for BBN.
Other important applications include hypothetical oscillations between active and sterile neutrino states that may have generated a lepton asymmetry in the early Universe [10, 11, 12, 13, 14] and possible effects of a sterile neutrino on the r-process in a supernova [15, 16, 17, 18, 19].

There is an important theoretical issue at the foundation of all of the mentioned analyses that, if unresolved, casts doubt on their validity. The issue is the existence and the range of validity of the description of neutrino self-refraction in terms of a set of one-particle equations. Generally speaking, the interactions between neutrinos in the ensemble may be expected to create quantum correlations or, in other words, "entangled" states of many neutrinos which are not products of individual wavefunctions. A priori, a system in which such entanglement exists requires a many-particle description. The importance of this point was recognized early on [2], but was not pursued further in the following years.

Recently, the problem of the possible existence and effects of quantum correlations has been investigated using two different approaches. The first, put forth in our recent paper [20], is to understand the flavor evolution of a many-neutrino system as an effect of the interference of many elementary neutrino-neutrino scattering events. Using this construction, we argued that, in the limit of an infinite number of neutrinos, the many-body description factorizes into one-particle equations. These equations coincide with those given in the previous literature up to terms which change the overall phases of neutrino states. While such terms can be potentially important in non-oscillation phenomena, they do not affect the flavor evolution of the system. As a particular example, we considered an ensemble of neutrinos which are initially in flavor eigenstates. Following our analysis, we found that in this system, somewhat counterintuitively, no coherent flavor conversion takes place. This finding is in agreement with the standard one-particle description of [1, 2, 3, 4].

The second approach, introduced by Bell et al. [21], is to consider the evolution of a system of interacting neutrino plane waves in a box. The idea is to study the properties of the many-body Hamiltonian of this system with the goal of determining the rate with which statistical equilibration is achieved. The analysis is applied to a system of neutrinos which are initially in flavor eigenstates. It is concluded that the time scale of the flavor evolution of a given neutrino in this system is inversely proportional to the neutrino density and to the Fermi constant, τ ∝ (G_F n_ν)⁻¹, which is characteristic of coherent flavor conversion. This result contrasts with the prediction of the one-particle description, and therefore has been considered as an indication of its breakdown and of the existence of new coherent effects due to entanglement. Clearly, this would have profound implications for the applications of neutrino-induced neutrino conversion to supernova physics and cosmology.

Given the importance of the subject, in this paper we present a further study of neutrino flavor conversion in a neutrino background. The aim is to clarify the origin of the contrast between Ref. [21] and the other literature, and to give a definite answer to the question of the importance of entanglement and its effects.
Since the suggested new effect comes from the simultaneous interaction between many neutrinos, we study, following [21], the Schrödinger equation of a system of many neutrino plane waves in a box. We consider the free-streaming regime and neglect vacuum oscillation terms, which can be included by a straightforward generalization of our results. We show that the problem is equivalent to that of a system of spins, for which, in the case of constant spin-spin coupling, the equilibration time can be determined completely analytically. For neutrinos initially in flavor eigenstates, in the limit of an infinite number of neutrinos, this solution shows no coherent conversion, in agreement with our earlier results.

The paper is organized as follows. In Sect. 2 we lay the foundations of our analysis by discussing the structure of the neutrino-neutrino interaction in flavor space and in real space. We also perform an elementary analysis of coherence in a system of several interacting neutrinos and use it to motivate the following study. In Sect. 3 the many-body approaches are discussed and the relevant results of our previous paper are reviewed. In Sect. 4 we formulate the problem of interacting neutrino plane waves in a box, present the full analytical solution and give its derivation. A discussion and conclusions follow in Sects. 5 and 6.

2 General considerations

2.1 The flavor structure of the interaction

In the range of neutrino energies that are relevant to the applications (far below the weak scale), the interaction between neutrinos is described by the low-energy neutral-current (NC) Hamiltonian, Eq. (2). Higher-order effects, such as those coming from the expansion of the propagator or particle production (bremsstrahlung), will be neglected in our analysis. A property of this Hamiltonian that will play a very important role later is its invariance under rotations of the flavor group. Because of this property, the interaction between pairs of neutrinos – viewed in flavor space – must be equivalent to the interaction between pairs of spins. The one-particle equations of [1, 2, 3, 4, 20] indeed preserve this structure of the problem and have the form of an interaction between spins. (In [22], this property of the equations has been used to give an elegant physical analogy designed to explain the "synchronized" oscillations of neutrinos in a neutrino background.) It would be logically inconsistent, however, to rely on this fact in the analysis in which these equations are being tested. Therefore, below we present an explicit proof of the equivalence, valid regardless of whether the interaction is coherent or not.

Let us consider the interaction of two neutrinos and, in the interests of clarity, for a moment suppress the Dirac indices on the fermions and the gamma matrices. Since the NC interactions conserve flavor, in the Hamiltonian (2) the flavor-space wavefunction of a given outgoing neutrino is equal to that of one of the incoming neutrinos. There are then two possible combinations, Eqs. (3) and (4). The term in Eq. (3) reduces to a flavor-diagonal product, while the term in Eq. (4) can be transformed using the well-known Fierz rearrangement property of the (4-dimensional) gamma matrices (in the notation convention of [23]). One gets Eq. (6). In this form, the equivalence between a system of neutrinos and a system of spins is manifest.
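The flavor-exchange structure can be stated compactly. With J_i the flavor "spin" operators of the two neutrinos (our normalization; the paper's overall constants may differ), the exchange part is the permutation operator:

```latex
P_{12} \;=\; \tfrac{1}{2} + 2\,\vec J_1\!\cdot\!\vec J_2
       \;=\; \big(\vec J_1 + \vec J_2\big)^2 - 1 .
```

P_{12} simply swaps the two flavor wavefunctions, which is exactly the flavor-exchange interaction described above, and writing it through (J₁ + J₂)² makes both the spin equivalence and the flavor-rotation invariance manifest.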
The complete flavor-space Hamiltonian for the interaction of two neutrinos, 1 and 2 (including both the contributions from (6) and (3)), is proportional to a constant plus the scalar product of the two spins, which coincides, up to a constant, with the square of the operator of the total angular momentum, (J₁ + J₂)², as expected. The strength of the interaction depends on the scattering angle in real space. Let us denote the states of the incoming neutrinos as |p₁⟩ and |p₂⟩ and those of the outgoing states as |p₁′⟩ and |p₂′⟩. The spatial dependence of the scattering amplitude is given by Eq. (8). If the neutrino wavepackets are sufficiently broad so that several wavepackets overlap, the preceding argument can be trivially generalized to show that the interactions between the "overlapping" neutrinos have the structure of the interactions between the corresponding spins. In the case of neutrino plane waves in a box, each spin interacts with all the others and the total Hamiltonian is the sum of the interactions of all pairs (see Sect. 4.3).

2.2 Coherence of neutrino-neutrino scattering: basic features

We next discuss general properties of coherent neutrino-neutrino scattering. By definition, scattering is coherent when the waves scattered by different particles in a target interfere with each other. When the scatterer is distinguishable from the incident particle, the coherence condition is satisfied when the incident particle is scattered forward. For other directions, scattering is incoherent (unless the particle spacings in the target satisfy particular conditions, e.g. periodicity). In contrast, for neutrino-neutrino scattering, scattering can be coherent when the momentum of the scattered neutrino coincides with the momentum of one of the incident neutrinos. The corresponding values of the scattering angle are zero and the angle between the two initial-state neutrinos.

For the purpose of studying coherent effects, one may replace the full Hamiltonian (2) with a "toy" Hamiltonian which restricts scattering angles to these two values only, for which coherent interference may occur. This simplified interaction can be viewed as a flavor exchange between neutrinos which do not change their momenta. We emphasize that, while all coherent effects in the system are fully captured in this way, most of the incoherent effects are left out. Hence, one must be careful while interpreting incoherent effects found in this framework, as will be seen later. Following [2], we write the reduced Hamiltonian for a pair of interacting neutrinos with given momenta in the form of Eq. (9), where V is the normalization volume. The value of the angular factor follows from (8) (taking into account the anticommutativity of the spinors), while the flavor structure of the interaction is determined by Eq. (7).

The forward-scattering condition by itself, however, does not guarantee coherence. We can see that by considering the scattering of an electron neutrino on two "background" muon neutrinos. Suppose that the initial momenta of the muon neutrinos are orthogonal to that of the electron neutrino and, further, that the neutrino wavepackets are sufficiently narrow so that the two scattering events are independent, with each given by Eq. (9). Then, as a result of the interaction, we get the state of Eq. (10). Both the interaction time and the volume depend on the size of the wavepackets. Taking the wavepackets for simplicity to be spheres of size d, we can write Eq. (11). We notice that the expansion parameter is small as long as the size of the wavepackets is much greater than (100 GeV)⁻¹, i.e., the neutrino energy is well below the weak scale. This justifies neglecting terms of second order and higher in Eq. (10).
Let us compute the probability that, as a result of the interaction, the incident neutrino is converted into a muon neutrino. Introduce the "ν_μ number" operator that acts only on the state of the incident neutrino. Using the orthogonality of the flavor states, the probability in question follows directly. This shows that the process is incoherent: the probability we found coincides with the sum of the conversion probabilities for each scattering (a coherent process would instead yield the square of the sum of the amplitudes). As can be easily confirmed, as the number N of "background" neutrinos is increased, the probability of flavor conversion grows as N.

It can further be shown that if the background neutrinos are in a flavor superposition state, the probability of conversion is not, in general, equal to the sum of the probabilities for each scattering, as a straightforward calculation shows. Comparing this with the conversion probability for a single scattering event, we see that the conversion process is partially coherent, and the degree of coherence depends on the flavor admixture of the background state. A generalization to a larger number of scatterers is given in [20], which clearly shows the presence of partial coherence (the terms proportional to N²).

The above example shows that whether a neutrino undergoes coherent conversion or not depends on the flavor state of the many background neutrinos it interacts with. Moreover, as can be seen from Eq. (10), each neutrino-neutrino interaction creates an entangled state. Hence, the flavor evolution of a system of several interacting neutrinos demands a many-particle description. The question is: does the wavefunction of the system somehow factorize into individual one-particle wavefunctions in the limit of large N, and, if yes, under what conditions? Below we describe two recently proposed approaches to this problem.

3 Single- vs. many-particle description: two approaches

3.1 First approach: interference of many elementary scattering amplitudes

The first method was developed in [20]. The idea is to describe the flavor evolution in a neutrino gas by generalizing the procedure of adding elementary scattering amplitudes, as was outlined in the three-neutrino example of Sect. 2.2. Let us consider the interactions between two orthogonal neutrino beams. Once the results for this setup are understood, they can be straightforwardly generalized to more complicated systems. Let us assume that the neutrino wavepacket size is much smaller than the particle spacing, so that different neutrino-neutrino interactions can be treated independently using Eq. (9). As an initial configuration, we take the first beam to be made of electron neutrinos, and the second beam to be made of neutrinos in a flavor superposition state. It can be shown [20] that the result of the interaction during a small time interval can be written as a direct generalization of Eq. (10), Eqs. (15,16). Here a second superposition state is defined to be orthogonal to the first.

The evolution in Eqs. (15,16) cannot be described using only one-particle equations, since the final state in Eq. (16) is an entangled many-particle state. Nevertheless, as observed in [20], the coherent part of the evolution can be. Indeed, the last term in Eq. (16) represents an incoherent effect, since it contains sums of mutually orthogonal terms, and can be dropped. The sum of the remaining three terms is equal, to first order in the time interval, to a product of rotated single-particle states. Since for a small time interval the state of each neutrino undergoes a small rotation, and since nothing in this argument depends on the particular choice of the initial states, Eqs.
(15,16) specify how the states will rotate at any point in time. This observation means that, for each neutrino, one can write a differential equation describing its evolution for any — not necessarily small — time. This equation takes on the form given in [20], where, for generality, we restored the angular factor and summed over the possible angles. The flavor-space wavefunction of the given neutrino enters together with those of the other ("background") neutrinos, weighted by the number density of neutrinos whose momenta make a given angle with the given neutrino. This equation is similar to the one-particle evolution equations given in [1, 3, 4]. In fact, the only difference is in the last two terms in the square brackets, which have no effect on flavor evolution.

Let us discuss the particular case when the neutrinos at the initial time are in flavor eigenstates. We note that, according to Eq. (3.1), a coherent conversion effect in this case is absent. This result in turn can be traced to Eq. (16), where the second and the third terms vanish for initial flavor eigenstates. The only remaining source of flavor transformation is the last term in Eq. (16), which is nonzero for distinct flavors. Since this term represents incoherent scattering, for a neutrino traveling through a gas of neutrinos of opposite flavor the conversion probability at small times grows linearly with time.

At first, it is not obvious how this incoherent effect is related to the incoherent flavor exchange that occurs in a real neutrino ensemble. Indeed, as mentioned in the introduction to Eq. (9), by replacing the true interaction Eq. (2) with the forward-scattering Hamiltonian, we have truncated most of the incoherent effects. Nevertheless, a connection between the two can be made. The argument, which follows below, is very instructive and will prove helpful in the subsequent Sections.

When discussing the "forward scattering probability", we in fact mean the probability for the neutrino to be scattered into a certain (small) solid angle around the direction of the incoming neutrino (the probability of neutrino scattering at any exactly fixed angle is, of course, zero). A neutrino wavepacket with cross-section area Σ and wavelength λ is diverging due to diffraction into a solid angle of order λ²/Σ. The neutrino wave which is scattered into the same solid angle will interfere with the incident wave. With this in mind, let us consider the result of forward scattering of a given neutrino on other neutrinos in the medium. According to Eq. (9), the elementary amplitude is set by the wavepackets' longitudinal size and cross section. The resulting flavor conversion probability depends on whether the forward-scattering amplitudes add up coherently or incoherently: in the first case the amplitudes are summed and then squared, in the second the individual probabilities are summed. The number of interactions during a time t is proportional to the neutrino number density n and to t, and we find that in the case of coherent forward scattering the dependence on the size of the wavepacket drops out, Eq. (18). This is indeed seen from Eq. (3.1), which does not contain the size of the neutrino wavepacket. On the other hand, for the case of incoherent forward scattering, the cross section of the wavepacket does enter the final result, Eq. (19). The dependence on the size of the wavepacket looks puzzling. Nevertheless, it is easily understood once we recall that, by construction, the result in (19) represents the fraction of incoherent scattering events for which one of the two neutrinos is scattered into the forward diffraction cone.
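Schematically, in our notation (n the neutrino density, Σ the wavepacket cross section, a_k the elementary forward amplitudes), the two cases just described scale as:

```latex
P_{\rm coh}(t) \;\sim\; \Big|\sum_{k} a_k\Big|^2 \;\propto\; (G_F\, n\, t)^2 ,
\qquad
P_{\rm incoh}(t) \;\sim\; \sum_{k} |a_k|^2 \;\propto\; \frac{G_F^2\, n\, t}{\Sigma} .
```

Setting P ∼ 1 gives the equilibration-time estimates used below: the coherent time scale, of order (G_F n)⁻¹, is independent of the wavepacket size, while the incoherent one retains the dependence on Σ, in line with the discussion of Eqs. (18) and (19).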
This means that, up to a numerical coefficient, the probability of incoherent scattering in any direction has the form of a typical weak-interaction cross section times n t, exactly as one would expect. Thus, in summary, for the purpose of studying coherent effects, the full Hamiltonian (2) may be replaced by the forward-scattering flavor-exchange Hamiltonian, as done in Eq. (9), with the understanding that in the final result the terms that grow as t² represent the complete coherent effect, while the terms that grow as t correspond to only the fraction of the incoherent scattering that occurs in the forward diffraction cone.

3.2 Second approach: solving a many-particle equation for neutrino plane waves in a box

An alternative method of studying coherent processes in a neutrino gas was proposed in [21]. The idea is to consider a system of interacting neutrino plane waves in a box and study its flavor evolution, again with the restriction to forward scattering only. The neutrinos are taken to be initially in flavor states, and the goal is to test the existence of coherent conversion effects by looking at the rate of equilibration of the system.

The "forward scattering" Hamiltonian can be found by generalizing Eq. (9). A basis of states is formed by the initial configuration of neutrinos and all its possible permutations. Taking, e.g., fixed numbers of electron and muon neutrinos, the distinct basis states are counted by the corresponding binomial coefficient. The entries of the Hamiltonian are found as follows. (i) The diagonal entries receive contributions from flavor-blind scattering processes (both forward scattering and exchange between like flavors). (ii) Each off-diagonal entry receives a single contribution from the exchange process that connects the two corresponding basis states; if the two basis states are not connected by a single permutation, the corresponding off-diagonal entry vanishes. (iii) Each contribution carries the angular factor of the pair of interacting neutrinos. For example, for the case of two electron and two muon neutrinos there are six basis states, |eeμμ⟩, |eμeμ⟩, |eμμe⟩, |μeeμ⟩, |μeμe⟩, |μμee⟩, and the Hamiltonian is a 6×6 matrix built from these rules, with the diagonal involving the sum of all the distinct angular factors.

Let us discuss the meaning of the equilibration time. If conversion is coherent, it is defined as the time over which a neutrino of a given momentum is expected to change its flavor. The momentum itself stays unchanged over this time scale. If conversion is incoherent, it is defined as the time over which the neutrino momenta will be randomized (each neutrino can no longer be identified by its momentum). In the model we are considering, however, the definition of equilibration for the coherent process also applies to the incoherent process, since the interaction is chosen to preserve the neutrino momentum.

Let us determine what dependence of the equilibration time would be a sign of coherent or incoherent conversion. To do this, it is useful to repeat the exercise of Sect. 3.1 of estimating the conversion probability as a function of time. For neutrino plane waves in the box the following two modifications to the argument need to be made: first, the normalization volume becomes the volume of the box and, second, the interaction time becomes the actual time elapsed from the beginning of the evolution. The scattering amplitude is modified accordingly and, since each neutrino simultaneously interacts with all the others, the number of interactions is of order the total number of neutrinos N.
Using these ingredients, we find Eqs. (22,23). Here, as in the case of the interacting neutrino wavepackets, we find that in the coherent case the dependence on the size of the box cancels out (and the result is the same as Eq. (18)), while for incoherent scattering the size of the box enters the final result. Since, as was already stressed, the incoherent scattering result is, in some sense, artificial, there is no cause for alarm. Inverting Eqs. (22,23) and Eqs. (18,19), we find that the equilibration time for coherent scattering in both cases has the dependence τ ∝ (G_F n)⁻¹, Eq. (24), while for incoherent scattering one finds Eq. (25) for the case of neutrino wavepackets, and Eq. (26) for plane waves in a box. In what follows, we solve the evolution of the system of neutrinos in the box completely analytically. This gives the equilibration time, which we compare to Eqs. (24) and (26) to determine the character of the evolution.

4 Plane waves in a box: analytical treatment

4.1 Setup

As done in [21], we consider a system of N neutrino plane waves, of which a given number are initially in the state ν_e and the remaining ones are in the state ν_μ. We write this configuration of the system as Eq. (27). Here each neutrino wave is identified by its position in the list, which corresponds to a given momentum. The Hamiltonian for this system has the form described in Sect. 3.2. Since it is equivalent to the Hamiltonian of a system of spins (Sect. 2.1), in the following we adopt the terminology of angular momenta. As a convention, we identify the "up" (pointing along the z axis) state of a spin with ν_e and the "down" state with ν_μ. Thus, the initial state (27) can be expressed as Eq. (28). To make an analytical treatment possible, we set the angular factors equal for all neutrino pairs. The resulting Hamiltonian describes an ensemble of spins interacting with each other with equal strength. (This makes it different from the familiar Ising model.) This simplifying assumption does not change the coherent (or incoherent) character of the conversion effects, as also pointed out in [21]. With this assumption, the nonzero off-diagonal entries of the Hamiltonian become equal to a common coupling, while the diagonal entries become proportional to the number of contributing pairs. We recall (Sect. 3.1) that for the initial configuration (27) the single-particle formalism predicts no coherent effects, as can be seen from Eq. (3.1). We investigate if this conclusion is changed once the simultaneous interactions between all neutrinos are considered.

4.2 Results

Let us first display our results. Their derivation and a more detailed discussion are given in Sec. 4.3. The probability that the first spin — which is in the up state at t = 0 — is found in the same up state at time t can be calculated by a completely analytical procedure. It is given by the Fourier series of Eq. (30). The coefficients are the squares of the Clebsch-Gordan coefficients for the addition of two angular momenta, Eq. (31). We follow the standard notation, in which j₁ and j₂ denote the values of the angular momenta being added, m₁ and m₂ denote their projections along the z direction, and J and M are the corresponding values for the total spin.

Figure 1: The time evolution of the survival probability, for different values of N (numbers on the curves). The unit of the horizontal axis is chosen as in [21]. If the conversion were coherent, the curves should reach equilibration at the same point on this scale.

Figure 1 shows the survival probability, obtained by summing numerically the series (30). The horizontal axis represents time in units of the equilibration time predicted for coherent conversion (Eq. (24) for N neutrinos).
The curves refer to several values of N. Having the same choice of units, our plot can be directly compared to Fig. 2 of Ref. [21], where, however, smaller values of N were used. (In Ref. [21] a different form of the spin-spin (neutrino-neutrino) coupling was adopted, namely, the angular factors were randomly generated. This produces a numerical difference with respect to our results, but does not affect the conclusions about the equilibration time.) From Fig. 1, it is clear that the equilibration time is larger than the coherent estimate, and that the discrepancy grows with N. In the large-N limit the series (30) is well approximated by a closed-form function, Eq. (32), involving the imaginary error function erfi. We have checked that for the values of N shown in Fig. 1 the approximation is very good and the curves obtained using (32) are indistinguishable from those obtained from Eq. (30). From Eq. (32) we see that the characteristic equilibration time grows as √N relative to the coherent estimate. This scaling of the equilibration time can be clearly seen in Fig. 1. It is precisely (up to numerical factors) the form (26) expected if the equilibration process is incoherent.

4.3 Derivation of results

4.3.1 The eigenvalues of the Hamiltonian

From Eqs. (7) and (29) it follows that for each pair i and j the interaction is given by the scalar product of the corresponding spins and, therefore, the Hamiltonian for the whole system is Eq. (35), where the summation is done over all pairs. We notice that this Hamiltonian can be related to the operator of the square of the total angular momentum of the system, Eq. (36). By comparing Eqs. (35) and (36) we find Eq. (37), which has eigenvalues labeled by the total spin J. Since the flavor evolution will depend only on the differences of the eigenvalues (as will be shown later), in the following we will omit the second term in Eq. (38) and work with the J(J+1) part alone (a numerical check of this eigenvalue structure is given after the definitions below).

A given value of J in general corresponds to more than one state. This degeneracy can be seen by simply counting the states: for our specific setup there are many more basis states than there are different values of J. In general, therefore, an orthonormal basis of eigenstates of the Hamiltonian carries an extra index labelling eigenstates with the same value of J. As an illustration of the present discussion, we give the Hamiltonian for the four-neutrino case of Sect. 3.2. Its eigenvalues, which are degenerate as expected, correspond to the values J = 0, 1, 2, as can be seen from Eq. (38).

4.3.2 The Fourier series

Let us start with proving Eq. (30). In the interest of clarity, we first display some definitions. Let us denote by |↑⟩ and |↓⟩ the "up" and "down" states of the first spin. Given a state of the N spins of the system, it will be convenient to use its projections onto these states, which are states of N − 1 spins. In particular, we introduce the state obtained by removing the first spin from the initial configuration (Eq. (28)). For any state, we can define another state obtained from the first one upon a reflection along the z axis. In this transformation, each constituent "up" spin is transformed into a "down" spin and vice versa. For instance, the application of this transformation to the state just introduced gives its "mirror" state. In what follows we will exploit the properties of various states under this reflection. We also adopt a projection notation to indicate the result of the following procedure: consider a given state and take its projection on the subspace formed by the states of N − 1 spins that have a definite total angular momentum and a definite projection along the z axis; normalize the result to 1. As an example, one such symbol indicates the normalized projection of the state above on the subspace of the states of N − 1 spins having a given total angular momentum and z projection equal to zero.
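As a quick numerical sanity check of the eigenvalue structure just described (our illustration; the overall coupling is set to unity), one can diagonalize the equal-coupling Hamiltonian for four spins and compare with the J(J+1)-type prediction:

```python
import numpy as np
from functools import reduce

# Check: for H = sum_{i<j} J_i . J_j one has H = (J_tot^2 - sum_i J_i^2) / 2,
# so its eigenvalues are [J(J+1) - 3n/4] / 2, J = total spin. Here n = 4.

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def embed(op, site, n):
    """Single-site operator acting at `site` in an n-spin Hilbert space."""
    return reduce(np.kron, [op if k == site else np.eye(2) for k in range(n)])

n = 4
H = sum(embed(s, i, n) @ embed(s, j, n)
        for i in range(n) for j in range(i + 1, n) for s in (sx, sy, sz))

eigs = np.round(np.linalg.eigvalsh(H), 6)
print("distinct eigenvalues of H:", sorted(set(eigs)))
print("predicted [J(J+1) - 3n/4]/2 for J = 0, 1, 2:",
      [(J * (J + 1) - 3 * n / 4) / 2 for J in (0, 1, 2)])
```

For four spins this prints the three values −1.5, −0.5 and 1.5, matching J = 0, 1, 2, in agreement with the degeneracy pattern noted in the text.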
It is also convenient to define a further reference state of N − 1 spins, and to denote its mirror state obtained upon z reflection. The proof consists of four parts.

1. Expansion of the initial state. The initial state can be expanded on the basis of the eigenstates of the Hamiltonian, and then evolved in time. Since the energy depends only on the total spin J, the summation over the degeneracy index can be done, yielding Eq. (47). The expansion factors equal Clebsch-Gordan coefficients, Eq. (48). This can be understood considering that the initial state equals the sum of two angular momenta: the first momentum is given by the sum of all the "up" spins, while the second is the sum of all the "down" spins.

2. Decomposition into states of 1 and N − 1 spins. It is useful to decompose the states appearing in (47) as in Eq. (49). Similarly to Eq. (48), here we have Eq. (50), with the coefficients given in Eq. (31). The argument to explain (50) is analogous to that used to justify Eq. (48): since the state in question has definite numbers of spins "up" and "down", it is given by the sum of two angular momenta, obtained by summing all the spins "up" and all the spins "down" separately. Each of these states can be decomposed as in Eq. (51), where the states in the second line are related to the corresponding ones in the first line by the z-reflection operation. Notice that the state on the left-hand side has a definite parity under z reflection. Requiring that the right-hand side of Eq. (51) has the same parity yields Eq. (52). We also note the accompanying relations, the last of which represents the normalization condition.

3. Properties of the decomposition (51). We can calculate the projection of the state onto the "down" state of the first spin and demand that it vanish at t = 0. This is equivalent to requiring a null probability to find the first spin initially in the down state. Combining this condition with Eqs. (47), (51) and (52), we find a constraint on the decomposition coefficients, from which their values follow, and, by z reflection, those of the mirror coefficients.

4. Calculation of the survival probability. Let us calculate the probability that at the time t the first spin is found in its initial state. The combination of Eqs. (47), (51) and (53) gives the expression of the amplitude, which, taking into account the result (57), simplifies to the Fourier series announced in Eq. (30).
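As an independent cross-check of this analytic result, one can evolve the equal-coupling system numerically for a small N. The sketch below is ours (coupling and ħ set to 1): it builds H = Σ_{i<j} J_i·J_j for N = 8 spins, starts from the flavor-eigenstate configuration |↑↑↑↑↓↓↓↓⟩, and tracks the probability that the first spin remains up — the quantity whose Fourier series was derived above.

```python
import numpy as np
from functools import reduce

# Exact evolution of the equal-coupling spin model of Sect. 4
# (illustrative units: coupling g = 1, hbar = 1).

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def embed(op, site, n):
    """Single-site operator acting at `site` in an n-spin Hilbert space."""
    return reduce(np.kron, [op if k == site else np.eye(2) for k in range(n)])

def equal_coupling_hamiltonian(n, g=1.0):
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        for j in range(i + 1, n):
            for s in (sx, sy, sz):
                H += g * embed(s, i, n) @ embed(s, j, n)
    return H

n = 8                                   # 4 spins up (nu_e), 4 spins down (nu_mu)
w, v = np.linalg.eigh(equal_coupling_hamiltonian(n))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi0 = reduce(np.kron, [up] * (n // 2) + [dn] * (n // 2)).astype(complex)

proj_up_first = embed(np.array([[1, 0], [0, 0]]), 0, n)   # projector: first spin up
coef = v.conj().T @ psi0
for t in np.linspace(0.0, 10.0, 6):
    psi_t = v @ (np.exp(-1j * w * t) * coef)              # spectral time evolution
    P = np.real(psi_t.conj() @ proj_up_first @ psi_t)
    print(f"t = {t:5.2f}   P(first spin up) = {P:.4f}")
```

Running this for increasing N (memory permitting: the dimension is 2^N) shows the equilibration point receding on the coherent time scale, the N-dependence identified analytically above.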
A Brief History of Time: From the Big Bang to Black Holes by Stephen Hawking (1988)

The whole history of science has been the gradual realisation that events do not happen in an arbitrary manner, but that they reflect a certain underlying order. (p.122)

This book was a publishing phenomenon when it was published in 1988. Nobody thought a book of abstruse musings about obscure theories of cosmology would sell, but it became a worldwide bestseller, selling more than 10 million copies in 20 years. It was on the London Sunday Times bestseller list for more than five years and was translated into 35 languages by 2001. It was so successful that Hawking went on to write seven more science books on his own, and to co-author a further five.

Accessible

As soon as you start reading you realise why. From the start it is written in a clear, accessible way and you are soon won over to the frank, sensible, engaging tone of the author. He tells us he is going to explain things in the simplest way possible, with an absolute minimum of maths or equations (in fact, the book famously includes only one equation, E = mc²).

Candour

He repeatedly tells us that he's going to explain things in the simplest possible way, and the atmosphere is lightened when Hawking – by common consent one of the great brains of our time – confesses that he has difficulty with this or that aspect of his chosen subject. ('It is impossible to imagine a four-dimensional space. I personally find it hard enough to visualise three-dimensional space!') We are not alone in finding it difficult!

Historical easing

Also, like most of the cosmology books I've read, it takes a deeply historical view of the subject. He doesn't drop you into the present state of knowledge with its many accompanying debates, i.e. at the deep end. Instead he takes you back to the Greeks and slowly, slowly introduces us to their early ideas, showing why they thought what they thought, and how the ideas were slowly disproved or superseded.

A feel for scientific change

So, without the reader being consciously aware of the fact, Hawking accustoms us to the basis of scientific enquiry, the fundamental idea that knowledge changes, and from two causes: from new objective observations, often the result of new technologies (like the invention of the telescope which enabled Galileo to make his observations), but more often from new ideas and theories being worked out, published and debated.

Hawking's own contributions

There's also the non-trivial fact that, from the mid-1960s onwards, Hawking himself made a steadily growing contribution to some of the fields he's describing. At these points in the story, it ceases to be an objective history and turns into a first-person account of the problems as he saw them, and how he overcame them to develop new theories. It is quite exciting to look over his shoulder as he explains how and why he came up with the new ideas that made him famous. There are also hints that he might have trodden on a few people's toes in the process, for those who like their science gossipy.

Thus it is that Hawking starts nice and slow with the ancient Greeks, with Aristotle and Ptolemy and diagrams showing the sun and other planets orbiting round the earth. Then we are introduced to Copernicus, who first suggested the planets orbit round the sun, and so on.
With baby steps he takes you through the 19th-century idea of the heat death of the universe, on to the discovery of the structure of the atom at the turn of the century, and then gently introduces you to Einstein's special theory of relativity of 1905. (The special theory of relativity doesn't take account of gravity; the general theory of relativity of 1915 does.)

Chapter 1 Our Picture of the Universe (pp.1-13)

Aristotle thinks the earth is stationary. Calculates the size of the earth. Ptolemy. Copernicus. In 1609 Galileo starts observing Jupiter using the recently invented telescope. Kepler suggests the planets move in ellipses, not perfect circles. In 1687 Isaac Newton publishes Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), 'probably the most important single work ever published in the physical sciences', among many other things postulating a law of universal gravity. One implication of Newton's theory is that the universe is vastly bigger than previously conceived.

In 1823 Heinrich Olbers posited his paradox, which is: if the universe is infinite, the night sky ought to be as bright as daylight, because the light from infinitely many suns would reach us. Either it is not infinite or it has some kind of limit, possibly in time, i.e. a beginning. The possible beginning or end of the universe was discussed by Immanuel Kant in his obscure work A Critique of Pure Reason (1781). Various other figures debated variations on this theme until in 1929 Edwin Hubble made the landmark observation that, wherever you look, distant galaxies are moving away from us, i.e. the universe is expanding. Working backwards from this observation led physicists to speculate that the universe was once infinitely small and infinitely dense, in a state known as a singularity, which must have exploded in an event known as the big bang.

He explains what a scientific theory is:

A theory is just a model of the universe, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make… A theory is a good theory if it satisfies two requirements: it must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations.

A theory is always provisional. The more evidence supporting it, the stronger it gets. But it only takes one good negative observation to disprove a theory. Today scientists describe the universe in terms of two basic partial theories – the general theory of relativity and quantum mechanics. They are the great intellectual achievements of the first half of this century. But they are inconsistent with each other. One of the major endeavours of modern physics is to try and unite them in a quantum theory of gravity.

Chapter 2 Space and Time (pp.15-34)

Aristotle thought everything in the universe was naturally at rest. Newton disproved this with his first law – whenever a body is not acted on by any force it will keep on moving in a straight line at the same speed. Newton's second law states that, when a body is acted on by a force, it will accelerate or change its speed at a rate that is proportional to the force. Newton's law of gravity states that every particle attracts every other particle in the universe with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres.
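In symbols (standard textbook form, not spelled out in the book, which famously sticks to a single equation):

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

where G is the gravitational constant, m₁ and m₂ the two masses, and r the distance between their centres.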
But like Aristotle, Newton believed all the events he described took place in a kind of big static arena named absolute space, and that time was an absolute constant. The speed of light was also realised to be a constant. In 1676 the Danish astronomer Ole Christensen Rømer estimated the speed of light to be 140,000 miles per second. We now know it is 186,000 miles per second. In the 1860s James Clerk Maxwell unified the disparate theories which had been applied to magnetism and electricity.

In 1905 Einstein published his theory of relativity. It is derived not from observation but from Einstein working through in his head the consequences and shortcomings of the existing theories. Newton had posited a privileged observer, someone outside the universe who was watching it as if a play on a stage. From this privileged position a number of elements appeared constant, such as time. Einstein imagines a universe in which there is no privileged outside point of view. We are all inside the universe and all moving. The theory threw up a number of consequences. One is that energy is equal to mass times the speed of light squared, or E = mc². Another is that nothing may travel faster than the speed of light. Another is that, as an object approaches the speed of light, its mass increases.

One of its most disruptive ideas is that time is relative. Observers travelling at different speeds will disagree about the distance a beam of light has travelled between two events. Since Einstein has made it axiomatic that the speed of light is fixed, while the distance travelled differs between observers, time itself must appear different to different observers. Time is something that can change, like the other three dimensions. Thus time can be added to the existing three dimensions to create space-time.

The special theory of relativity was successful in explaining how the speed of light appears the same to all observers, and in describing what happens to things when they move close to the speed of light. But it was inconsistent with Newton's theory of gravity, which says objects attract each other with a force related to the distance between them: if you move one of the objects, the force exerted on the other object changes immediately. This cannot be if nothing can travel faster than the speed of light, as the special theory of relativity postulates. Einstein spent the ten or so years from 1905 onwards attempting to solve this difficulty. Finally, in 1915, he published the general theory of relativity.

The revolutionary basis of this theory is that space is not flat, not a consistent continuum or Newtonian stage within which events happen and forces interact in a sensible way. Space-time is curved or warped by the distribution of mass or energy within it, and gravity is a function of this curvature. Thus the earth is not orbiting around the sun in a circle, it is following a straight line in warped space.

The mass of the sun curves space-time in such a way that although the earth follows a straight line in four-dimensional space-time, it appears to us to move along a circular orbit in three-dimensional space. (p.30)

In fact, at a planetary level Einstein's maths is only slightly different from Newton's, but it predicts a slight difference in the orbit of Mercury which observations have gone on to confirm. Also, the general theory predicts that light will bend, following a straight line but through space that is warped or curved by gravity.
Thus the light from a distant star on the far side of the sun will bend as it passes close to the sun, due to the curvature in space-time caused by the sun's mass. And it was an expedition to West Africa in 1919 to observe an eclipse which showed that light from distant stars did in fact bend slightly as it passed the sun, helping to confirm Einstein's theory.

Newton's laws of motion put an end to the idea of absolute position in space. The theory of relativity gets rid of absolute time. Hence the thought experiment popularised by a thousand science fiction books: astronauts who set off in a spaceship which gets anywhere near the speed of light will experience time more slowly than the people they leave behind on earth (a small worked example follows at the end of this section).

In the theory of relativity there is no unique absolute time, but instead each individual has his own personal measure of time that depends on where he is and how he is moving. (p.33)

Obviously, since most of us are on planet earth, moving at more or less the same speed, everyone's personal 'times' coincide. Anyway, the key central implication of Einstein's general theory of relativity is this:

Before 1915, space and time were thought of as a fixed arena in which events took place, but which was not affected by what happened in it. This was true even of the special theory of relativity. Bodies moved, forces attracted and repelled, but time and space simply continued, unaffected. It was natural to think that space and time went on forever. The situation, however, is quite different in the general theory of relativity. Space and time are now dynamic quantities: when a body moves, or a force acts, it affects the curvature of space and time – and in turn the structure of space-time affects the way in which bodies move and forces act. Space and time not only affect but also are affected by everything that happens in the universe. (p.33)

This view of the universe as dynamic and interacting, by demolishing the old eternal static view, opened the door to a host of new ways of conceiving how the universe might have begun and might end.

Chapter 3 The Expanding Universe (pp.35-51)

Our modern picture of the universe dates to 1924, when the American astronomer Edwin Hubble demonstrated that ours is not the only galaxy. We now know the universe is home to some hundred million galaxies, each containing some hundred thousand million stars. We live in a galaxy that is about one hundred thousand light-years across and is slowly rotating. Hubble set about cataloguing the movement of other galaxies and in 1929 published his results, which showed that they are all moving away from us and that, the further away a galaxy is, the faster it is moving.

The discovery that the universe is expanding was one of the great intellectual revolutions of the twentieth century. (p.39)

From Newton onwards there was a universal assumption that the universe was infinite and static. Even Einstein invented a force he called 'the cosmological constant' in order to counter the attractive power of gravity and preserve the model of a static universe. It was left to the Russian physicist Alexander Friedmann to seriously calculate what the universe would look like if it was expanding.

In 1965 two technicians, Arno Penzias and Robert Wilson, working at Bell Telephone Laboratories, discovered a continuous hum of background radiation coming from all parts of the sky.
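Here is the promised worked example for the astronaut thought experiment (my illustration, not the book's): the Lorentz factor by which a moving clock runs slow.

```python
import math

# Illustrative numbers, not from the book: special-relativistic time dilation.
# A clock moving at speed v runs slow by the factor
#   gamma = 1 / sqrt(1 - v^2 / c^2).

c = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99):
    g = lorentz_gamma(frac * c)
    print(f"at v = {frac:4.2f} c, one year on the ship = {g:5.2f} years on earth")
```

At 99% of the speed of light the factor is about 7: the astronauts age one year while seven pass at home, which is the science-fiction staple the review mentions.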
This echoed the theoretical work being done by two physicists, Bob Dicke and Jim Peebles, who were working on a suggestion made by George Gamow that the early universe would have been hot and dense. They posited that we should still be able to see the light from this earliest phase, but that, because of the redshifting, it would appear as radiation. Penzias and Wilson were awarded the Nobel Prize in 1978.

How can the universe be expanding? Imagine blowing up a balloon with dots (or little galaxies) drawn on it: they all move apart from each other and, the further apart they are, the larger the distance becomes; but there is no centre to the balloon. Similarly the universe is expanding but not into anything. There is no outside. If you set out to travel to the edge you would find no edge, but instead find yourself flying round the periphery and ending up back where you began.

There are three possible states of a dynamic universe. Either 1. it will expand against the contracting force of gravity until the initial outward propulsive force is exhausted and gravity begins to win; it will stop expanding, and start to contract. Or 2. it is expanding so fast that the attractive, contracting force of gravity never wins, so the universe expands forever and matter never has time to clump together into stars and planets. Or 3. it is expanding at just the right speed to escape collapsing back in on itself, but not so fast as to make the creation of matter impossible. This is called the critical divide. Physicists now believe the universe is expanding at just around the value of the critical divide, though whether it is just under or just above (i.e. whether the universe will eventually cease expanding, or not) is not known.

Dark matter

We can calculate the mass of all the stars and galaxies in the universe, and it is a mystery that our total is only about a hundredth of the mass that must exist to explain the gravitational behaviour of stars and galaxies. In other words, there must be a lot of 'dark matter' which we cannot currently detect in order for the universe to be shaped the way it is.

So we don't know what the likely future of the universe is (endless expansion or eventual contraction), but all the Friedmann models do predict that the universe began in an infinitely dense, infinitely compact, infinitely hot state – the singularity.

Because mathematics cannot really handle infinite numbers, this means that the general theory of relativity… predicts that there is a point in the universe where the theory itself breaks down… In fact, all our theories of science are formulated on the assumption that space-time is smooth and nearly flat, so they break down at the big bang singularity, where the curvature of space-time is infinite. (p.46)

Opposition to the theory came from Hermann Bondi, Thomas Gold and Fred Hoyle, who formulated the steady-state theory of the universe, i.e. it has always been and always will be. All that is needed to explain the slow expansion is the appearance of new particles to keep it filled up, but the rate is very low (about one new particle per cubic kilometre per year). They published it in 1948 and worked through all its implications for the next few decades, but it was killed off as a theory by the 1965 observations of the cosmic background radiation.

He then explains the process whereby he elected to do a PhD expanding Roger Penrose's work on how a dying star would collapse under its own weight to a very small size.
The collaboration resulted in a joint 1970 paper which proved that there must have been a big bang, provided only that the theory of general relativity is correct and the universe contains as much matter as we observe. If the universe really did start out as something unimaginably small then, from the 1970s onwards, physicists turned their investigations to what happens to matter at microscopic levels.

Chapter 4 The Uncertainty Principle (pp.53-61)

In 1900 the German scientist Max Planck suggested that light, x-rays and other waves cannot be emitted at an arbitrary rate, but only in packets he called quanta. He theorised that the higher the frequency of the wave, the more energy each quantum carries, which would tend to restrict the emission of high-frequency waves. In 1926 Werner Heisenberg expanded on these insights to produce his Uncertainty Principle. In order to measure a particle's position and velocity you need to shine light on it, and one has to use at least one quantum of energy. However, exposing the particle to this quantum will disturb the velocity of the particle.

In other words, the more accurately you try to measure the position of the particle, the less accurately you can measure its speed, and vice versa. (p.55)

Heisenberg showed that the uncertainty in the position of the particle times the uncertainty in its velocity times the mass of the particle can never be smaller than a certain quantity, which is known as Planck's constant. For the rest of the 1920s Heisenberg, Erwin Schrödinger and Paul Dirac reformulated mechanics into a new theory called quantum mechanics. In this theory particles no longer have separate, well-defined positions and velocities; instead they have a general quantum state which is a combination of position and velocity.

Quantum mechanics introduces an unavoidable element of unpredictability or randomness into science. (p.56)

Also, particles can no longer be relied on to be particles. As a result of Planck and Heisenberg's insights, particles have to be thought of as sometimes behaving like waves, sometimes like particles. In 1913 Niels Bohr had suggested that electrons circle round a nucleus in certain fixed orbits, and that it takes energy to dislodge them from these optimum orbits. Quantum theory helped explain Bohr's theory by conceptualising the circling electrons not as particles but as waves. If electrons are waves then, as they circle the nucleus, their waves will cancel themselves out unless a whole number of wavelengths fits exactly around the orbit. It is this condition which defines the orbits electrons can take.

Chapter 5 Elementary Particles and Forces of Nature (pp.63-79)

A chapter devoted to the story of how we've come to understand the world of sub-atomic particles. Starting (as usual) with Aristotle and then fast-forwarding through Dalton, Einstein's paper on Brownian motion, J.J. Thomson's discovery of electrons and, in 1911, Ernest Rutherford's demonstration that atoms are made up of a tiny positively charged nucleus around which a number of tiny negatively charged particles, electrons, orbit. Rutherford thought the nuclei contained 'protons', which have a positive charge and balance out the negative charge of the electrons. In 1932 James Chadwick discovered the nucleus contains neutrons, with much the same mass as the proton but no charge. In 1964 quarks were proposed by Murray Gell-Mann. In fact scientists went on to discover six types: up, down, strange, charmed, bottom and top quarks.
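(An aside of mine, not the book's, to make the next point concrete: the up quark carries electric charge +2/3 and the down quark −1/3, so the three-quark combinations described next come out with exactly the right charges:

$q_{proton} = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad q_{neutron} = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0$

for the proton, made of two ups and a down, and the neutron, made of one up and two downs.)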
A proton or neutron is made up of three quarks. He explains the quality of spin. Some particles have to be spun round twice to return to their original appearance: they have spin 1/2. All the matter we can see in the universe has spin 1/2. Particles of spin 0, 1 and 2 give rise to the forces between the particles.

Pauli's exclusion principle: two similar particles cannot exist in the same state; they cannot have both the same position and the same velocity. The exclusion principle is vital since it explains why the universe isn't a big soup of primeval particles: the particles must be distinct and separate. In 1928 Paul Dirac explained why the electron must rotate twice to return to its original position. He also predicted the existence of the positron to balance the electron. In 1932 the positron was discovered, and Dirac was later awarded a Nobel Prize.

Force-carrying particles can be divided into four categories according to the strength of the force they carry and the particles with which they interact. 1. Gravitational force, the weakest of the four forces by a long way. 2. The electromagnetic force, which interacts with electrically charged particles like electrons and quarks. 3. The weak nuclear force, responsible for radioactivity. In findings published in 1967 Abdus Salam and Steven Weinberg suggested that in addition to the photon there are three other spin-1 particles known collectively as massive vector bosons. Initially disbelieved, experiments proved them right and they collected the Nobel Prize in 1979. In 1983 the team at CERN proved the existence of the three particles, and the leaders of this team also won the Nobel Prize. 4. The strong nuclear force, which holds quarks together in the proton and neutron, and holds the protons and neutrons together in the nucleus. This force is believed to be carried by another spin-1 particle, the gluon. Quarks and gluons have a property named 'confinement': you cannot observe a quark of a single colour on its own; quarks must be bound together in combinations whose colours cancel each other out.

The idea behind the search for a Grand Unified Theory is that, at high enough temperature, all the particles would behave in the same way, i.e. the laws governing the electromagnetic, weak and strong forces would merge into one law (gravity is left out of such theories). Most of the matter on earth is made up of protons and neutrons, which are in turn made of quarks. Why is there this preponderance of quarks and not an equal number of anti-quarks? Hawking introduces us to the notion that all the laws of physics obey three separate symmetries known as C, P and T. In 1956 two American physicists suggested that the weak force does not obey the symmetry P. Hawking then goes on to explain more about the obedience or lack of obedience to the rules of symmetry of particles at very high temperatures, to explain why quarks and matter would outbalance anti-quarks and anti-matter at the big bang in a way which, frankly, I didn't understand.

Chapter 6 Black Holes (pp.81-97)

In a sense, all the preceding has been just preparation, just a primer to help us understand the topic which Hawking spent the 1970s studying and which made his name – black holes. The term black hole was coined by John Wheeler in 1969. Hawking explains the development of ideas about what happens when a star dies. When a star is burning, the radiation of energy in the forms of heat and light counteracts the gravity of its mass. When it runs out of fuel, gravity takes over and the star collapses in on itself.
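(A back-of-envelope aside of mine, not the book's: general relativity gives a simple formula for how far a mass must collapse before light can no longer escape it – the Schwarzschild radius, r = 2GM/c². A few lines of Python show why black holes are so extreme:

# Schwarzschild radius: the size a mass must collapse to before light is trapped.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a given mass in kilograms."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))  # the sun: ~2950 m, i.e. about 3 km
print(schwarzschild_radius(5.972e24))  # the earth: ~0.009 m, i.e. about 9 mm

In other words, the whole sun would have to be squeezed into a ball about three kilometres across before it trapped its own light.)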
The young Indian physicist Subrahmanyan Chandrasekhar calculated that a cold star of more than about one and a half times the mass of our sun would not be able to support itself against its own gravity (the Chandrasekhar limit); a star below that limit can stop contracting and settle down as a 'white dwarf', with a radius of a few thousand miles and a density of hundreds of tons per cubic inch. The Russian Lev Davidovich Landau speculated that the same sized star might end up in a different state. Chandrasekhar had used Pauli's exclusion principle as applied to electrons, i.e. calculated the smallest, densest state the mass could reach assuming no electron can occupy the place of any other electron. Landau's calculation rested instead on the exclusion-principle repulsion between neutrons and protons. Hence his model is known as the 'neutron star', which would have a radius of only ten miles or so and a density of hundreds of millions of tons per cubic inch.

(In an interesting aside Hawking tells us that physics was railroaded by the vast Manhattan Project to build an atomic bomb, and then to build a hydrogen bomb, throughout the 1940s and 50s. This tended to sideline large-scale physics about the universe. It was only the development of a) modern telescopes and b) computer power that revived interest in astronomy.)

A black hole is what you get when the gravity of a collapsing star becomes so high that it prevents light from escaping its gravitational field. Hawking and Penrose showed that at the centre of a black hole must be a singularity of infinite density and space-time curvature. In 1967 the study of black holes was revolutionised by Werner Israel. He showed that, according to general relativity, all non-rotating black holes must be very simple and perfectly symmetrical. Hawking then explains refinements of this result put forward by Roger Penrose, Roy Kerr and Brandon Carter, who proved that a rotating hole would have an axis of symmetry. Hawking himself confirmed this idea, and in 1973 David Robinson proved that a black hole had to be described by the 'Kerr solution'. In other words, no matter how they start out, all black holes end up looking the same, a belief summed up in the pithy phrase, 'A black hole has no hair'. What is striking about all this is that it was pure speculation, derived entirely from mathematical models without a shred of evidence from astronomy.

Black holes are one of only a fairly small number of cases in the history of science in which a theory was developed in great detail as a mathematical model before there was any evidence from observations that it was correct. (p.92)

Hawking then goes on to list the best evidence we have for black holes, which is surprisingly thin. Since they are by nature invisible, black holes can only be deduced by their supposed effect on nearby stars or systems. Given that black holes were at the centre of Hawking's career, and are the focus of these two chapters, it is striking that there is, even now, very little direct empirical evidence for their existence. (Eerily, as I finished reading A Brief History of Time, the announcement was made on 10 April 2019 that the first ever image had been generated of a black hole – see below.)

Theory predicts that other stars which stray close to a black hole would have clouds of gas attracted towards it. As this matter falls into the black hole it will a) be stripped down to basic sub-atomic particles b) make the hole spin. Spinning would make the hole acquire a magnetic field.
The magnetic field would shoot jets of particles out into space along the axis of rotation of the hole. These jets should be visible to our telescopes.

[Image: first ever picture of a black hole, captured by the Event Horizon Telescope (EHT). The hole is 40 billion km across, and 500 million trillion km away.]

Chapter 7 Black Holes Ain't So Black (pp.99-113)

Black holes are not really black after all. They glow like a hot body, and the smaller they are, the hotter they glow. Again, Hawking shares with us the evolution of his thinking on this subject, for example how he was motivated in writing a 1971 paper about black holes and entropy at least partly by irritation with another researcher who he felt had misinterpreted his earlier results. Anyway, it all resulted in his 1974 paper which showed that a black hole ought to emit particles and radiation as if it were a hot body with a temperature that depends only on the black hole's mass.

The reasoning goes thus: quantum mechanics tells us that all of space is fizzing with pairs of particles and anti-particles popping into existence, cancelling each other out and disappearing. At the border of the event horizon, particles and anti-particles will be popping into existence as everywhere else. But one member of each pair can be sucked inside the event horizon, so that it cannot annihilate its partner, leaving the partner to ping off into space. Thus, black holes should emit a steady stream of radiation! If black holes really are absorbing these negative-energy particles, then the influx of negative energy reduces their mass, as per Einstein's most famous equation, E = mc², which shows that the lower the energy, the lower the mass. In other words, if Hawking is correct about black holes emitting radiation, then black holes must be shrinking.

Gamma ray evidence suggests that there can be at most about 300 black holes in every cubic light-year of the universe. Hawking then goes on to estimate the odds of detecting a black hole a) in steady existence b) reaching its final state and blowing up. Alternatively we could look for flashes of light across the sky, since on entering the earth's atmosphere gamma rays break up into pairs of electrons and positrons. No clear sightings have been made so far.

(Threaded throughout the chapter has been the notion that black holes might come in two types: ones which result from the collapse of stars, as described above, and others which have been around since the start of the universe, a function of the irregularities of the big bang.)

Summary: Hawking ends this chapter by claiming that his 'discovery' that radiation can be emitted from black holes was 'the first example of a prediction that depended in an essential way on both the great theories of this century, general relativity and quantum mechanics'. I.e. it is not only an interesting 'discovery' in its own right, but a pioneering example of synthesising the two theories.

Chapter 8 The Origin and Fate of the Universe (pp.115-141)

This is the longest chapter in the book and I found it the hardest to follow. I think this is because it is where he makes the big pitch for His Theory, for what's come to be known as the Hartle-Hawking state. Let Wikipedia explain:

Hartle and Hawking suggest that if we could travel backwards in time towards the beginning of the Universe, we would note that quite near what might otherwise have been the beginning, time gives way to space such that at first there is only space and no time.
Beginnings are entities that have to do with time; because time did not exist before the Big Bang, the concept of a beginning of the Universe is meaningless. According to the Hartle-Hawking proposal, the Universe has no origin as we would understand it: the Universe was a singularity in both space and time, pre-Big Bang. Thus, the Hartle–Hawking state Universe has no beginning, but it is not the steady state Universe of Hoyle; it simply has no initial boundaries in time or space. (Hartle-Hawking state Wikipedia article)

To get to this point Hawking begins by recapping the traditional view of the 'hot big bang', i.e. the almost instantaneous emergence of matter from a state of infinite mass, energy, density and temperature. This is the view first put forward by Gamow and Alpher in 1948, which predicted there would still be very low-level background radiation left over from the bang – which was then proved with the discovery of the cosmic background radiation in 1965.

Hawking gives a picture of the complete cycle of the creation of the universe through the first generation of stars, which go supernova, blowing out into space the heavier elements, which then go into second-generation stars or clouds of gas and solidify into things like planet earth. In a casual aside, he gives his version of the origin of life on earth:

The earth was initially very hot and without an atmosphere. In the course of time it cooled and acquired an atmosphere from the emission of gases from the rocks. This early atmosphere was not one in which we could have survived. It contained no oxygen, but a lot of other gases that are poisonous to us, such as hydrogen sulfide. There are, however, other primitive forms of life that can flourish under such conditions. It is thought that they developed in the oceans, possibly as a result of chance combinations of atoms into large structures, called macromolecules, which were capable of assembling other atoms in the ocean into similar structures. They would thus have reproduced themselves and multiplied. In some cases there would have been errors in the reproduction. Mostly these errors would have been such that the new macromolecule could not reproduce itself and eventually would have been destroyed. However, a few of the errors would have produced new macromolecules that were even better at reproducing themselves. They would have therefore had an advantage and would have tended to replace the original macromolecules. In this way a process of evolution was started that led to the development of more and more complicated, self-reproducing organisms. The first primitive forms of life consumed various materials, including hydrogen sulfide, and released oxygen. This gradually changed the atmosphere to the composition that it has today and allowed the development of higher forms of life such as fish, reptiles, mammals, and ultimately the human race. (p.121)

(It's ironic that he discusses the issue so matter-of-factly, demonstrating that, for him at least, the matter is fairly cut and dried and not worth lingering over. Because, of course, for scientists who've devoted their lives to the origins-of-life question it is far from settled. It's a good example of the way that every specialist thinks that their specialism is the most important subject in the world, the subject that will finally answer the Great Questions of Life, whereas a) most people have never heard about the issues b) wouldn't understand them and c) don't care.)
Hawking goes on to describe chaotic boundary conditions and to describe the strong and the weak anthropic principles. He then explains Alan Guth's theory of inflation, i.e. that the universe, in the first milliseconds after the big bang, underwent a process of enormous hyper-growth before calming down again to normal exponential expansion. Hawking describes it rather differently from Barrow and Davies. He emphasises that, to start with, in a state of hyper-temperature and immense density, the four forces we know about and the spacetime dimensions were all fused into one. They would be in 'symmetry'. Only as the early universe cooled would it have undergone a 'phase transition' and the symmetry between forces been broken. If the temperature fell below the phase-transition temperature without the symmetry being broken, then the universe would have a surplus of energy, and it is this which would have caused the super-propulsion of the inflationary stage.

The inflation theory:

• would allow for light to pass from one end of the (tiny) universe to the other, and explains why all regions of the universe appear to have the same properties
• would explain why the rate of expansion of the universe is close to the critical rate required to make it expand for billions of years (and us to evolve)
• would explain why there is so much matter in the universe

Hawking then gets involved in the narrative, explaining how he and others pointed out flaws in Guth's inflationary model, namely that the phase transition at the end of the inflation was supposed to end in 'bubbles' which expanded to join up. But Hawking and others pointed out that the universe was expanding so fast that the bubbles could never join up. In 1982 the Russian Andrei Linde proposed that the bubble problem would be solved if a) the symmetry broke slowly and b) the bubbles were so big that our region of the universe is all contained within a single bubble. Hawking disagreed, saying Linde's bubbles would each have to be bigger than the universe for the maths to work out, and counter-proposing that the symmetry broke everywhere at the same time, resulting in the uniform universe we see today. Nonetheless Linde's model became known as the 'new inflationary model', although Hawking considers it invalid.

(In these pages we get a strong whiff of cordite. Hawking is describing controversies and debates he has been closely involved in and therefore takes a strongly partisan view, bending over backwards to be fair to colleagues, but nonetheless sticking to his guns. In this chapter you get a strong feeling for what controversy and debate within this community must feel like.)

Hawking prefers the 'chaotic inflationary model' put forward by Linde in 1983, in which there is no phase transition or supercooling, but which relies on quantum fluctuations. At this point he introduces four ideas which are each challenging and which, taken together, mark the most difficult and confusing part of the book. First he says that, since Einstein's laws of relativity break down at the moment of the singularity, we can only hope to understand the earliest moments of the universe in terms of quantum mechanics. Second, he says he's going to use a particular formulation of quantum mechanics, namely Richard Feynman's idea of 'a sum over histories'. I think this means that, in quantum mechanics, we can never know precisely which route a particle takes; the best we can do is work out all the possible routes and assign them probabilities, which can then be handled mathematically.
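(To make that a little more concrete – my schematic notation, not the book's: Feynman's prescription is that the quantum amplitude for a particle to get from A to B is a sum over every conceivable path between them, each path contributing a phase determined by its action S,

$\text{Amplitude}(A \to B) \;=\; \sum_{\text{all paths from } A \text{ to } B} e^{\,iS[\text{path}]/\hbar}$

and the probability is the square of the total amplitude. The phases of wildly different paths mostly cancel each other out, while paths near the classical route reinforce one another – which is why big everyday objects appear to follow a single well-defined trajectory.)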
Third, he immediately points out that working with Feynman's sum-over-histories approach requires the use of 'imaginary' time, which he then goes on to explain.

To avoid the technical difficulties with Feynman's sum over histories, one must use imaginary time. (p.134)

And then he points out that, in order to use imaginary time, we must use Euclidean space-time instead of 'real' space-time. All this happens on page 134 and was too much for me to understand. On page 135 he then adds in Einstein's idea that the gravitational field is represented by curved space-time.

It is now that he pulls all these ideas together to assert that, whereas in the classical theory of gravity, which is based on real space-time, there are only two ways the universe can behave – either it has existed infinitely, or it had a beginning in a singularity at a finite point in time – in the quantum theory of gravity, which uses Euclidean space-time, in which the time direction is on the same footing as directions in space, it is possible:

for space-time to be finite in extent and yet to have no singularities that formed a boundary or edge.

In Hawking's theory the universe would be finite in duration but not have a boundary in time, because time would merge with the other three dimensions, all of which cease to exist during and just after a singularity. Working backwards in time, the universe shrinks, but it doesn't shrink, as a cone does, to a single distinct point – instead it has a smooth round bottom with no distinct beginning.

[Diagram: the Hartle and Hawking No-Boundary Proposal]

Finally Hawking points out that this model of a no-boundary universe derived from a Feynman interpretation of quantum gravity does not give rise to all possible universes, but only to a specific family of universes. One aspect of these histories of the universe in imaginary time is that none of them include singularities – which would seem to render redundant all the work Hawking had done on black holes in 'real time'. He gets round this by saying that both models can be valid, but in order to demonstrate different things.

It is simply a matter of which is the more useful description. (p.139)

He winds up the discussion by stating that further calculations based on this model explain the two or three key facts about the universe which all theories must explain, i.e. the fact that it is clumped into lumps of matter and not an even soup, the fact that it is expanding, and the fact that the background radiation is minutely uneven in some places, suggesting very early irregularities. Tick, tick, tick – the no-boundary proposal is congruent with all of them.

It is a little mind-boggling, as you reach the end of this long and difficult chapter, to reflect that absolutely all of it is pure speculation without a shred of evidence to support it. It is just another elegant way of dealing with the problems thrown up by existing observations and by trying to integrate quantum mechanics with Einsteinian relativity. But whether it is 'true' or not is not only unprovable but also not really the point.

Chapter 9 The Arrow of Time (pp.143-153)

If Einstein's theory of relativity is correct and light always appears to have the same velocity to all observers, no matter what position they're in or how fast they're moving, THEN TIME MUST BE FLEXIBLE. Time is not a fixed constant. Every observer carries their own time with them.
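(Another aside of mine, not the book's: special relativity makes this quantitative. An interval Δτ ticked off on a fast-moving astronaut's clock is measured by the stay-at-home observer as the longer interval

$\Delta t \;=\; \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}},$

so at about 87% of the speed of light the factor is 2, and the astronaut ages half as fast as the people left behind on earth.)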
Hawking points out that there are three arrows of time:

• the thermodynamic arrow of time, which obeys the Second Law of Thermodynamics, namely that entropy, or disorder, increases – there are always many more disordered states than ordered ones
• the psychological arrow of time, which we all perceive
• the cosmological arrow of time, namely the universe is expanding and not contracting

Briskly, he tells us that the psychological arrow of time is based on the thermodynamic one: entropy increases, our lives experience that, and our minds record it. For example, human beings consume food – which is a highly ordered form of energy – and convert it into heat – which is a highly disordered form.

Hawking tells us that he originally thought that, if the universe reached a furthest extent and started to contract, disorder (entropy) would decrease, and everything in the universe would happen backwards. Until Don Page and Raymond Laflamme, in their different ways, proved otherwise. Now he believes that the contraction would not occur until the universe had been almost completely thinned out and all the stars had died, i.e. the universe had become an even soup of basic particles. THEN it would start to contract. And so his current thinking is that there would be little or no thermodynamic arrow of time (all thermodynamic processes having come to an end), and all of this would be happening in a universe in which human beings could not exist. We will never live to see the contraction phase of the universe. If there is a contraction phase.

Chapter 10: The Unification of Physics (pp.155-169)

The general theory of relativity and quantum mechanics both work well for their respective scales (stars and galaxies, sub-atomic particles) but cannot be made to mesh, despite fifty or more years of valiant attempts. Many of the attempts produce infinities in their results – so many infinities that a strategy called 'renormalisation' has been developed to get rid of them, although, Hawking concedes, it is 'rather dubious mathematically'.

Grand Unified Theories is the term applied to attempts to devise a theory (i.e. a set of mathematical formulae) which will take account of the big forces we know about – electromagnetism, the strong nuclear force and the weak nuclear force – with the ultimate hope of bringing gravity into the fold too. In the mid-1970s some scientists came up with the idea of 'supergravity', which postulated a 'superparticle', with the other sub-atomic particles being variations on the superparticle but with different spins. According to Hawking the calculations necessary to assess this theory would take so long nobody has ever done it.

So he moves on to string theory, i.e. the idea that the universe isn't made up of particles but of open or closed 'strings', which can join together in different ways to form different particles. However, the problem with string theories is that, because of the mathematical way they are expressed, they require more than four dimensions. A lot more. Hawking mentions anywhere from ten up to 26 dimensions. Where are all these dimensions? Well, string theory advocates say they exist but are very, very small, effectively wrapped up into sub-atomic balls, so that you or I never notice them.

Rather simplistically, Hawking lists the possibilities about a complete unified theory. Either: 1. there really is a grand unified theory, which we will someday discover 2. there is no ultimate theory but only an infinite sequence of possibilities which will describe the universe with greater and greater, but finite, accuracy 3.
there is no theory of the universe at all, and events will always seem to us to occur in a random way.

This leads him to repeat the highfalutin' rhetoric which all physicists drop into at these moments, about the destiny of mankind etc. Discovery of One Grand Unified Theory:

would bring to an end a long and glorious chapter in the history of humanity's intellectual struggle to understand the universe. But it would also revolutionise the ordinary person's understanding of the laws that govern the universe. (p.167)

I profoundly disagree with this view. I think it is boilerplate, which is a phrase defined as 'used in the media to refer to hackneyed or unoriginal writing'. Because this is not just the kind of phrasing physicists use when referring to the search for GUTs, it's the same language biologists use when referring to the quest to understand how life derived from inorganic chemicals, it's the same language the defenders of the Large Hadron Collider use to justify spending billions of euros on the search for ever-smaller particles, it's the language used by the guys who want funding for the Search for Extra-Terrestrial Intelligence, it's the kind of language used by the scientists bidding for funding for the Human Genome Project. Each of these, their defenders claim, is the ultimate, most important science project, quest and odyssey ever, and when they find the solution it will once and for all answer the Great Questions which have been tormenting mankind for millennia. Etc. Which is very like all the world's religions claiming that their God is the only God.

So a) there is a pretty obvious clash between all these scientific specialities which each claim to be on the brink of revealing the Great Secret. But b) what reading this book and John Barrow's Book of Universes convinces me of is that i) we are very far indeed from coming even close to a unified theory of the universe and, more importantly, ii) if one is ever discovered, it won't matter.

Imagine for a moment that a new iteration of string theory does manage to harmonise the equations of general relativity and quantum mechanics. How many people in the world are really going to be able to understand that? How many people now, currently, have a really complete grasp of Einsteinian relativity and Heisenbergian quantum uncertainty in their strictest, most mathematical forms? 10,000? 100,000 earthlings? If and when the final announcement is made, who would notice, who would care, and why would they care? If the final conjunction is made by adapting string theory to 24 dimensions and renormalising all the infinities in order to achieve a multi-dimensional vision of space-time which incorporates both the curvature of gravity and the unpredictable behaviour of sub-atomic particles – would this really revolutionise the ordinary person's understanding of the laws that govern the universe?

Chapter 11 Conclusion (pp.171-175)

Recaps the book and asserts that his and James Hartle's no-boundary model for the origin of the universe is the first to combine classical relativity with Heisenberg uncertainty. Ends with another rhetorical flourish of trumpets which I profoundly disagree with for the reasons given above. Maybe I'm wrong, but I think this is a hopelessly naive view of human nature and culture. Einstein's general theory has been around for 104 years, quantum mechanics for 90 years. Even highly educated people understand neither of them, and what Hawking calls 'just ordinary people' certainly don't – and it doesn't matter.
Of course the subject matter is difficult to understand, but Hawking makes a very good fist of putting all the ideas into simple words and phrases, avoiding all formulae and equations, and the diagrams help a lot. My understanding is that A Brief History of Time was the first popular science book to put all these ideas before the public in a reasonably accessible way, and so opened the floodgates for countless other science writers, although hardly any of the ideas in it felt new to me since I happen to have just reread the physics books by Barrow and Davies, which cover much the same ground and are more up to date.

But my biggest overall impression is how provisional so much of it seems. You struggle through the two challenging chapters about black holes – Hawking's speciality – and then are casually told that all this debating and arguing over different theories and model-making had gone on before any black holes were ever observed by astronomers. In fact, even when Hawking died, in 2018, no black holes had been conclusively identified. It's a big shame he didn't live to see this famous photograph being published and confirmation of at least the existence of the entity he devoted so much time to theorising about.

The Book of Universes by John D. Barrow (2011)

This book is twice as long and half as good as Barrow's earlier primer, The Origin of the Universe. In that short book Barrow focused on the key ideas of modern cosmology – introducing them to us in ascending order of complexity, and as simply as possible. He managed to make mind-boggling ideas and demanding physics very accessible. This book – although it presumably has the merit of being more up to date (published in 2011 as against 1994) – is an expansion of the earlier one, an attempt to be much more comprehensive, but which, in the process, tends to make the whole subject more confusing.

The basic premise of both books is that, since Einstein's theory of relativity was developed in the 1910s, cosmologists and astronomers and astrophysicists have:

1. shown that the mathematical formulae in which Einstein's theories are described need not be restricted to the universe as it has traditionally been conceived; in fact they can apply just as effectively to a wide variety of theoretical universes – and the professionals have, for the past hundred years, developed a bewildering array of possible universes to test Einstein's insights to the limit
2. made a series of discoveries about our actual universe, the most important of which is that a) it is expanding b) it probably originated in a big bang about 14 billion years ago, and c) in the first few milliseconds after the bang it probably underwent a period of super-accelerated expansion known as the 'inflation' which may, or may not, have introduced all kinds of irregularities into 'our' universe, and may even have created a multitude of other universes, of which ours is just one

If you combine a hundred years of theorising with a hundred years of observations, you come up with thousands of theories and models. In The Origin of the Universe Barrow stuck to the core story, explaining just as much of each theory as is necessary to help the reader – if not understand – then at least grasp their significance. I can write the paragraphs above because of the clarity with which The Origin of the Universe explained it.
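(A back-of-envelope sketch of mine, not Barrow's, of where that '14 billion years' figure comes from: Hubble's law says a galaxy's recession velocity is proportional to its distance, v = H0 × d, so if you run the expansion backwards at a constant rate, everything was on top of everything else roughly 1/H0 ago:

# Rough age of the universe as 1/H0, ignoring how gravity and dark energy
# have changed the expansion rate over time.
H0 = 70.0                        # Hubble constant, km/s per megaparsec (approximate)
km_per_Mpc = 3.086e19            # kilometres in one megaparsec
H0_per_second = H0 / km_per_Mpc  # convert to 1/seconds
age_seconds = 1.0 / H0_per_second
age_years = age_seconds / (3600 * 24 * 365.25)
print(f"{age_years / 1e9:.1f} billion years")  # prints ~14.0

The real calculation has to track how the expansion rate has changed, but the crude estimate lands remarkably close to the quoted figure.)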
In The Book of Universes, on the other hand, Barrow's aim is much more comprehensive and digressive. He is setting out to list and describe every single model and theory of the universe which has been created in the past century. He introduces the description of each model with a thumbnail sketch of its inventor. This ought to help, but it doesn't, because the inventors generally turn out to be polymaths who also made major contributions to all kinds of other areas of science. Being told a list of Paul Dirac's other major contributions to 20th century science is not a good way of preparing your mind to then try and understand his one intervention on universe-modelling (which, in any case, turned out to be impractical and led nowhere).

Another drawback of the 'comprehensive' approach is that a lot of these models have been rejected, or barely saw the light of day before being disproved, or – more complicatedly – were initially disproved but contained aspects or insights which turned out to be useful forty years later and were subsequently recycled into revised models. It gets a bit challenging to try and hold all this in your mind. In The Origin of the Universe Barrow sticks to what you could call the canonical line of models, each of which represented the central line of speculation, even if some ended up being disproved (like Hoyle and Gold and Bondi's model of the steady state universe). Given that all of this material is pretty mind-bending, and some of it can only be described in advanced mathematical formulae, less is definitely more. I found The Book of Universes simply had too many universes, explained too quickly, and lost amid a lot of biographical bumf summarising people's careers, or who knew whom, or who contributed to whose theory. Too much information.

One last drawback of the comprehensive approach is that quite important points – which are given space to breathe and sink in in The Origin of the Universe – are lost in the flood of facts in The Book of Universes. I'm particularly thinking of Einstein's notion of the cosmological constant, which was not strictly necessary to his formulations of relativity, but which Einstein invented and put into them solely in order to counteract the force of gravity and ensure his equations reflected the commonly held view that the universe was in a permanent steady state. This was a mistake, and Einstein is often quoted as admitting it was the biggest mistake of his career. In 1965 scientists discovered the cosmic background radiation, which proved that the universe began in an inconceivably intense explosion, that the universe was therefore expanding, and that the explosive, outward-propelling force of this bang was enough to counteract the contracting force of the gravity of all the matter in the universe without any need for a hypothetical cosmological constant. I understand this (if I do) because in The Origin of the Universe it is given prominence and carefully explained. By contrast, in The Book of Universes it was almost lost in the flood of information, and it was only because I'd read the earlier book that I grasped its importance.
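(For the record, and in my notation rather than Barrow's, the balancing act is easy to state. The Friedmann equation gives the expansion rate H of a universe with matter density ρ, curvature k and cosmological constant Λ:

$H^2 \;=\; \frac{8\pi G}{3}\rho \;-\; \frac{kc^2}{a^2} \;+\; \frac{\Lambda c^2}{3}$

Einstein's static universe is the special case where Λ is tuned so that the repulsive third term exactly cancels the pull of the first two and H stays zero. The balance turns out to be unstable – nudge the universe slightly and it starts expanding or contracting – which is part of why the model was abandoned once the expansion was discovered.)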
The Book of Universes

Barrow gives a brisk recap of cosmology from the Sumerians and Egyptians, through the ancient Greeks' establishment of the system named after Ptolemy, in which the earth is at the centre of the universe, on through the revisions of Copernicus and Galileo which placed the sun firmly at the centre of the solar system, on to the three laws of Isaac Newton which showed how the forces which govern the solar system (and more distant bodies) operate.

There is then a passage on the models of the universe generated by the growing understanding of heat and energy acquired by Victorian physicists, which led to one of the most powerful models of the universe, the 'heat death' model popularised by Lord Kelvin in the 1850s, in which, in the far future, the universe evolves to a state of complete homogeneity, where no region is hotter than any other and therefore there is no thermodynamic activity, no life, just a low buzzing noise everywhere. But all this happens in the first 50 pages and is just preliminary throat-clearing before Barrow gets to the weird and wonderful worlds envisioned by modern cosmology, i.e. from Einstein onwards.

In some of these models the universe expands indefinitely; in others it will reach a peak expansion before contracting back towards a Big Crunch. Some models envision a static universe; in others it rotates like a top, while other models are totally chaotic without any rules or order. Some universes are smooth and regular, others characterised by clumps and lumps. Some are shaken by cosmic tides, some oscillate. Some allow time travel into the past, while others threaten to allow an infinite number of things to happen in a finite period. Some end with another big bang, some don't end at all. And in only a few of them do the conditions arise for intelligent life to evolve.

The Book of Universes then goes on, in 12 chapters, to discuss – by my count – getting on for a hundred types or models of hypothetical universes, as conceived and worked out by mathematicians, physicists, astrophysicists and cosmologists from Einstein's time right up to the date of publication, 2011.
A list of names

Barrow namechecks and briefly explains the models of the universe developed by the following (I am undertaking this exercise partly to remind myself of everyone mentioned, partly to indicate to you the overwhelming number of names and ideas the reader is bombarded with):

• Aristotle
• Ptolemy
• Copernicus
• Giovanni Riccioli
• Tycho Brahe
• Isaac Newton
• Thomas Wright (1711-86)
• Immanuel Kant (1724-1804)
• Pierre Laplace (1749-1827) devised what became the standard Victorian model of the universe
• Alfred Russel Wallace (1823-1913) discussed the physical conditions of a universe necessary for life to evolve in it
• Lord Kelvin (1824-1907) material falls into the central region of the universe and coalesces with other stars to maintain power output over immense periods
• Rudolf Clausius (1822-88) coined the word 'entropy' in 1865 to describe the inevitable progress from ordered to disordered states
• William Jevons (1835-82) believed the second law of thermodynamics implies that the universe must have had a beginning
• Pierre Duhem (1861-1916) Catholic physicist, accepted the notion of entropy but denied that it implied the universe ever had a beginning
• Samuel Tolver Preston (1844-1917) English engineer and physicist, suggested the universe is so vast that different 'patches' might experience different rates of entropy
• Ludwig Boltzmann and Ernst Zermelo suggested the universe is infinite and is already in a state of thermal equilibrium, but just with random fluctuations away from uniformity, and our galaxy is one of those fluctuations
• Albert Einstein (1879-1955) his discoveries were based on insights, not maths: thus he saw that the problem with Newtonian physics is that it privileges an objective outside observer of all the events in the universe; one of Einstein's insights was to abolish the idea of a privileged point of view and emphasise that everyone is involved in the universe's dynamic interactions; thus gravity does not pass through a clear, fixed thing called space; gravity bends space. The American physicist John Wheeler once encapsulated Einstein's theory in two sentences: Matter tells space how to curve. Space tells matter how to move. (quoted on page 52)
• Marcel Grossmann provided the mathematical underpinning for Einstein's insights
• Willem de Sitter (1872-1934) inventor of, among other things, the de Sitter effect, which represents the effect of the curvature of spacetime, as predicted by general relativity, on a vector carried along with an orbiting body – de Sitter's universe gets bigger and bigger for ever but never had a zero point; but then de Sitter's model contains no matter
• Vesto Slipher (1875-1969) astronomer who discovered the redshifting of distant galaxies in 1912, the first ever empirical evidence for the expansion of the universe
• Alexander Friedmann (1888-1925) Russian mathematician who produced purely mathematical solutions to Einstein's equations, devising models where the universe started out of nothing and expanded a) fast enough to escape the gravity exerted by its own contents and so will expand forever or b) will eventually succumb to the gravity of its own contents, stop expanding and contract back towards a big crunch.
He also speculated that this process (expansion and contraction) could happen an infinite number of times, creating a cyclic series of bangs, expansions and contractions, then another bang, etc.

[Graphic: the oscillating or cyclic universe (from Discovery magazine)]

• Arthur Eddington (1882-1944) the most distinguished astrophysicist of the 1920s
• George Lemaître (1894-1966) first to combine an expanding-universe interpretation of Einstein's equations with the latest data about redshifting, and to show that the universe of Einstein's equations would be very sensitive to small changes – his model is close to Eddington's, so that it is often called the Eddington-Lemaître universe: it is expanding, curved and finite but doesn't have a beginning
• Edwin Hubble (1889-1953) provided solid evidence of the redshifting (moving away) of distant galaxies, a main plank in the whole theory of a big bang, inventor of Hubble's Law:
• Objects observed in deep space – extragalactic space, 10 megaparsecs (Mpc) or more – are found to have a redshift, interpreted as a relative velocity away from Earth
• This Doppler-shift-measured velocity of various galaxies receding from the Earth is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away
• Richard Tolman (1881-1948) took Friedmann's idea of an oscillating universe and showed that the increased entropy of each universe would accumulate, meaning that each successive 'bounce' would get bigger; he also investigated what 'lumpy' universes would look like where matter is not evenly spaced but clumped: some parts of the universe might reach a maximum and start contracting while others wouldn't; some parts might have had a big bang origin, others might not have
• Arthur Milne (1896-1950) showed that the tension between the outward exploding force posited by Einstein's cosmological constant and the gravitational contraction could actually be described using just Newtonian mathematics: 'Milne's universe is the simplest possible universe with the assumption that the universe is uniform in space and isotropic', a 'rational' and consistent geometry of space – Milne labelled the assumption of Einsteinian physics that the universe is the same in all places the Cosmological Principle
• Edmund Fournier d'Albe (1868-1933) posited that the universe has a hierarchical structure from atoms to the solar system and beyond
• Carl Charlier (1862-1934) introduced a mathematical description of a never-ending hierarchy of clusters
• Karl Schwarzschild (1873-1916) suggested that the geometry of the universe is not flat as Euclid had taught, but might be curved as in the non-Euclidean geometries developed by the mathematicians Riemann, Gauss, Bolyai and Lobachevski in the early 19th century
• Franz Selety (1893-1933) devised a model for an infinitely large hierarchical universe which contained an infinite mass of clustered stars filling the whole of space, yet with a zero average density and no special centre
• Edward Kasner (1878-1955) a mathematician interested solely in finding mathematical solutions to Einstein's equations, Kasner came up with a new idea: that the universe might expand at different rates in different directions – in some parts it might shrink, changing shape to look like a vast pancake
• Paul Dirac (1902-84) developed a Large Number Hypothesis, that the really large numbers which are taken as constants in Einstein's and other astrophysics equations are linked at a deep undiscovered level, among other things abandoning the
idea that gravity is a constant – soon disproved
• Pascual Jordan (1902-80) suggested a slight variation of Einstein's theory which accounted for a varying constant of gravitation as though it were a new source of energy and gravitation
• Robert Dicke (1916-97) developed an alternative theory of gravitation
• Nathan Rosen (1909-95) young assistant to Einstein in America, with whom he authored a paper in 1936 describing a universe which expands but has the symmetry of a cylinder, a theory which predicted the universe would be washed over by gravitational waves
• Ernst Straus (1922-83) another young assistant to Einstein, with whom he developed a new model, an expanding universe like those of Friedmann and Lemaître but which had spherical holes removed like the bubbles in an Aero, each hole with a mass at its centre equal to the matter which had been excavated to create the hole
• Eugene Lifshitz (1915-85) in 1946 showed that very small differences in the uniformity of matter in the early universe would tend to increase, an explanation of how the clumpy universe we live in evolved from an almost but not quite uniform distribution of matter – as we have come to understand that something like this did happen, Lifshitz's calculations have come to be seen as a landmark
• Kurt Gödel (1906-1978) posited a rotating universe which didn't expand and, in theory, permitted time travel!
• Hermann Bondi, Thomas Gold and Fred Hoyle collaborated on the steady state theory of a universe which is growing but remains essentially the same, fed by the creation of new matter out of nothing
• George Gamow (1904-68)
• Ralph Alpher and Robert Herman in 1948 showed that the ratio of the matter density of the universe to the cube of the temperature of any heat radiation present from its hot beginning is constant if the expansion is uniform and isotropic – they calculated the current radiation temperature should be 5 degrees Kelvin – 'one of the most momentous predictions ever made in science'
• Abraham Taub (1911-99) made a study of all the universes that are the same everywhere in space but can expand at different rates in different directions
• Charles Misner (b.1932) suggested 'chaotic cosmology', i.e. that no matter how chaotic the starting conditions, Einstein's equations prove that any universe will inevitably become homogeneous and isotropic – disproved by the smoothness of the background radiation. Misner then suggested the Mixmaster universe, the most complicated interpretation of the Einstein equations, in which the universe expands at different rates in different directions and the gravitational waves generated by one direction interfere with all the others, with infinite complexity
• Hannes Alfvén devised a matter-antimatter cosmology
• Alan Guth (b.1947) in 1981 proposed a theory of 'inflation', that milliseconds after the big bang the universe underwent a swift process of hyper-expansion: inflation answers at a stroke a number of technical problems prompted by conventional big bang theory; but it had the unforeseen implication that, though our region is smooth, parts of the universe beyond our light horizon might have grown from other inflated patches and have completely different qualities
• Andrei Linde (b.1948) extrapolated that the inflationary regions might create sub-regions in which further inflation might take place, so that a potentially infinite series of new universes spawn new universes in an 'endlessly bifurcating multiverse'.
We happen to be living in one of these bubbles, which has lasted long enough for the heavy elements and therefore life to develop; who knows what's happening in the other bubbles?
• Ted Harrison (1919-2007) British cosmologist, speculated that super-intelligent life forms might be able to develop and control baby universes, guiding the process of inflation so as to promote the constants required for just the right speed of growth to allow stars, planets and life forms to evolve. Maybe they've done it already. Maybe we are the result of their experiments.
• Nick Bostrom (b.1973) Swedish philosopher: if universes can be created and developed like this then they will proliferate until the odds are that we are living in a 'created' universe and, maybe, are ourselves simulations in a kind of multiverse computer simulation

Although the arrival of Einstein and his theory of relativity marks a decisive break with the tradition of Newtonian physics, and comes at page 47 of this 300-page book, it seemed to me the really decisive break comes on page 198 with the publication of Alan Guth's theory of inflation. Up till the Guth breakthrough, astrophysicists and astronomers appear to have focused their energy on the universe we inhabit. There were theoretical digressions into fantasies about other worlds and alternative universes, but they appear to have been personal foibles and everyone agreed they were diversions from the main story. However, the idea of inflation, while it solved half a dozen problems caused by the idea of a big bang, seems to have spawned a literally fantastic series of theories and speculations.

Throughout the twentieth century, cosmologists grew used to studying the different types of universe that emerged from Einstein's equations, but they expected that some special principle, or starting state, would pick out one that best described the actual universe. Now, unexpectedly, we find that there might be room for many, perhaps all, of these possible universes somewhere in the multiverse. (p.254)

This is a really massive shift and it is marked by a shift in the tone and approach of Barrow's book. Up till this point it had jogged along at a brisk rate, namechecking a steady stream of mathematicians and physicists and explaining how their successive models of the universe followed on from or varied from each other. Now this procedure comes to a grinding halt while Barrow enters a realm of speculation. He discusses the notion that the universe we live in might be a fake, evolved from a long sequence of fakes, created and moulded by super-intelligences for their own purposes. Each of us might be mannequins acting out experiments, observed by these super-intelligences. In which case what value would human life have? What would be the definition of free will? Maybe the discrepancies we observe in some of the laws of the universe have been planted there as clues by higher intelligences? Or maybe, over vast periods of time, and countless iterations of new universes, the laws they first created for this universe where living intelligences could evolve have slipped, revealing the fact that the whole thing is a facade. These super-intelligences would, of course, have computers and technology far in advance of ours etc. I felt like I had wandered into a prose version of The Matrix and, indeed, Barrow apologises for straying into areas normally associated with science fiction (p.241).

Imagine living in a universe where nothing is original. Everything is a fake. No ideas are ever new.
There is no novelty, no originality. Nothing is ever done for the first time and nothing will ever be done for the last time… (p.244)

And so on. During this 15-page-long fantasy the handy sequence of physicists comes to an end as he introduces us to contemporary philosophers and ethicists who are paid to think about the problem of being a simulated being inside a simulated reality. Take Robin Hanson (b.1959), a research associate at the Future of Humanity Institute of Oxford University, who, apparently, advises us all that we ought to behave so as to prolong our existence in the simulation or, hopefully, ensure we get recreated in future iterations of the simulation. Are these people mad? I felt like I'd been transported into an episode of The Outer Limits, or was back with my schoolfriend Paul, lying in a summer field getting stoned and wondering whether dandelions were a form of alien life that were just biding their time till they could take over the world. Why not, man?

I suppose Barrow has to include this material, and explain the nature of the anthropic principle (p.250), and go on to a digression about the search for extra-terrestrial life (p.248), and discuss the 'replication paradox' (in an infinite universe there will be infinite copies of you and me in which we perform an infinite number of variations on our lives: what would happen if you came face to face with one of your 'copies'? p.246) – because these are, in their way, theories – if very fantastical theories – about the nature of the universe, and his stated aim is to be completely comprehensive.

The anthropic principle: observations of the universe must be compatible with the conscious and intelligent life that observes it. The universe is the way it is because it has to be the way it is in order for life forms like us to evolve enough to understand it.

Still, it was a relief when he returned from vague and diffuse philosophical speculation to the more solid territory of specific physical theories for the last forty or so pages of the book. But it was very noticeable that, as he came up to date, the theories were less and less attached to individuals: modern research is carried out by large groups. And increasingly he is describing the swirl of ideas in which cosmologists work, which often don't have or need specific names attached. And this change is denoted, in the texture of the prose, by an increase in the passive voice, the voice in which science papers are written: 'it was observed that…', 'it was expected that…', and so on.

• Edward Tryon (b.1940) American particle physicist, speculated that the entire universe might be a virtual fluctuation from the quantum vacuum, governed by the Heisenberg Uncertainty Principle that limits our simultaneous knowledge of the position and momentum, or the time of occurrence and energy, of anything in Nature.
• George Ellis (b.1939) created a catalogue of 'topologies' or shapes which the universe might have

• Dmitri Sokolov and Victor Shvartsman in 1974 worked out what the practical results would be for astronomers if we lived in a strangely shaped universe, for example a vast doughnut shape

• Yakov Zeldovich and Andrei Starobinsky in 1984 further explored the likelihood of various types of 'wraparound' universes, predicting the fluctuations in the cosmic background radiation which might confirm such a shape

• 1967 the Wheeler–DeWitt equation – a first attempt to combine Einstein's equations of general relativity with the Schrödinger equation that describes how the quantum wave function changes with space and time

• the 'no boundary' proposal – in 1982 Stephen Hawking and James Hartle used 'an elegant formulation of quantum mechanics introduced by Richard Feynman' to calculate the probability that the universe would be found to be in a particular state. What is interesting is that in this theory time is not important; time is a quality that emerges only when the universe is big enough for quantum effects to become negligible; the universe doesn't technically have a beginning because the nearer you approach it, time disappears, becoming part of four-dimensional space. This 'no boundary' state is the centrepiece of Hawking's bestselling book A Brief History of Time (1988). According to Barrow, the Hartle-Hawking model was eventually shown to lead to a universe that was infinitely large and empty, i.e. not ours.

• In 1986 Barrow proposed a universe with a past but no beginning because all the paths through time and space would be very large closed loops

• In 1997 Richard Gott and Li-Xin Li took the eternal inflationary universe postulated above and speculated that some of the branches loop back on themselves, giving birth to themselves.

The self-creating universe of J. Richard Gott III and Li-Xin Li

• In 2001 Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok proposed a variation of the cyclic universe which incorporated string theory and which they called the 'ekpyrotic' universe, ekpyrotic denoting the fiery flame into which each universe plunges only to be born again in a big bang. The new idea they introduced is that two three-dimensional universes may approach each other by moving through the additional dimensions posited by string theory. When they collide they set off another big bang. These 3-D universes are called 'braneworlds', short for membrane, because they will be very thin

• If a universe existing in a 'bubble' in another dimension 'close' to ours had ever impacted on our universe, some calculations indicate it would leave marks in the cosmic background radiation, a stripey effect.

• In 1998 Andy Albrecht, João Magueijo and Barrow explored what might have happened if the speed of light, the most famous of cosmological constants, had in fact decreased in the first few milliseconds after the bang. There is now an entire suite of theories known as 'Varying Speed of Light' cosmologies.

• Modern 'String Theory' only functions if it assumes quite a few more dimensions than the three we are used to. In fact some string theories require there to be more than one dimension of time. If there are really ten or eleven dimensions then, possibly, the 'constants' all physicists have taken for granted are only partial aspects of constants which exist in higher dimensions. Possibly, they might change, effectively undermining all of physics.
• The Lambda-CDM model is a cosmological model in which the universe contains three major components:

1. a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy;
2. the postulated cold dark matter (abbreviated CDM);
3. ordinary matter.

It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:

• the existence and structure of the cosmic microwave background
• the large-scale structure in the distribution of galaxies
• the abundances of hydrogen (including deuterium), helium, and lithium
• the accelerating expansion of the universe observed in the light from distant galaxies and supernovae

He ends with a summary of our existing knowledge, and indicates the deep puzzles which remain, not least the true nature of the 'dark matter' which is required to make sense of the expanding universe model. And he ends the whole book with a pithy soundbite. Speaking about the ongoing acceptance of models which posit a 'multiverse', in which all manner of other universes may be in existence, but beyond the horizon of what we can see, he says:

Copernicus taught us that our planet was not at the centre of the universe. Now we may have to accept that even our universe is not at the centre of the Universe.

The Last Three Minutes by Paul Davies (1994)

The telescope is also a timescope. (p.127)

1. Doomsday
2. The Dying Universe
3. The First Three Minutes
4. Stardoom – He explains gravitational-wave emission.
5. Nightfall – John Wheeler coined the term 'black hole'.
6. Weighing the Universe
7. Forever Is A Long Time
8. Life In the Slow Lane
9. Life In the Fast Lane
10. Sudden Death – and rebirth
11. Worlds Without End?

The Origin of the Universe by John D. Barrow (1994)

In the beginning, the universe was an inferno of radiation, too hot for any atoms to survive. In the first few minutes, it cooled enough for the nuclei of the lighter elements to form. Only millions of years later would the cosmos be cool enough for whole atoms to appear, followed soon by simple molecules, and after billions of years by the complex sequence of events that saw the condensation of material into stars and galaxies. Then, with the appearance of stable planetary environments, the complicated products of biochemistry were nurtured, by processes we still do not understand. (The Origin of the Universe, p.xi)

In the late 1980s and into the 1990s science writing became fashionable and popular. A new generation of science writers poured forth a wave of books popularising all aspects of science. The ones I remember fell into two broad categories: evolution and astrophysics. Authors such as Stephen Jay Gould and Edward O. Wilson, Richard Dawkins and Steve Jones (evolution and genetics) and Paul Davies, John Gribbin, John Polkinghorne and, most famously of all, Stephen Hawking (cosmology and astrophysics) not only wrote best-selling books but cropped up as guests on radio shows and even presented their own TV series. Early in the 1990s the literary agent John Brockman created a series titled Science Masters in which he commissioned experts across a wide range of the sciences to write short, jargon-free and maths-light introductions to their fields.
This is astrophysicist John D. Barrow's contribution to the series, a short, clear and mind-blowing introduction to current theory about how our universe began.

Billions

It is now thought the universe is about 13.7 billion years old, the solar system is 4.57 billion years old and the earth is 4.54 billion years old. The oldest surface rocks anywhere on earth are in northwestern Canada near the Great Slave Lake, and are 4.03 billion years old. The oldest fossilised bacteria date from 3.48 billion years ago.

Visible universe

The visible universe is the part of the universe which light has had time to cross and reach us. If the universe is indeed 13.7 billion years old, and nothing can travel faster than the speed of light (299,792,458 metres per second), then there is, in effect, a 'horizon' to what we can see: we can only see the part of the universe whose light has had at most 13.7 billion years to reach us. Whether there is any universe beyond our light horizon, and what it looks like, is something we can only speculate about.

Steady state

Until the early 20th century philosophers and scientists thought the universe was fixed, static and stable. Even Einstein put into his theory of relativity a factor he named 'the cosmological constant', which wasn't strictly needed, solely in order to make the universe appear static and so conform to contemporary thinking. The idea of this constant was to counteract the attractive force of gravity, in order to ensure his steady state version of the universe didn't collapse into a big crunch.

Alexander Friedmann

It was a young mathematician, Alexander Friedmann, who looked closely at Einstein's formulae and showed that the cosmological constant was not necessary, not if the universe was expanding; in this case, no hypothetical repelling force would be needed, just the sheer speed of outward expansion. Einstein eventually conceded that including the constant in the formulae of relativity had been a major mistake.

Edwin Hubble

In what Barrow calls 'the greatest discovery of twentieth century science', the American astronomer Edwin Hubble in the 1920s discovered that distant galaxies are moving away from us, and the further away they are, the faster they are moving, which became known as Hubble's Law. He established this by noticing the 'red-shifting' of frequencies denoting detectable elements in these galaxies, i.e. their light frequencies had been shifted downwards, as light (and sound, and all other waves) are when the source is moving away from the observer.

Critical divide

An argument against the steady-state theory of the universe is that, over time, the gravity of all the objects in it would pull everything together and it would all collapse into one massive clump. Only an initial throwing out of material could counteract the effect of all that gravity. So how fast is the universe expanding? Imagine a rate, x. Below that speed, the effect of gravity will eventually overcome the outward acceleration, the universe will slow down, stop expanding and start to contract. Significantly above this speed, x, the universe would continue flying apart in all directions so quickly that gas clouds, stars, galaxies and planets would never be formed. As far as we know, the actual expansion rate of the universe hovers just around this rate, x – just fast enough to prevent the universe from collapsing, but not so fast that it becomes impossible for matter to form. Just the right speed to create the kind of universe we see around us.
The name for this threshold is the critical divide.

Starstuff

Stars are condensations of matter large enough to create nuclear reactions at their centre. These reactions burn hydrogen into helium for a long, sedate period, as our sun is doing. At the end of their lives stars undergo a crisis, an explosive period of rapid change during which helium is transformed into carbon, nitrogen, oxygen, silicon, phosphorus and many of the other, heavier elements. When the ailing star finally explodes as a supernova these elements disperse into space and ultimately find their way into clouds of gas which condense as planets. Thus every plant, animal and person alive on earth is made out of chemical elements forged in the unthinkable heat of dying stars – which is what Joni Mitchell meant when she sang, 'We are stardust'.

Heat death

A theory that the universe will continue expanding and matter will become so attenuated that there are no heat or dynamic inequalities left to fuel thermal reactions, i.e. matter ends up smoothly spread throughout space with no reactions happening anywhere: thermodynamic equilibrium reached at a uniform, very low temperature. The idea was formulated by William Thomson, Lord Kelvin, in the 1850s, who extrapolated from Victorian knowledge of mechanics and heat. 170 years later, updated versions of heat death remain a viable theory for the very long-term future of the universe.

Steady state

The 'steady state' theory of the universe was developed by astrophysicists Thomas Gold, Hermann Bondi and Fred Hoyle in 1948. They theorised that, although the universe appeared to be expanding, it had always existed, the expansion being caused by a steady rate of creation of new matter. This theory was disproved in the mid-1960s by the confirmation of background radiation.

Background radiation theorised

In the 1940s George Gamow and his assistants Alpher and Herman theorised that, if the universe began in a hot dense state way back, there should be evidence, namely a constant layer of background radiation everywhere which, they calculated, would be 5 degrees above absolute zero.

Background radiation proved

In the 1960s researchers at Bell Laboratories, calibrating a sensitive radio antenna, noticed a constant background interference to their efforts which seemed to be coming from every direction of the sky. A team from Princeton interpreted this as the expected background radiation and measured it at about 2.7 degrees Kelvin. It is called 'cosmic microwave background radiation' and is one of the strong proofs for the Big Bang theory. The uniformity of the background radiation was confirmed by observations from NASA's Cosmic Background Explorer satellite in the early 1990s.

Empty universe

There is very little material in the universe. If all the stars and galaxies in the universe were smoothed out into a sea of atoms, there would only be about one atom per cubic metre of space.

Inflation

This is a theory developed in 1979 by theoretical physicist Alan Guth – the idea is that the universe didn't arise from a singularity which exploded and grew at a steady rate but instead, in the first milliseconds, underwent a period of hyper-growth, which then calmed back down to 'normal' expansion.
The theory has been elaborated into numerous variants but is widely accepted because it explains many aspects of the universe we see today – from its large-scale structure to the way minute quantum fluctuations in this initial microscopic inflationary region, once magnified to cosmic size, became the seeds for the growth of structure in the Universe. Inflation is currently thought to have taken place from 10⁻³⁶ seconds after the conjectured Big Bang singularity to sometime between 10⁻³³ and 10⁻³² seconds after it.

Chaotic inflationary universe

Proposed by Soviet physicist Andrei Linde in 1983, this is the idea that multiple distinct sections of the very early universe might have experienced inflation at different rates and so have produced a kind of cluster of universes, like bubbles in a bubble bath, except that these bubbles would have to be at least nine billion light years in size in order to produce stable stars. Possibly the conditions in each of the universes created by chaotic inflation could be quite different.

Eternal inflation

A logical extension of chaotic inflation is that you not only have multiple regions which undergo inflation at the same time, but you might have sub-regions which undergo inflation at different times – possibly one after the other. In other words, maybe there never was a beginning, but this process of successive creations and hyper-inflations has been going on forever and is still going on but beyond our light horizon (which, as mentioned above, only reaches to about 13.7 billion light years away).

Time

Is time a fixed and static quality which creates a kind of theatre, an external frame of reference, in which the events of the universe take place, as in the Newtonian view? Or, as per Einstein, is time itself part of the universe, inseparable from the stuff of the universe, able to be bent and distorted by forces in the universe? This is why Einstein used the expression 'spacetime'.

The quantum universe

Right back at the very beginning, at 10⁻⁴³ seconds, the size of the visible universe was smaller than its quantum wavelength — so its entire contents would have been subject to the uncertainty which is the characteristic of quantum physics. Time is affected by a quantum view of the big bang because, when the universe was still shrunk to a microscopic size, the quantum uncertainty which applied to it might be interpreted as meaning there was no time: that time only 'crystallised' out as a separate 'dimension' once the universe had expanded to a size where quantum uncertainty no longer dominated. Some critics of the big bang theory ask, 'What was there before the big bang?' to which exponents conventionally reply that there was no 'before'. Time as we experience it ceased to exist and became part of the initial hyper-energy field. This quantum interpretation suggests that there was in fact no 'big bang' because there was literally no time when it happened. Traditional visualisations of the big bang show an inverted cone: at the top is the big universe we live in, and as you go back in time it narrows to a point – the starting point. Imagine, instead, something more like a round-bottomed sack: there's a general expansion upwards and outwards but if you penetrate back to the bottom of the sack there is no 'start' point. This theory was most fully worked out by Stephen Hawking and James Hartle.
Wormholes

The book ends with speculations about the possibility that 'wormholes' existed in the first few milliseconds, tubes connecting otherwise distant parts of the exploding ball of universe. I understood the pictures of these but couldn't understand the problems in the quantum theory of the origin which they set out to solve.

And the final section emphasises that everything cosmologists work on relates to the visible universe. It may be that the special conditions of the visible universe which we know about are only one set of starting conditions, and that quite different conditions apply in other areas of the universe beyond our knowledge, or in other universes. We will never know.

Barrow is an extremely clear and patient explainer. He avoids formulae. Between his prose and the many illustrations I understood most of what he was trying to say, though a number of concepts eluded me. But the ultimate thing that comes over is his scepticism. Barrow summarises recent attempts to define laws governing the conditions prevailing at the start of the universe by briefly describing the theories of James Hartle and Stephen Hawking, Alex Vilenkin, and Roger Penrose. But he does so only to go on to emphasise that they are all 'highly speculative'. They are 'ideas for ideas' (p.135). By the end of the book you get the idea that a very great deal of cosmology is either speculative, or highly speculative. But then halfway through he says it's a distinguishing characteristic of physicists that they can't stop tinkering – with data, with theories, with ideas and speculations. So beyond the facts and the details of the theories he describes, it is insight into this quality in the discipline itself, this restless exploration of new ideas and speculations relating to some of the hardest-to-think-about areas of human knowledge, which is the final flavour the reader is left with.
Quantum field theory

In theoretical physics, quantum field theory (QFT) is a theoretical framework for constructing quantum mechanical models of subatomic particles in particle physics and quasiparticles in condensed matter physics. A QFT treats particles as excited states of an underlying physical field, so these are called field quanta.

In QFT, quantum mechanical interactions between particles are described by interaction terms between the corresponding underlying fields. QFT interaction terms are similar in spirit to those between charges with electric and magnetic fields in Maxwell's equations. However, unlike the classical fields of Maxwell's theory, fields in QFT generally exist in quantum superpositions of states and are subject to the laws of quantum mechanics.

Quantum mechanical systems have a fixed number of particles, with each particle having a finite number of degrees of freedom. In contrast, the excited states of a QFT can represent any number of particles. This makes quantum field theories especially useful for describing systems where the particle count may change over time, a crucial feature of relativistic dynamics. Because the fields are continuous quantities over space, there exist excited states with arbitrarily large numbers of particles in them, providing QFT systems with an effectively infinite number of degrees of freedom. Infinite degrees of freedom can easily lead to divergences of calculated quantities (i.e., the quantities become infinite). Techniques such as renormalization of QFT parameters or discretization of spacetime, as in lattice QCD, are often used to avoid such infinities so as to yield physically meaningful results.

Most theories in standard particle physics are formulated as relativistic quantum field theories, such as QED, QCD, and the Standard Model. QED, the quantum field-theoretic description of the electromagnetic field, approximately reproduces Maxwell's theory of electrodynamics in the low-energy limit, with small non-linear corrections to the Maxwell equations required due to virtual electron–positron pairs.

In the perturbative approach to quantum field theory, the full field interaction terms are approximated as a perturbative expansion in the number of particles involved. Each term in the expansion can be thought of as forces between particles being mediated by other particles. In QED, the electromagnetic force between two electrons is caused by an exchange of photons. Similarly, intermediate vector bosons mediate the weak force and gluons mediate the strong force in QCD. The notion of a force-mediating particle comes from perturbation theory, and does not make sense in the context of non-perturbative approaches to QFT, such as with bound states.

The gravitational field and the electromagnetic field are the only two fundamental fields in nature that have infinite range and a corresponding classical low-energy limit, which greatly diminishes and hides their "particle-like" excitations. Albert Einstein, in 1905, attributed "particle-like" and discrete exchanges of momenta and energy, characteristic of "field quanta", to the electromagnetic field.
Originally, his principal motivation was to explain the thermodynamics of radiation. Although the photoelectric effect and Compton scattering strongly suggest the existence of the photon, it is now understood that they can be explained without invoking a quantum electromagnetic field; therefore, a more definitive proof of the quantum nature of radiation is now taken up into modern quantum optics as in the antibunching effect.[2]

There is currently no complete quantum theory of the remaining fundamental force, gravity. Many of the proposed theories to describe gravity as a QFT postulate the existence of a graviton particle that mediates the gravitational force. Presumably, the as yet unknown correct quantum field-theoretic treatment of the gravitational field will behave like Einstein's general theory of relativity in the low-energy limit. Quantum field theory of the fundamental forces itself has been postulated to be the low-energy effective field theory limit of a more fundamental theory such as superstring theory.

History

The early development of the field involved Dirac, Fock, Pauli, Heisenberg and Bogolyubov. This phase of development culminated with the construction of the theory of quantum electrodynamics in the 1950s.

Gauge theory

Gauge theory was formulated and quantized, leading to the unification of forces embodied in the standard model of particle physics. This effort started in the 1950s with the work of Yang and Mills, was carried on by Martinus Veltman and a host of others during the 1960s and completed by the 1970s through the work of Gerard 't Hooft, Frank Wilczek, David Gross and David Politzer.

Grand synthesis

Parallel developments in the understanding of phase transitions in condensed matter physics led to the study of the renormalization group. This in turn led to the grand synthesis of theoretical physics, which unified theories of particle and condensed matter physics through quantum field theory. This involved the work of Michael Fisher and Leo Kadanoff in the 1970s, which led to the seminal reformulation of quantum field theory by Kenneth G. Wilson.

Classical and quantum fields

A classical field is a function defined over some region of space and time.[3] Two physical phenomena which are described by classical fields are Newtonian gravitation, described by the Newtonian gravitational field g(x, t), and classical electromagnetism, described by the electric and magnetic fields E(x, t) and B(x, t). Because such fields can in principle take on distinct values at each point in space, they are said to have infinite degrees of freedom.[3]

Classical field theory does not, however, account for the quantum-mechanical aspects of such physical phenomena. For instance, it is known from quantum mechanics that certain aspects of electromagnetism involve discrete particles—photons—rather than continuous fields. The business of quantum field theory is to write down a field that is, like a classical field, a function defined over space and time, but which also accommodates the observations of quantum mechanics. This is a quantum field. It is not immediately clear how to write down such a quantum field, since quantum mechanics has a structure very unlike a field theory.
In its most general formulation, quantum mechanics is a theory of abstract operators (observables) acting on an abstract state space (Hilbert space), where the observables represent physically observable quantities and the state space represents the possible states of the system under study.[4] For instance, the fundamental observables associated with the motion of a single quantum mechanical particle are the position and momentum operators \hat{x} and \hat{p}. Field theory, in contrast, treats x as a way to index the field rather than as an operator.[5]

There are two common ways of developing a quantum field: the path integral formalism and canonical quantization.[6] The latter of these is pursued in this article.

Lagrangian formalism

Quantum field theory frequently makes use of the Lagrangian formalism from classical field theory. This formalism is analogous to the Lagrangian formalism used in classical mechanics to solve for the motion of a particle under the influence of a field. In classical field theory, one writes down a Lagrangian density, \mathcal{L}, involving a field, φ(x,t), and possibly its first derivatives (∂φ/∂t and ∇φ), and then applies a field-theoretic form of the Euler–Lagrange equation. Writing coordinates (t, x) = (x^0, x^1, x^2, x^3) = x^\mu, this form of the Euler–Lagrange equation is[3]

\frac{\partial}{\partial x^\mu} \left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial x^\mu)}\right] - \frac{\partial\mathcal{L}}{\partial\phi} = 0,

where a sum over μ is performed according to the rules of Einstein notation. By solving this equation, one arrives at the "equations of motion" of the field.[3] For example, if one begins with the Lagrangian density

\mathcal{L}(\phi,\nabla\phi) = -\rho(t,\mathbf{x})\,\phi(t,\mathbf{x}) - \frac{1}{8\pi G}|\nabla\phi|^2,

and then applies the Euler–Lagrange equation, one obtains the equation of motion

4\pi G \rho(t,\mathbf{x}) = \nabla^2 \phi.

This equation is Newton's law of universal gravitation, expressed in differential form in terms of the gravitational potential φ(t, x) and the mass density ρ(t, x). Despite the nomenclature, the "field" under study is the gravitational potential, φ, rather than the gravitational field, g. Similarly, when classical field theory is used to study electromagnetism, the "field" of interest is the electromagnetic four-potential (V/c, A), rather than the electric and magnetic fields E and B.

Quantum field theory uses this same Lagrangian procedure to determine the equations of motion for quantum fields. These equations of motion are then supplemented by commutation relations derived from the canonical quantization procedure described below, thereby incorporating quantum mechanical effects into the behavior of the field.

Single- and many-particle quantum mechanics

In quantum mechanics, a particle (such as an electron or proton) is described by a complex wavefunction, ψ(x, t), whose time-evolution is governed by the Schrödinger equation:

-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{x},t) + V(\mathbf{x})\psi(\mathbf{x},t) = i\hbar\frac{\partial}{\partial t}\psi(\mathbf{x},t).

For a system of N identical bosons, a physically allowed state must be symmetric under the exchange of any two particles, and can be written in terms of the single-particle states as

\sqrt{\frac{\prod_j N_j!}{N!}} \sum_{p\in S_N} |\phi_{p(1)}\rang \otimes \cdots \otimes |\phi_{p(N)} \rang,

where |\phi_i\rang are the single-particle states, N_j is the number of particles occupying state j, and the sum is taken over all possible permutations p acting on N elements. In general, this is a sum of N! (N factorial) distinct terms; \sqrt{\frac{\prod_j N_j!}{N!}} is a normalizing factor.
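As a concreteness check, this symmetrized state and its normalization can be built explicitly for small systems. The following sketch (my own Python/NumPy illustration, not part of the original article) constructs the state with one particle in |\phi_1\rang and two in |\phi_2\rang, representing the single-particle states by orthonormal basis vectors and summing over the distinct permutations:

```python
import numpy as np
from math import factorial
from itertools import permutations

d = 3                          # dimension of the single-particle space
basis = np.eye(d)              # |phi_1>, |phi_2>, |phi_3> as unit vectors
occupied = (0, 1, 1)           # one particle in |phi_1>, two in |phi_2>
N = len(occupied)

def tensor(vecs):
    """Tensor product of a list of vectors."""
    out = np.array([1.0])
    for v in vecs:
        out = np.kron(out, v)
    return out

# Distinct arrangements of which particle occupies which state
arrangements = set(permutations(occupied))
counts = [occupied.count(j) for j in range(d)]
prefactor = np.sqrt(np.prod([factorial(c) for c in counts]) / factorial(N))

state = prefactor * sum(tensor([basis[i] for i in arr]) for arr in arrangements)
print(len(arrangements))        # 3 distinct terms, matching the 1/sqrt(3) example
print(np.linalg.norm(state))    # 1.0: the prefactor normalizes the state
```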
There are several shortcomings to the above description of quantum mechanics, which are addressed by quantum field theory. First, it is unclear how to extend quantum mechanics to include the effects of special relativity.[7] Attempted replacements for the Schrödinger equation, such as the Klein–Gordon equation or the Dirac equation, have many unsatisfactory qualities; for instance, they possess energy eigenvalues that extend to –∞, so that there seems to be no easy definition of a ground state. It turns out that such inconsistencies arise from relativistic wavefunctions having a probabilistic interpretation in position space, as probability conservation is not a relativistically covariant concept. The second shortcoming, related to the first, is that in quantum mechanics there is no mechanism to describe particle creation and annihilation;[8] this is crucial for describing phenomena such as pair production, which result from the conversion between mass and energy according to the relativistic relation E = mc².

Second quantization

In this section, we will describe a method for constructing a quantum field theory called second quantization. This basically involves choosing a way to index the quantum mechanical degrees of freedom in the space of multiple identical-particle states. It is based on the Hamiltonian formulation of quantum mechanics. Several other approaches exist, such as the Feynman path integral,[9] which uses a Lagrangian formulation. For an overview of some of these approaches, see the article on quantization.

For simplicity, we will first discuss second quantization for bosons, which form perfectly symmetric quantum states. Let us denote the mutually orthogonal single-particle states which are possible in the system by |\phi_1\rang, |\phi_2\rang, |\phi_3\rang, and so on. For example, the 3-particle state with one particle in state |\phi_1\rang and two in state |\phi_2\rang is

\frac{1}{\sqrt{3}} \left[ |\phi_1\rang |\phi_2\rang |\phi_2\rang + |\phi_2\rang |\phi_1\rang |\phi_2\rang + |\phi_2\rang |\phi_2\rang |\phi_1\rang \right].

The first step in second quantization is to express such quantum states in terms of occupation numbers, by listing the number of particles occupying each of the single-particle states |\phi_1\rang, |\phi_2\rang, etc. This is simply another way of labelling the states. For instance, the above 3-particle state is denoted as |1, 2, 0, 0, 0, \dots \rangle.

An N-particle state belongs to a space of states describing systems of N particles. The next step is to combine the individual N-particle state spaces into an extended state space, known as Fock space, which can describe systems of any number of particles. This is composed of the state space of a system with no particles (the so-called vacuum state, written as |0\rang), plus the state space of a 1-particle system, plus the state space of a 2-particle system, and so forth. States describing a definite number of particles are known as Fock states: a general element of Fock space will be a linear combination of Fock states. There is a one-to-one correspondence between the occupation number representation and valid boson states in the Fock space.

At this point, the quantum mechanical system has become a quantum field in the sense we described above. The field's elementary degrees of freedom are the occupation numbers, and each occupation number is indexed by a number j indicating which of the single-particle states |\phi_1\rang, |\phi_2\rang,\dots,|\phi_j\rang,\dots it refers to:

| N_1, N_2, N_3, \dots, N_j, \dots \rang .
The properties of this quantum field can be explored by defining creation and annihilation operators, which add and subtract particles. They are analogous to ladder operators in the quantum harmonic oscillator problem, which added and subtracted energy quanta. However, these operators literally create and annihilate particles of a given quantum state. The bosonic annihilation operator a_2 and creation operator a_2^\dagger are easily defined in the occupation number representation as having the following effects: a_2 | N_1, N_2, N_3, \dots \rang = \sqrt{N_2} \mid N_1, (N_2 - 1), N_3, \dots \rang, a_2^\dagger | N_1, N_2, N_3, \dots \rang = \sqrt{N_2 + 1} \mid N_1, (N_2 + 1), N_3, \dots \rang. It can be shown that these are operators in the usual quantum mechanical sense, i.e. linear operators acting on the Fock space. Furthermore, they are indeed Hermitian conjugates, which justifies the way we have written them. They can be shown to obey the commutation relation \left[a_i , a_j \right] = 0 \quad,\quad \left[a_i^\dagger , a_j^\dagger \right] = 0 \quad,\quad \left[a_i , a_j^\dagger \right] = \delta_{ij}, where \delta stands for the Kronecker delta. These are precisely the relations obeyed by the ladder operators for an infinite set of independent quantum harmonic oscillators, one for each single-particle state. Adding or removing bosons from each state is therefore analogous to exciting or de-exciting a quantum of energy in a harmonic oscillator. Applying an annihilation operator a_k followed by its corresponding creation operator a_k^\dagger returns the number N_k of particles in the kth single-particle eigenstate: a_k^\dagger\,a_k|\dots, N_k, \dots \rangle=N_k| \dots, N_k, \dots \rangle. The combination of operators a_k^\dagger a_k is known as the number operator for the kth eigenstate. The Hamiltonian operator of the quantum field (which, through the Schrödinger equation, determines its dynamics) can be written in terms of creation and annihilation operators. For instance, for a field of free (non-interacting) bosons, the total energy of the field is found by summing the energies of the bosons in each energy eigenstate. If the kth single-particle energy eigenstate has energy E_k and there are N_k bosons in this state, then the total energy of these bosons is E_k N_k. The energy in the entire field is then a sum over k: E_\mathrm{tot} = \sum_k E_k N_k This can be turned into the Hamiltonian operator of the field by replacing N_k with the corresponding number operator, a_k^\dagger a_k. This yields H = \sum_k E_k \, a^\dagger_k \,a_k. It turns out that a different definition of creation and annihilation must be used for describing fermions. According to the Pauli exclusion principle, fermions cannot share quantum states, so their occupation numbers Ni can only take on the value 0 or 1. The fermionic annihilation operators c and creation operators c^\dagger are defined by their actions on a Fock state thus c_j | N_1, N_2, \dots, N_j = 0, \dots \rangle = 0 c_j | N_1, N_2, \dots, N_j = 1, \dots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \dots, N_j = 0, \dots \rangle c_j^\dagger | N_1, N_2, \dots, N_j = 0, \dots \rangle = (-1)^{(N_1 + \cdots + N_{j-1})} | N_1, N_2, \dots, N_j = 1, \dots \rangle c_j^\dagger | N_1, N_2, \dots, N_j = 1, \dots \rangle = 0. These obey an anticommutation relation: \left\{c_i , c_j \right\} = 0 \quad,\quad \left\{c_i^\dagger , c_j^\dagger \right\} = 0 \quad,\quad \left\{c_i , c_j^\dagger \right\} = \delta_{ij}. 
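Truncating the Fock space of a single bosonic mode at a maximum occupation turns these rules into small matrices, which makes the algebra easy to verify numerically. A minimal NumPy sketch (an illustration added here, not from the original article; the truncation causes the commutator to fail only in its last diagonal entry):

```python
import numpy as np

def annihilation(nmax):
    """Bosonic annihilation operator truncated to Fock states |0>..|nmax-1>."""
    return np.diag(np.sqrt(np.arange(1.0, nmax)), k=1)

nmax = 10
a = annihilation(nmax)
adag = a.T                                  # creation operator (real matrix)

ket = np.zeros(nmax); ket[2] = 1.0          # the Fock state |N=2>
print(a @ ket)                              # sqrt(2) |1>: a|N> = sqrt(N)|N-1>
print(adag @ ket)                           # sqrt(3) |3>: a†|N> = sqrt(N+1)|N+1>

n_op = adag @ a                             # number operator a† a
print(np.allclose(n_op @ ket, 2 * ket))     # True: it counts the bosons

# [a, a†] = 1 holds away from the truncation edge
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(nmax - 1)))   # True

# Single-mode free-boson Hamiltonian H = E a† a with E = 1
H = 1.0 * n_op
```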
Field operators

We have previously mentioned that there can be more than one way of indexing the degrees of freedom in a quantum field. Second quantization indexes the field by enumerating the single-particle quantum states. However, as we have discussed, it is more natural to think about a "field", such as the electromagnetic field, as a set of degrees of freedom indexed by position. To this end, we can define field operators that create or destroy a particle at a particular point in space. In particle physics, these operators turn out to be more convenient to work with, because they make it easier to formulate theories that satisfy the demands of relativity. The bosonic field operators obey the commutation relation

\left[\phi(\mathbf{r}) , \phi(\mathbf{r'}) \right] = 0 \quad,\quad \left[\phi^\dagger(\mathbf{r}) , \phi^\dagger(\mathbf{r'}) \right] = 0 \quad,\quad \left[\phi(\mathbf{r}) , \phi^\dagger(\mathbf{r'}) \right] = \delta^3(\mathbf{r} - \mathbf{r'}).

The field operator is not the same thing as a single-particle wavefunction. The former is an operator acting on the Fock space, and the latter is a quantum-mechanical amplitude for finding a particle in some position. However, they are closely related, and are indeed commonly denoted with the same symbol. If we have a Hamiltonian with a space representation, say

H = -\frac{\hbar^2}{2m}\sum_i \nabla_i^2 + \frac{1}{2}\sum_{i \neq j} U(|\mathbf{r}_i - \mathbf{r}_j|),

where the indices i and j run over all particles, then the field theory Hamiltonian (in the non-relativistic limit and for negligible self-interactions) is

H = - \frac{\hbar^2}{2m} \int d^3\!r \ \phi^\dagger(\mathbf{r}) \nabla^2 \phi(\mathbf{r}) + \frac{1}{2}\int\!d^3\!r \int\!d^3\!r' \; \phi^\dagger(\mathbf{r}) \phi^\dagger(\mathbf{r}') U(|\mathbf{r} - \mathbf{r}'|) \phi(\mathbf{r'}) \phi(\mathbf{r}).

Once the Hamiltonian operator is obtained as part of the canonical quantization process, the time dependence of the state is described with the Schrödinger equation, just as with other quantum theories. Alternatively, the Heisenberg picture can be used where the time dependence is in the operators rather than in the states.
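One way to make the kinetic term of this field Hamiltonian concrete is to discretize space. On a 1-D grid, -(ħ²/2m) φ†∇²φ becomes a tridiagonal "hopping" matrix between lattice sites, and restricted to the one-particle sector it reproduces the particle-in-a-box spectrum. A toy NumPy sketch with made-up grid parameters (an added illustration, not code from any QFT package):

```python
import numpy as np

hbar = m = 1.0
n, dx = 200, 0.05                            # lattice sites and spacing
t_hop = hbar**2 / (2.0 * m * dx**2)          # nearest-neighbour hopping scale

# Finite-difference Laplacian: diagonal 2t, off-diagonals -t (Dirichlet ends)
h1 = 2*t_hop*np.eye(n) - t_hop*np.eye(n, k=1) - t_hop*np.eye(n, k=-1)

# Low-lying one-particle eigenvalues approximate E_j = hbar^2 k_j^2 / (2m)
# with box momenta k_j = j*pi / ((n+1)*dx), for small k.
E = np.linalg.eigvalsh(h1)[:3]
k = np.pi * np.arange(1, 4) / ((n + 1) * dx)
print(E)                                     # lattice spectrum
print(hbar**2 * k**2 / (2*m))                # continuum values: close agreement
```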
Unification of fields and particles

The "second quantization" procedure that we have outlined in the previous section takes a set of single-particle quantum states as a starting point. Sometimes, it is impossible to define such single-particle states, and one must proceed directly to quantum field theory. For example, a quantum theory of the electromagnetic field must be a quantum field theory, because it is impossible (for various reasons) to define a wavefunction for a single photon.[10] In such situations, the quantum field theory can be constructed by examining the mechanical properties of the classical field and guessing the corresponding quantum theory. For free (non-interacting) quantum fields, the quantum field theories obtained in this way have the same properties as those obtained using second quantization, such as well-defined creation and annihilation operators obeying commutation or anticommutation relations.

Quantum field theory thus provides a unified framework for describing "field-like" objects (such as the electromagnetic field, whose excitations are photons) and "particle-like" objects (such as electrons, which are treated as excitations of an underlying electron field), so long as one can treat interactions as "perturbations" of free fields. There are still unsolved problems relating to the more general case of interacting fields that may or may not be adequately described by perturbation theory. For more on this topic, see Haag's theorem.

Physical meaning of particle indistinguishability

The second quantization procedure relies crucially on the particles being identical. We would not have been able to construct a quantum field theory from a distinguishable many-particle system, because there would have been no way of separating and indexing the degrees of freedom.

Particle conservation and non-conservation

During second quantization, we started with a Hamiltonian and state space describing a fixed number of particles (N), and ended with a Hamiltonian and state space for an arbitrary number of particles. Of course, in many common situations N is an important and perfectly well-defined quantity, e.g. if we are describing a gas of atoms sealed in a box. From the point of view of quantum field theory, such situations are described by quantum states that are eigenstates of the number operator \hat{N}, which measures the total number of particles present. As with any quantum mechanical observable, \hat{N} is conserved if it commutes with the Hamiltonian. In that case, the quantum state is trapped in the N-particle subspace of the total Fock space, and the situation could equally well be described by ordinary N-particle quantum mechanics. (Strictly speaking, this is only true in the noninteracting case or in the low energy density limit of renormalized quantum field theories.) For example, we can see that the free-boson Hamiltonian described above conserves particle number. Whenever the Hamiltonian operates on a state, each particle destroyed by an annihilation operator a_k is immediately put back by the creation operator a_k^\dagger.

On the other hand, it is possible, and indeed common, to encounter quantum states that are not eigenstates of \hat{N}, which do not have well-defined particle numbers. Such states are difficult or impossible to handle using ordinary quantum mechanics, but they can be easily described in quantum field theory as quantum superpositions of states having different values of N. For example, suppose we have a bosonic field whose particles can be created or destroyed by interactions with a fermionic field. The Hamiltonian of the combined system would be given by the Hamiltonians of the free boson and free fermion fields, plus a "potential energy" term such as

H_\mathrm{I} = \sum_{k,q} V_q \left( a_q + a^\dagger_{-q} \right) c^\dagger_{k+q} c_k,

where a_q^\dagger and a_q denote the bosonic creation and annihilation operators, c_k^\dagger and c_k denote the fermionic creation and annihilation operators, and V_q is a parameter that describes the strength of the interaction. This "interaction term" describes processes in which a fermion in state k either absorbs or emits a boson, thereby being kicked into a different eigenstate k+q. (In fact, this type of Hamiltonian is used to describe interaction between conduction electrons and phonons in metals. The interaction between electrons and photons is treated in a similar way, but is a little more complicated because the role of spin must be taken into account.) One thing to notice here is that even if we start out with a fixed number of bosons, we will typically end up with a superposition of states with different numbers of bosons at later times. The number of fermions, however, is conserved in this case.
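The claim that such an interaction spreads an initially definite boson number into a superposition is easy to demonstrate on a toy model: a single two-level "fermion" coupled to one truncated boson mode through an emit/absorb term. All couplings below are invented for illustration; this is not the electron-phonon Hamiltonian of any real material:

```python
import numpy as np
from scipy.linalg import expm

nb = 6                                            # boson Fock-space truncation
ab = np.diag(np.sqrt(np.arange(1.0, nb)), k=1)    # boson annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])           # two-level lowering operator
kron, I2, Ib = np.kron, np.eye(2), np.eye(nb)

H = (1.0 * kron(I2, ab.T @ ab)                    # free boson energy
     + 1.0 * kron(sm.T @ sm, Ib)                  # level splitting
     + 0.4 * (kron(sm, ab.T) + kron(sm.T, ab)))   # emit / absorb one boson

Nb = kron(I2, ab.T @ ab)                          # boson number operator
print(np.allclose(H @ Nb, Nb @ H))                # False: Nb is not conserved

# Start with the level excited and exactly zero bosons, then evolve in time.
psi0 = np.zeros(2 * nb); psi0[nb] = 1.0           # (excited, 0 bosons)
psi = expm(-1j * H * 3.0) @ psi0
probs = np.abs(psi)**2
print(probs.reshape(2, nb).sum(axis=0))           # weight spread over boson numbers
```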
In condensed matter physics, states with ill-defined particle numbers are particularly important for describing the various superfluids. Many of the defining characteristics of a superfluid arise from the notion that its quantum state is a superposition of states with different particle numbers. In addition, the concept of a coherent state (used to model the laser and the BCS ground state) refers to a state with an ill-defined particle number but a well-defined phase.

Axiomatic approaches

The preceding description of quantum field theory follows the spirit in which most physicists approach the subject. However, it is not mathematically rigorous. Over the past several decades, there have been many attempts to put quantum field theory on a firm mathematical footing by formulating a set of axioms for it. These attempts fall into two broad classes.

The first class of axioms, first proposed during the 1950s, includes the Wightman, Osterwalder–Schrader, and Haag–Kastler systems. They attempted to formalize the physicists' notion of an "operator-valued field" within the context of functional analysis, and enjoyed limited success. It was possible to prove that any quantum field theory satisfying these axioms satisfied certain general theorems, such as the spin-statistics theorem and the CPT theorem. Unfortunately, it proved extraordinarily difficult to show that any realistic field theory, including the Standard Model, satisfied these axioms. Most of the theories that could be treated with these analytic axioms were physically trivial, being restricted to low dimensions and lacking interesting dynamics. The construction of theories satisfying one of these sets of axioms falls in the field of constructive quantum field theory. Important work was done in this area in the 1970s by Segal, Glimm, Jaffe and others.

During the 1980s, a second set of axioms based on geometric ideas was proposed. This line of investigation, which restricts its attention to a particular class of quantum field theories known as topological quantum field theories, is associated most closely with Michael Atiyah and Graeme Segal, and was notably expanded upon by Edward Witten, Richard Borcherds, and Maxim Kontsevich. However, most of the physically relevant quantum field theories, such as the Standard Model, are not topological quantum field theories; the quantum field theory of the fractional quantum Hall effect is a notable exception. The main impact of axiomatic topological quantum field theory has been on mathematics, with important applications in representation theory, algebraic topology, and differential geometry.

Finding the proper axioms for quantum field theory is still an open and difficult problem in mathematics. One of the Millennium Prize Problems—proving the existence of a mass gap in Yang–Mills theory—is linked to this issue.

Associated phenomena

In the previous part of the article, we described the most general properties of quantum field theories. Some of the quantum field theories studied in various fields of theoretical physics possess additional special properties, such as renormalizability, gauge symmetry, and supersymmetry. These are described in the following sections.

Renormalization

Early in the history of quantum field theory, it was found that many seemingly innocuous calculations, such as the perturbative shift in the energy of an electron due to the presence of the electromagnetic field, give infinite results. The reason is that the perturbation theory for the shift in an energy involves a sum over all other energy levels, and there are infinitely many levels at short distances that each give a finite contribution which results in a divergent series.
Many of these problems are related to failures in classical electrodynamics that were identified but unsolved in the 19th century, and they basically stem from the fact that many of the supposedly "intrinsic" properties of an electron are tied to the electromagnetic field that it carries around with it. The energy carried by a single electron—its self energy—is not simply the bare value, but also includes the energy contained in its electromagnetic field, its attendant cloud of photons. The energy in a field of a spherical source diverges in both classical and quantum mechanics, but as discovered by Weisskopf with help from Furry, in quantum mechanics the divergence is much milder, going only as the logarithm of the radius of the sphere.

The solution to the problem, presciently suggested by Stueckelberg, independently by Bethe after the crucial experiment by Lamb, implemented at one loop by Schwinger, and systematically extended to all loops by Feynman and Dyson, with converging work by Tomonaga in isolated postwar Japan, comes from recognizing that all the infinities in the interactions of photons and electrons can be isolated into redefining a finite number of quantities in the equations by replacing them with the observed values: specifically the electron's mass and charge. This is called renormalization. The technique of renormalization recognizes that the problem is essentially purely mathematical, that extremely short distances are at fault.

In order to define a theory on a continuum, first place a cutoff on the fields, by postulating that quanta cannot have energies above some extremely high value. This has the effect of replacing continuous space by a structure where very short wavelengths do not exist, as on a lattice. Lattices break rotational symmetry, and one of the crucial contributions made by Feynman, Pauli and Villars, and modernized by 't Hooft and Veltman, is a symmetry-preserving cutoff for perturbation theory (this process is called regularization). There is no known symmetrical cutoff outside of perturbation theory, so for rigorous or numerical work people often use an actual lattice.

On a lattice, every quantity is finite but depends on the spacing. When taking the limit of zero spacing, we make sure that the physically observable quantities like the observed electron mass stay fixed, which means that the constants in the Lagrangian defining the theory depend on the spacing. Hopefully, by allowing the constants to vary with the lattice spacing, all the results at long distances become insensitive to the lattice, defining a continuum limit.

The renormalization procedure only works for a certain class of quantum field theories, called renormalizable quantum field theories. A theory is perturbatively renormalizable when the constants in the Lagrangian only diverge at worst as logarithms of the lattice spacing for very short spacings. The continuum limit is then well defined in perturbation theory, and even if it is not fully well defined non-perturbatively, the problems only show up at distance scales that are exponentially small in the inverse coupling for weak couplings. The Standard Model of particle physics is perturbatively renormalizable, and so are its component theories (quantum electrodynamics/electroweak theory and quantum chromodynamics). Of the three components, quantum electrodynamics is believed to not have a continuum limit, while the asymptotically free SU(2) and SU(3) weak isospin and strong color interactions are nonperturbatively well defined.
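The bookkeeping described here, bare constants that run with the cutoff so that observables stay fixed, can be mimicked with a deliberately crude toy. In the sketch below the "self-energy" is just a logarithm standing in for a log-divergent mode sum; none of the numbers are physical, and this is only an analogy for the renormalization procedure, not a field-theory calculation:

```python
import numpy as np

def self_energy(cutoff):
    """Toy log-divergent 'self-energy': grows without bound as the cutoff grows."""
    return np.log(cutoff)

m_observed = 1.0                                 # the fixed, measured value
for cutoff in (1e2, 1e6, 1e12):
    m_bare = m_observed - self_energy(cutoff)    # bare parameter runs with cutoff
    prediction = m_bare + self_energy(cutoff)    # what an experiment would see
    print(f"cutoff={cutoff:.0e}  m_bare={m_bare:+.2f}  observed={prediction:.2f}")
# The bare value diverges, but every physical prediction stays at 1.00.
```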
The renormalization group describes how renormalizable theories emerge as the long distance low-energy effective field theory for any given high-energy theory. Because of this, renormalizable theories are insensitive to the precise nature of the underlying high-energy short-distance phenomena. This is a blessing because it allows physicists to formulate low energy theories without knowing the details of high energy phenomena. It is also a curse, because once a renormalizable theory like the standard model is found to work, it gives very few clues to higher energy processes. The only way high energy processes can be seen in the standard model is when they allow otherwise forbidden events, or if they predict quantitative relations between the coupling constants.

Haag's theorem

From a mathematically rigorous perspective, there exists no interaction picture in a Lorentz-covariant quantum field theory. This implies that the perturbative approach of Feynman diagrams in QFT is not strictly justified, despite producing remarkably precise predictions validated by experiment. This is called Haag's theorem, but most particle physicists relying on QFT largely shrug it off.

Gauge freedom

A gauge theory is a theory that admits a symmetry with a local parameter. For example, in every quantum theory the global phase of the wave function is arbitrary and does not represent something physical. Consequently, the theory is invariant under a global change of phases (adding a constant to the phase of all wave functions, everywhere); this is a global symmetry. In quantum electrodynamics, the theory is also invariant under a local change of phase, that is – one may shift the phase of all wave functions so that the shift may be different at every point in space-time. This is a local symmetry. However, in order for a well-defined derivative operator to exist, one must introduce a new field, the gauge field, which also transforms in order for the local change of variables (the phase in our example) not to affect the derivative. In quantum electrodynamics this gauge field is the electromagnetic field. The change of local gauge of variables is termed a gauge transformation.

In general, the gauge transformations of a theory consist of several different transformations, which may not be commutative. These transformations are together described by a mathematical object known as a gauge group. Infinitesimal gauge transformations are the gauge group generators. Therefore the number of gauge bosons is the group dimension (i.e. the number of generators forming a basis).

Multivalued gauge transformations

The gauge transformations which leave the theory invariant involve, by definition, only single-valued gauge functions \Lambda(x_i) which satisfy the Schwarz integrability criterion

\partial_{x_i x_j} \Lambda = \partial_{x_j x_i} \Lambda.

An interesting extension of gauge transformations arises if the gauge functions \Lambda(x_i) are allowed to be multivalued functions which violate the integrability criterion. These are capable of changing the physical field strengths and are therefore not proper symmetry transformations. Nevertheless, the transformed field equations describe correctly the physical laws in the presence of the newly generated field strengths. See the textbook by H. Kleinert cited below for the applications to phenomena in physics.
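The statement above that the gauge field transforms so as to compensate a local phase change can be checked symbolically. A short SymPy sketch for the single-valued U(1) case in one dimension (an added illustration, taking D = d/dx - iA as the covariant derivative):

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)                # matter field
Lam = sp.Function('Lambda')(x)             # local (single-valued) gauge function
A = sp.Function('A')(x)                    # gauge field

D = lambda f, a: sp.diff(f, x) - sp.I*a*f  # covariant derivative D = d/dx - iA

psi_g = sp.exp(sp.I*Lam) * psi             # local phase rotation of psi
A_g = A + sp.diff(Lam, x)                  # compensating shift of the gauge field

# D(psi) transforms exactly like psi itself, so quantities built from it are
# gauge invariant; the ordinary derivative alone would pick up an extra term.
print(sp.simplify(sp.expand(D(psi_g, A_g) - sp.exp(sp.I*Lam)*D(psi, A))))  # 0
```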
Supersymmetry

Since no superpartners have yet been observed, if supersymmetry exists it must be broken (through a so-called soft term, which breaks supersymmetry without ruining its helpful features). The simplest models of this breaking require that the energy of the superpartners not be too high; in these cases, supersymmetry is expected to be observed by experiments at the Large Hadron Collider. The Higgs particle has been detected at the LHC, and no such superparticles have been discovered.

Notes

1. ^ "Beautiful Minds, Vol. 20: Ed Witten".
2. ^ J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck (2004). "Observing the quantum behavior of light in an undergraduate laboratory". American Association of Physics Teachers. DOI: 10.1119/1.1737397.
3. ^ David Tong, Lectures on Quantum Field Theory, chapter 1.
4. ^ Srednicki, Mark. Quantum Field Theory (1st ed.). p. 19.
5. ^ Srednicki, Mark. Quantum Field Theory (1st ed.). pp. 25–6.
6. ^ Zee, Anthony. Quantum Field Theory in a Nutshell (2nd ed.). p. 61.
7. ^ David Tong, Lectures on Quantum Field Theory, Introduction.
8. ^ Zee, Anthony. Quantum Field Theory in a Nutshell (2nd ed.). p. 3.
9. ^ Abraham Pais, Inward Bound: Of Matter and Forces in the Physical World. ISBN 0-19-851997-4. Pais recounts his astonishment at the rapidity with which Feynman could calculate using his method. Feynman's method is now part of the standard methods for physicists.
10. ^ Newton, T.D.;

Further reading

• Gerard 't Hooft (2007). "The Conceptual Basis of Quantum Field Theory", in Butterfield, J., and John Earman, eds., Philosophy of Physics, Part A. Elsevier: 661–730.
• Frank Wilczek (1999). "Quantum field theory", Reviews of Modern Physics 71: S83–S95.
Polarizable continuum model: some basic remarks

The possibility to perform solvent calculations is currently under development in the pcm branch. We will exploit a Continuum Solvation Model (CSM), namely the Polarizable Continuum Model (PCM). PCM is a focused model: the solute (a single molecule or a cluster containing the solute and some relevant solvent molecules) is described quantum mechanically, while the solvent is approximated as a structureless continuum whose interaction with the solute is mediated by its permittivity, \(\varepsilon\). The solute is accommodated inside a molecular cavity, built as a set of interlocking spheres centered on the atoms constituting the molecule under investigation. The current implementation in DIRAC is limited to an SCF description of the solute. For a more in-depth presentation of the PCM, please refer to [Tomasi2005], [Mennucci2007] and references therein. For a presentation of the details of the implementation in DIRAC, please refer to [DiRemigio2015].

Basic Theory

In CSMs we write the Schrödinger equation for the solute as:

\[\left[ H_0 + V_{\sigma\rho}(\rho_{\mathrm{M}})\right] | \psi \rangle = E | \psi \rangle\]

where \(H_0\) is the solute Hamiltonian in vacuo and \(V_{\sigma\rho}(\rho_{\mathrm{M}})\) is the solute-solvent interaction potential, which depends on the solute wavefunction through the first-order density matrix. This introduces a nonlinearity which must be dealt with appropriately. In standard molecular electronic-structure theory we write the energy as the expectation value of the Hamiltonian (\(| \phi \rangle\) being a trial wave function):

\[\langle \phi | H_0 | \phi \rangle\]

An upper-bound estimate to the exact energy of the system can then be obtained by a variational procedure from this functional. When introducing a nonlinearity of the type above, the standard functional does not lead to an upper-bound estimate to the exact energy upon minimization. The appropriate functional is instead given by:

\[\langle \phi | H_0 + \frac{1}{2}V_{\sigma\rho} | \phi \rangle\]

which has the status of a free energy (an extensive justification of this fact is given in [Tomasi1994]). Thus in the PCM framework, the basic energetic quantity is the free energy of solvation which is conveniently partitioned as follows ([Mennucci2007], [Tomasi2005], [Amovilli1998]):

\[\Delta G_\mathrm{sol} = \Delta G_\mathrm{el} + G_\mathrm{cav} + G_\mathrm{dis} + G_\mathrm{rep} + \Delta G_\mathrm{Mm}\]

• \(\Delta G_\mathrm{el}\) accounts for the electrostatic solute-solvent interaction, arising from mutual polarization in the charge distributions;
• \(G_\mathrm{cav}\) is the cavitation energy, needed to form the molecular cavity inside the continuum representing the solvent;
• \(G_\mathrm{dis}\) is the dispersion energy, due to the solute-solvent dispersion interactions;
• \(G_\mathrm{rep}\) is the repulsion energy, which accounts for Pauli repulsion;
• \(\Delta G_\mathrm{Mm}\) is due to molecular motion and accounts for entropic contributions to the free energy.

The electrostatic term is, usually, the largest contribution to the solvation energy. Thus we will exclusively be concerned with its calculation (see also [Amovilli1998] for a discussion of the other terms). The non-electrostatic terms are not implemented in DIRAC. In CSMs the calculation of this term requires the solution of the classical Poisson problem nested within the QM calculation.
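The practical consequence of this nonlinearity is that the effective operator depends on its own eigenvector, so the equation is solved iteratively. A deliberately tiny sketch of such a self-consistent loop follows; the 2x2 matrices and the coupling are hypothetical stand-ins for illustration only, not DIRAC quantities:

```python
import numpy as np

H0 = np.array([[0.0, 0.2],
               [0.2, 1.0]])          # stand-in for the in vacuo Hamiltonian

def V(rho):
    """Toy density-dependent 'solvent' operator (made-up coupling)."""
    return -0.3 * rho

rho = np.diag([1.0, 0.0])            # initial density guess
for _ in range(100):
    _, C = np.linalg.eigh(H0 + V(rho))
    psi = C[:, 0]                    # lowest solution of the current operator
    rho_new = np.outer(psi, psi)
    if np.allclose(rho_new, rho, atol=1e-12):
        break                        # self-consistency reached
    rho = rho_new

# Free-energy-like functional: note the factor 1/2 on the induced term.
G = psi @ (H0 + 0.5 * V(rho)) @ psi
print(G)
```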
We will define the expectation value of the solvent operator \(V_{\sigma\rho}\) as the polarization energy:
\[U_\mathrm{pol} = \langle \psi | V_{\sigma\rho} | \psi \rangle\]

Electrostatic term in the Polarizable Continuum Model

In the PCM, the solute-solvent electrostatic interaction is represented by an apparent surface charge (ASC) \(\sigma\) spread on the cavity surface. Using the integral equation formulation of the Poisson problem, the apparent surface charge is obtained by solving an appropriate integral equation relating \(\sigma\) to the molecular electrostatic potential (MEP) \(\varphi\) evaluated on the cavity surface:
\[\mathcal{T}(\varepsilon_\mathrm{r})\sigma = -\mathcal{R}\varphi\]
The integral operators \(\mathcal{T}(\varepsilon_\mathrm{r})\) and \(\mathcal{R}\) are defined as follows:
\[\mathcal{T}(\varepsilon_\mathrm{r}) = \left(2\pi\frac{\varepsilon_\mathrm{r} + 1}{\varepsilon_\mathrm{r} - 1} - \mathcal{D}\right)\mathcal{S}\]
\[\mathcal{R} = 2\pi - \mathcal{D}\]
in terms of components of the Calderon projector (see [Tomasi2005]). This equation can be solved numerically by discretization of the cavity surface with a triangular mesh (the finite elements being called tesserae). The solution of the electrostatic problem then amounts to solving a linear system of equations:
\[\mathbf{T}\mathbf{q} = -\mathbf{R}\mathbf{v} \rightarrow \mathbf{q} = \mathbf{K}\mathbf{v}\]
where \(\mathbf{v}\) and \(\mathbf{q}\) are vectors of dimension equal to the number of finite elements. They contain the MEP and the ASC, respectively, sampled at the finite-element centroids. The polarization energy can be expressed as the scalar product of the MEP and ASC sampled at the cavity boundary:
\[U_\mathrm{pol} = \mathbf{q}\cdot\mathbf{v}\]
If we split the MEP into its nuclear and electronic parts, \(\mathbf{v} = \mathbf{v}^\mathrm{N} + \mathbf{v}^\mathrm{e}\), then, due to the linearity of the Poisson equation, the same separation can be achieved for the ASC, \(\mathbf{q} = \mathbf{q}^\mathrm{N} + \mathbf{q}^\mathrm{e}\). Exploiting this separation, the polarization energy can be rewritten as:
\[U_\mathrm{pol} = U_\mathrm{NN} + U_\mathrm{Ne} + U_\mathrm{eN} + U_\mathrm{ee} = U_\mathrm{NN} + 2U_\mathrm{eN} + U_\mathrm{ee}\]
where \(U_{xy}\ (x, y = \mathrm{e}, \mathrm{N})\) is the interaction energy between the \(x\) charge distribution and the \(y\)-induced ASC. We also exploited the self-adjointness of \(\mathbf{K}\) to get \(U_\mathrm{Ne} = U_\mathrm{eN}\). Nesting the PCM inside an SCF calculation requires the calculation of the MEP and ASC at cavity points at every SCF iteration and the update of the Fock matrix to account for the effect of the mutual solute-solvent polarization. The "solvated" Fock matrix is written as:
\[f_{pq}= f_{pq}^\mathrm{vac} + \mathbf{q}\cdot\mathbf{v}_{pq}^\mathrm{e}\]
The PCM matrix elements are more explicitly given as:
\[\mathbf{q}\cdot\mathbf{v}_{pq}^\mathrm{e} = \sum_{I}^{N_\mathrm{ts}}q_Iv_{pq,I}^\mathrm{e}\]
\[v_{pq,I}^\mathrm{e} = \left\langle \phi_p \left| \frac{-1}{|\mathbf{r} - \mathbf{s}_I|} \right| \phi_q \right\rangle\]
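As an illustration, here is a minimal NumPy sketch of this electrostatic step. The function and array names are hypothetical and do not reflect the actual DIRAC or PCMSolver API; the PCM matrices T and R and the MEP integrals are assumed to have been assembled already from the cavity discretization.

```python
import numpy as np

def pcm_asc(T, R, v):
    """Solve T q = -R v for the apparent surface charge (ASC) vector q."""
    return np.linalg.solve(T, -(R @ v))

def polarization_energy(q, v):
    """U_pol = q . v, the scalar product of ASC and MEP at the tesserae."""
    return q @ v

def pcm_fock_contribution(q, v_e_pq):
    """Contract the ASC with the electronic MEP integrals v^e_{pq,I},
    stored with shape (N_ts, n_orb, n_orb), to obtain the PCM correction
    q . v^e_{pq} that is added to the vacuum Fock matrix at each SCF step."""
    return np.einsum('i,ipq->pq', q, v_e_pq)
```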
Are Parallel Universes Unscientific Nonsense?

If you're a multiverse skeptic, you should know that there are many potential weaknesses in the case for parallel universes, and I hope you'll find my cataloging of these weaknesses below useful. To identify these weaknesses in the pro-multiverse arguments, we first need to review what the arguments are.

via Are Parallel Universes Unscientific Nonsense? Insider Tips for Criticizing the Multiverse | Guest Blog, Scientific American Blog Network.

Max Tegmark seems to be everywhere these days. This is an interesting piece exploring the arguments for, and problems with, the various multiverse theories. I have to admit that I'm a multiverse skeptic. I can appreciate that if a successful theory predicts something, we should take that prediction seriously. However, when that prediction is extraordinary, we should be cautious until we have empirical support. As Carl Sagan said, extraordinary claims require extraordinary evidence.

But Tegmark omits what I think is the biggest flaw in the multiverse, namely that there isn't just one multiverse prediction. There are several. Each of the theories he mentions predicts a different type of multiverse. It'd be one thing if all of these theories converged on the same multiverse concept. I think I'd find that compelling. But they don't.

I'm not closed-minded about the multiverse. If someone found evidence that the multiverse was the simplest explanation for some observed phenomenon, I'd be much more willing to accept it. Until then, while it's fascinating speculation, I'm going to keep firmly in mind that it is speculation.

38 thoughts on "Are Parallel Universes Unscientific Nonsense?"

1. A very interesting article, but I would pick on a few issues. His Level I parallel universes aren't parallel universes as I understand them. It's just one very large (possibly infinite) universe. His Level III universes aren't dependent on the Schrödinger equation, but on a particular interpretation of the Schrödinger equation. He's being naughty here. His Level IV universes are complete fantasy. I know that he says that "fantasy" is a vague and unscientific criticism, but his Level IV universes are precisely fantastic, so I feel justified in using the word. He is saying, as I understand it, that any universe we can imagine is just as real as the universe we live in. Worse than that, he is associating criticism of this idea with criticisms of the Level I-III multiverse theories, in an attempt to tar critics of his theory with the same brush as critics of inflation, etc. He even implies that critics of his fantasy universes are in the same boat as people who criticise the predictions of general relativity. That's very naughty. In fact, it's a very naughty article all told. Is naughty a scientific word?

1. Hi Steve, It's a short article. Read his book. He's a lot more humble and forthcoming than you might think. He very clearly distinguishes what is mainstream from what is controversial, and in particular emphasises that his Level IV multiverse is extremely so, to the extent that he was advised not to pursue it for fear that it might destroy his career. The Many Worlds Interpretation is the only one which takes the Schrödinger equation at face value. The others generally do not, either positing hidden variables or objective wavefunction collapse in order to justify a unique universe. I do think it is not fair to call the Level IV multiverse a fantasy. There are very good reasons to suspect it is true.
Indeed I became convinced it was true even before I heard of Max Tegmark. I explain why on my blog. I would genuinely be very excited if you could point out any particular flaw in my reasoning.

1. I will take a look at your blog with pleasure, but at this point I don't see any way even in principle of demonstrating that parallel universes exist. Of course they *could* exist – anything *could* exist outside our universe, although that is stretching the meaning of the word "exist."

2. Thanks Steve. Your comment clarified some things for me. I agree on the Level I universe. It doesn't even seem like the term "parallel" applies. I think for Level IV, I would use the term "speculative" rather than "fantasy", since fantasy to me implies something that we know to be impossible. I don't know if "naughty" is a scientific word, but it is a much friendlier one than others you might have used 🙂

2. Hi SAP, I disagree that the multiverse prediction is extraordinary. It's an extremely simple idea that explains a lot. Indeed it would seem to me to be weirder if only one universe were possible. To me, it would be like if there were only one planet in the universe. I have a post on my blog which explains why I think an infinite multitude is actually simpler than a single member of a set. Even apart from the fact that multiverses are predicted from various theories, I think they have immense explanatory power when considered from the point of view of the anthropic principle. I don't see a problem with the idea that there are different kinds of multiverses. All of these multiverses are complementary. In fact, in his book Tegmark presents an interesting argument which unifies the Level I (Distant Space) and Level III (Many Worlds Interpretation) multiverses into one consistent whole. Overall, I think the existence of the Level I (Distant Space), Level III (Many Worlds) and Level IV (MUH) multiverses is less extraordinary and simpler than their absence. I'm much less sure about the Level II (Endless Inflation) multiverse. The existence of the Level I multiverse is essentially beyond doubt, I think. All that is required for this to exist is for space to extend beyond our cosmic horizon, which it certainly does and in fact seems to do for a very great (potentially infinite) distance based on measurements of the flatness of space. The Level II multiverse is possibly correct, being predicted by some models of inflation. This is a scientific hypothesis because it could potentially be confirmed or falsified, not by direct observation but by testing these various inflation models based on what we can glean about the early history of the universe. The Level III multiverse is simply the most parsimonious, straightforward interpretation of quantum mechanics there is. The traditional Copenhagen interpretation postulates the collapse of the wavefunction, without having any test for this or prediction of how it might work. It's becoming increasingly obvious that this is a desperate post-hoc rationalisation which seeks to preserve our intuition of a single universe from what the equations actually predict. If quantum computation ever becomes a practical reality, Tegmark argues that this would be direct evidence of the multiverse, because it can be interpreted as having all the copies of a quantum computer performing the same computation with different variables in parallel, thereby covering a massive amount of ground in the time it takes for a single iteration.
It's hard to see how this could work with the other interpretations of QM. I agree though that this is perhaps not strictly science, because since all of the interpretations agree on the mathematical fundamentals of QM, it's hard to see how one interpretation could ever be proven. The Level IV multiverse is not scientific but neither is it nonsense, as you know I have argued on my blog. I very much recommend that you read Tegmark's book. I'm about halfway through it currently, and it manages to be engaging, personal and informative without ever beating you over the head with his particular world-view. Even apart from the multiverse stuff, it's a great way to brush up on your cosmology and physics in general.

1. Hi DM, I'm familiar with the standard arguments for why the multiverse is simpler than one universe, but it just feels like just-so reasoning to me, and attempts to unify them sound almost like apologetics. It just goes to show that complexity and simplicity are matters of judgment, and some of us are going to have different intuitions about it. Whose intuition is right? Only time (maybe) will tell. I have to admit that I don't really understand how quantum computing is supposed to work, and I've been meaning to do some reading on it. It never made sense to me how, even if a qubit can be in superposition, we can ever have access to more than one of those states. Wouldn't the very act of accessing it cause a universe fork, wave function collapse, or whatever your favorite interpretation is? And if we can only ever have access to one, it seems like any work done by the other is lost, even if the many worlds interpretation is true. But I'm completely open to the possibility that I may be missing some key fact here. I may take a look at Tegmark's book, but my current reading list is pretty long (and getting longer), so it may be a while. It might well move up on my list if it gets a lot of acclaim in the reviews.

1. Quantum computing is based on the idea that we keep the qubits in superposition until the calculation is complete. We don't fork the universe / collapse the wavefunction until the work is done. It's a bit like juggling – you have to keep the balls in the air until the performance is over – if you drop one, you'll have to start again!

1. Thanks, but I'm still left wondering how we access the results of the work done on the side that doesn't "win" on measurement. For that matter, how do we work with the qubit, manipulate it, without causing decoherence / wave function collapse / universe forking?

1. The mechanics of accessing the results are complicated. Quantum computing algorithms are not like classical ones. Quantum calculations are reversible, for one thing. Algorithms like Shor's algorithm can be used to perform specific calculations. Keeping the system in a coherent quantum state until the calculation is done is one of the big challenges, which is why you don't have a quantum computer sitting on your desk. But room-temperature calculations are looking promising, I believe. That's pretty much the limit of my expertise on the subject!

2. Hi SAP, Tegmark's attempt to unify the Level I and Level III multiverse is interesting, but ultimately it doesn't matter. They don't need to be unified. I disagree that complexity and simplicity are matters of judgement. There are mathematical ways to look at it, such as Shannon entropy, where the amount of information in a sequence is related to how much "surprise" we feel on seeing each bit. Let me explain.
I think we intuitively feel that two universes are more complex than one, and three are more complex than two. Our intuition is correct. However it is not correct to extrapolate this to assume that unlimited universes are more complex than one. The maximum amount of entropy (complexity) is actually reached if each possible universe has an equal chance of existing or not existing, with no rule to say which is the case. This is like tossing a fair coin, where there is one bit of information per universe to specify if it exists or not. The only way to describe this situation is to list out all the possible universes and say whether they exist or not. As the probability moves away from 50%, the entropy goes down. If your coin is only 5% likely to show a heads, you will typically be able to express a sequence in far fewer bits than listing the whole thing out, because you can just indicate the rare cases where the heads came up. The case where only one universe is impossible is exactly as complex as the case where only one universe is possible, and just as arbitrary and prima facie unlikely. The case where no universes exist and the case where all universes exist are the simplest cases. And as it turns out, both of these are compatible with the MUH. The MUH says that the concept of existence as applied to universes doesn't really work. They exist from one point of view but not from another. The real point is that the distinction between those that exist and those that don't is simply incoherent. As both of the simplest cases show no such distinction, they are both compatible with the MUH.

1. Thanks for the explanation. I can see an infinite number of universes being no more complex than 2 universes, but 2 through n universes still seems more complicated to me than the one we can observe. Of course, there are aspects to reality that we can't observe yet (or possibly ever), and those aspects might include additional universes. My understanding is that we don't have empirical evidence to rule them out, but we also don't have evidence to make them mandatory. (I know a lot of people feel like there's enough in the mathematics to justify certitude. Personally, I need more than mathematical manipulations to give me certitude.)

1. Hi SAP, I'd be interested to know why you think an infinite number of universes is no more complex than two but more complex than one. I would also like to know which of 2, 1,000,000 or infinite universes is more complex/unlikely. I agree that the MUH in particular is not science. But to me the philosophical and mathematical argument is convincing. From my point of view it can only be false if my reason has failed me in some way. And this is possible, but I see no problem in being very confident that it has not, for the same reasons that we feel confident about our capacity to reason in other contexts. Lack of empirical evidence does not mean we ought to be squarely on the fence, as you seem to be (or perhaps leaning towards MUH-denial). We have no direct evidence for the proposition that God does not exist, but I don't think either of us would judge the probability of God's existence at 50%.

2. Disagreeable Me, I left a comment on your blog about this. I questioned your assumption of mathematical platonism (the idea that mathematical objects have their own "existence" outside our universe). I put forward an argument for why I don't think this can be true. Could you respond to that?
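(A small Python sketch of the coin-toss entropy point made in the thread above; the probability values are arbitrary illustrative choices, not from the original discussion.)

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits per coin toss."""
    if p in (0.0, 1.0):          # no surprise at all: zero entropy
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.05, 0.0, 1.0):
    print(f"p = {p:4.2f} -> H = {binary_entropy(p):.3f} bits")
# 0.50 -> 1.000 bits (the fair-coin maximum); 0.05 -> 0.286 bits;
# 0.00 and 1.00 (no universes / all universes) -> 0.000 bits
```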
3. Hi DM, Well, to add a second universe requires positing a new dimension, a new parameter of existence. Once that dimension or parameter is there, once we have a second universe, then a third, fourth, etc, seems much more plausible. Actually, now that I think of it, once you have a second universe, stopping at a certain number would be an additional complication. But the whole realm of two or more universes is more complicated than just the one, due to that additional dimension, parameter, or set of parameters. The God comparison is interesting. A deistic god cannot be proven or disproven. However, the only thing it really has going for it is that a lot of people would like to see it exist. There's nothing that makes its existence mandatory, so you're right, I don't regard the probability as 50%. It can't be ruled out, but the number of possible concepts that can't be ruled out is vast, while the number of concepts that actually exist is far smaller. The chances that one of our cherished concepts falls within the "exists" category are minuscule (although a small number will occasionally).

4. Steve, do you have any follow-up to the discussion on my blog? Hi SAP, I see where you are coming from now as you posit a new dimension to allow a second universe. You seem to conceptualise the current universe by analogy to a 3D Cartesian space, and the introduction of further universes as the extension of this to 4D. Points in a 3D space are defined by positional parameters (X, Y, Z), but now we need an extra parameter (W) to locate ourselves in this larger space. Our universe would then be the set of all points for which W=0, say. However I don't see it like that at all. Unlike Cartesian points, universes in the MUH are not defined by their relative positions in some kind of space but by their laws of physics. In the Level IV multiverse, our universe is not above, under, before, after or beside any other. Everything required to "locate" it is there in its physics. This universe thus already has its "position" in the set of possible universes and no new dimension or parameter is required. It seems to me that it is the case of a limited number of universes that requires another parameter. In addition to all the laws of physics, we must include a non-mathematical parameter to represent whether the universe exists or not. The only way this might not be true of a single-universe scenario is if there is only one possible universe, one specific way the laws of physics could have been arranged (which could in principle be derived from the armchair). This seems highly improbable to me, to put it mildly.

5. Hi DM, You've read into my comment a level of detail that I actually tried to avoid. It's why I said "dimension or parameter". However you construct it, it seems like you're adding an extra characteristic to reality. An added attribute, dimension, parameter, characteristic, or whatever you want to call it, that by its very addition adds complexity, at least in my mind. On your last point, in my mind, the complexity hierarchy, from low to high, is: 1. single universe 2. infinite universes (requires the extra (generic) dimension I mention above) 3. a finite number of universes (requires the extra dimension of 2. plus an additional constraint to limit the iterations). Hope this makes sense.

6. Hi SAP, Sorry for banging on about this, and sorry for apparently misinterpreting what you meant by parameter or dimension, but then I am lost as to what you do mean.
What is this new parameter that is needed for two universes and not for one, and what kind of value could it have? I'm still left with the opinion that one universe requires an extra parameter that infinite universes do not. We presumably agree that there are effectively infinite possible universes, so if only one of those is actual, then does there not need to be a parameter to indicate which one of those it is? I wonder how you feel about the question of whether it would be more surprising if it turned out that life had only arisen once in the universe or if there were countless instances of it. I personally would find the former scenario to be much more strange (infinitely more so if there is only one universe!). Do you also feel that two instances of life arising need an extra parameter, or is this different from the universe case? If so, why? Or we could make it more abstract. Suppose we found a very strange unique object of some kind floating in interstellar space (but that we had little reason to suspect that it was artificial). Perhaps it's a lump of rock made of antimatter, for instance. Such a discovery would lead me to suspect that there were many other such objects in the universe. By your reasoning, this would seem to be unlikely, as it would mean the introduction of a new parameter or dimension to distinguish between them. This seems absurd to me, so I assume that this is not in fact your position, but that's how it comes across to me.

7. Hi DM, I'm struggling to think of a way to explain the new parameter without repeating myself. Maybe this: Suppose on one hand you have X. Then suppose on the other you have a series of Xs. Isn't the single X a simpler phenomenon than the series of Xs? On life elsewhere, when we conclude it's probable that there's life elsewhere in the universe, it's because we observe life here, we observe that it arises from the laws of nature, we observe that the laws of nature seem to exist everywhere, and we observe the immensity of the universe. There's still a logical step in our conclusion, but it seems far less tenuous. We haven't observed another universe. We're invoking multiverses to explain (excuse) the fact that we don't understand this one. Again, I see the multiverse as a possible explanation, but can't go past that until we see something that makes it mandatory, or at least compelling. Here's a question in the "turtles all the way down" tradition. If there were a multiverse, then why not a multi-multiverse? Or a multi-multi-multiverse? Where do we draw the line?

8. Hi SAP, I still don't see the extra parameter in your series of X's. Is one X simpler than a series of X's? Perhaps. It depends on questions such as whether X has any other properties or on whether the series is infinite. If X is an arbitrary integer, chosen with each integer having equal probability, then it is likely to be very large (as most numbers are), effectively infinite. As such, it would require an effectively infinite number of bits to precisely specify a single X. An infinite series of Xs, however, can be defined with only the axioms needed to define the integers. On the analogy to life, I'm not exactly arguing that we ought to have precisely the same level of confidence that there is life elsewhere as that there are other universes, I'm just trying to understand how you perceive the relative complexity of the different scenarios.
Perhaps you think that multiple origins for life are indeed more complex but that the evidence we have to bolster them still makes them the more reasonable position. Do we observe that life arises from the laws of nature or do we infer that it does? Do we observe that the laws of nature are the same everywhere or do we infer that they are? To paraphrase your comments about life, I would say that we observe that this universe exists, we infer that there is some explanation for how it is that the universe exists, and we infer that this same explanation could account for other universes. I would say it is also virtually certain that the set of logically possible universes is much vaster than what we directly observe. But while these inferences may be less straightforward than those you make about life, I think they are bolstered by a pretty robust argument that the MUH must be true given naturalism, computationalism and Platonism. How would you feel about a phenomenon for which we have no understanding? Such as a lump of antimatter? Would you feel an inference that there must be many such objects to be tenuous? Can you explain how your idea of the extra parameter or dimension fits in here, or indeed give any example where it would apply apart from universes? Well, in a way that's what the MUH proposes, since the MUH encompasses lesser multiverses on multiple levels (e.g. Tegmark's Levels I-III). But it's not turtles all the way down, because the MUH is so fundamentally simple. It is at core the proposition that all possible universes exist. By definition, any possible universe therefore exists inside the MUH multiverse. There can be no greater multiverse.

3. Excellent book on this: "Why Is There Anything?" by fellow blogger, quantum physicist, and Many Worlds Theory advocate, Matthew Rave. It's an easy read, a dialogue between two characters, and details the argument quite convincingly.

4. Disagreeable Me, when you write this: I cannot help but notice that this idea of bits used to represent X sounds like X can only exist in a physical universe (that contains bits), not in an abstract sense. In other words, numbers and other mathematical objects are instantiated.

1. Hi Steve, Bits are a mathematical concept, not a physical concept, although they do have significance for certain physical concepts such as entropy and the holographic principle. Bits are relevant here because most definitions of complexity hold structures with more information (measured in bits) to be more complex.

5. I'm no mathematician, but I'm comfortable with the idea of the infinite. If X = anything you like, then X needs no information to constrain it. It can be infinite. In fact it is. But if X = 1, then I need to write an equation to constrain it that way. So an infinite number of objects is simpler than 1 or some other integer. But no objects at all is the simplest possibility. Another example. An empty page contains no information. This is the simplest configuration of the page. It contains no stories. If I write on the page, "Once upon a time a King lived in a forest," then I have constrained the possible stories. But this is not yet an interesting story. The more information I add, the more constraints there are, until eventually I arrive at a single story. And this story will be rich, detailed and interesting. Our universe seems to me like that. So a single universe seems to be quite an unlikely occurrence. Most likely of all is the blank page.
So I can understand why (all possible universes) could be simpler than (our universe). But I don't understand why we have any universes at all, if that is the logic. As I say, I'm no mathematician. This is probably complete garbage.

1. Hi Steve, I would say that no objects is exactly as simple as all possible objects existing. No objects: For all possible objects x, x does not exist. All objects: For all possible objects x, x does exist. The reason that all mathematical objects exist, rather than none, is that it is logically impossible for no mathematical objects to exist. On a Platonist view of existence, a mathematical object exists if it can be defined consistently. It is possible to define mathematical objects, therefore they exist. The laws of logic are not contingent, they are necessary, so it could not have been otherwise. The most apt comparison to nothing existing is not the blank page, because the set of all possible states of the page includes the blank page (and there is an analogous mathematical structure, the empty set). The comparison to absolutely nothing existing would be the view that it is impossible for the page to be in a non-blank state (i.e. to compose a block of text containing more than zero characters), which is patently absurd. By the way, I'd love to know if I have given you any reason to reconsider Platonism at all, or if you still feel comfortable dismissing Tegmark as a naughty fantasist.

1. Yes, you have answered my questions well, and although I cannot say that I really believe in MUH or Platonism, I cannot say that they are clearly not true. I think that by definition, Tegmark is a naughty fantasist, even if what he says turns out to be true! He conflates ideas that should not be conflated. Your way of reasoning is not naughty at all, but equally fantastic (fantastic can be a good or bad word, depending on your outlook).

6. DM, On the antimatter question, my initial thoughts would be that there may be more in the universe, but given the properties of antimatter and its reaction to matter, I would judge that it would have to be rare. But you're asking me about my thoughts on it existing within the universe, which is vast and observable, and equating that with the existence of other universes. If we could observe a vast multiverse and you then asked me if there were antimatter universes out there, I would give it a much higher probability than I can now, not having observed any other universes. I also have to admit to being skeptical about Platonism, at least in any ontological sense.

1. Again though, how would you use your thoughts about the complexity parameter or dimension to treat the antimatter question? Are you beginning to think that maybe that's not the right way to think about such questions? Yes, we can observe the universe and see that it is vast. For me, this is just like appreciating that there are many other ways the universe could be, even if we are only modestly tweaking the fundamental constants (or do you entertain the notion that there is something logically necessary about the way the constants are?). If there are many ways the universe could have been, it seems to me to be very strange that only one of those should be realised, like having only one sizeable lump of antimatter in a massive universe. Rejecting the MUH on the grounds of not accepting Platonism makes a lot more sense to me than your arguments from complexity, although I still think it is wrong to reject Platonism!

1. Hmmm.
I thought I had addressed that question. Asking about the complexity of one or multiple lumps of antimatter in the universe, versus that of one or more universes, is not asking equivalent questions. That said, would it be more or less complex if there were only one lump of antimatter in existence? I would think only one instance of an antimatter lump in the universe (assuming we had some way to know that) would be more complicated (probably related to our way of knowing about it). On fundamental constants, I don't know the reason why they are what they are. Multiverses are one possibility. Another is logical constraints of some kind. (Of course, then we'd be asking why those logical constraints? Turtles again.) Until we have reasons to isolate one particular explanation, I don't think we should stop looking.

1. Hi DM, Another name for what I'm talking about is Occam's razor. Any situation where we multiply the assumptions beyond necessity would apply. So a light in the night sky is either an airplane or an alien from outer space. I'm sure you'd agree that we should assume it's an airplane or some other mundane thing before we assume aliens, spirits, etc. Now, I do know many people insist that multiple universes are simpler than one. But I don't think we know enough about how this universe works yet to make firm conclusions. Maybe it is simpler, or maybe we just don't yet understand why the laws and constants are the way they are.

2. Sure, Occam's razor is why I like the MUH. We have this concept of existence, and we have this concept of mathematical consistency. The MUH says that these are the same concept, or alternatively that we can do away with the concept of physical existence because it is meaningless.

7. Disagreeable Me, I think that's one of the jumps I can't make yet. Dismissing the concept of existence strikes me as violating Einstein's advice that "things should be as simple as possible, but no simpler." Our experience seems to show that existence or non-existence matters. This seems like a brute fact that we need more justification to dismiss.

8. There are many great books that should be read, and many great theories; although they may sometimes seem ridiculous, you cannot reject them before at least trying to understand them, so you need something to read on this topic. In the case of this theory, however, one can completely abandon attempts to understand it. It is a theory completely detached from reality. The problem is not that this theory doesn't make sense; simply put, there can be no such reality. Also very funny is the evidence adduced to prove the existence of parallel universes, i.e. the multiverse. Your thoughts?
"The Ultimate Math Sheer Scoopneck Tee" Wear this sheer scoopneck tee for the Math is Fun icons Sheldon Cooper, Stephen Hawking, Jason Padgett, Archimedes, and Issac Newton. Math is BACK! #DoMath  We are not talking about the hexadecimal method, The Callan-Symanzik equation, Quantum Mechanics, or the Schrödinger equation. You do not have to be a genius to enjoy math, with things like Algebra, Geometry, Calculus, and Trigonometry. We have a healthy respect for math, as long as we do not have to do anything complicated.
Center for Computational Materials (CCM)

Research projects within the Center for Computational Materials (CCM) focus on quantum mechanical theory applied to materials. The application of quantum mechanics to materials is a relatively new and exciting field that combines a number of different disciplines. It is often asserted that the most vibrant and dynamic fields of science and engineering are characterized by a combination of traditional disciplines. Our research exemplifies this characterization by combining "physical science and computer science." Materials science itself resides at the intersection of physics, chemistry and engineering, whereas computer science involves applied math and the transcription of physical problems to computational platforms. Work at the Center is based on developing new scientific ideas and the algorithms for implementing these ideas. The grand challenge of our research program is to answer the following question: Can one predict and understand the structure and properties of materials solely from knowledge of the atomic constituents? An affirmative answer to this question would permit one to design and understand materials without resort to experiment. This is particularly important for newly discovered materials such as carbon nanotubes, semiconductor or metal nanowires, nanoparticles and nanocrystals, organic semiconductors, new dielectrics, and spintronic and photonic materials, just to name a few. These new materials have inspired research programs aimed at fabricating novel electronic and magnetic devices. Materials for these applications will involve phenomena and properties in new size regimes, e.g., the nanoscale, or combinations of matter which previously have not been implemented for electronic or magnetic applications. The physical basis for predicting properties of materials and answering our "grand challenge" question exists within the quantum mechanical theory of matter. An implementation of quantum theory would in principle allow one to predict properties from knowledge of the atomic constituents. The chief impediment to a successful implementation of quantum theory is the difficulty involved in a direct solution of the Schrödinger equation: the equation is computationally intensive and involves numerous electronic and nuclear degrees of freedom for all but the simplest systems, e.g., a small molecule or a few-electron atom. The Center therefore develops and studies algorithms targeted at solving the quantum problem, using the extensive computational facilities available to us.
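To illustrate the kind of problem involved, here is a toy sketch (not one of the Center's actual codes): even a single particle in one dimension already requires diagonalizing a large matrix, and the cost grows steeply with the number of particles and dimensions. The harmonic potential is chosen only because its exact levels, n + 1/2 in atomic units, make the result easy to check.

```python
import numpy as np

n, box = 1000, 20.0                    # grid points, box size (atomic units)
x = np.linspace(-box / 2, box / 2, n)
h = x[1] - x[0]

V = 0.5 * x**2                         # harmonic-oscillator potential
# second-order finite differences for the kinetic term -(1/2) d^2/dx^2
H = (np.diag(1.0 / h**2 + V)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))

print(np.linalg.eigvalsh(H)[:4])       # ~ [0.5, 1.5, 2.5, 3.5]
```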
Medicinal Chemistry Course Descriptions

General Pharmaceutical & Inorganic Chemistry (2 Units)                                   PMC 231
Atomic and Molecular structure: A short review of the electronic structure of atoms and molecules, including an introduction to quantum theory and the application of the Schrödinger equation to simple systems (e.g. the hydrogen atom) to show the origin of the n, l, m, s nomenclature. The relationship between the electronic structure of elements and the formation of covalent, ionic and coordinate (dative) bonds leading to complexation and chelation. Their nature and pharmaceutical importance. Chemistry of Organometallic compounds: organolithium and organomagnesium (Grignard) reagents; their reactions and applications in organic synthesis. Applications of coordination compounds, metal complexes and chelating agents will be discussed. Pharmaceutical Inorganic Chemistry: A comparative study of the physico-chemical properties, preparation and uses of the elements of the periodic table and their compounds of pharmaceutical importance, including the transition elements. The chemical basis for the pharmaceutical uses will be emphasized.

General Physical, Organic and Radio Chemistry (2 Units)                                    PMC 232
Physical Chemistry: Review of principles of thermodynamics and chemical kinetics relevant to pharmacy. Effect of these on the feasibility of drug synthesis, mixing, solubility, and biological redox systems. Review of principles of chemical and ionic equilibria and kinetics relevant to pharmacy. Physical properties of drug molecules viz: dipole moment, optical activity, surface tension, viscosity, adsorption, melting point. A brief review of fundamental concepts in organic chemistry such as bonding and reactivity of organic compounds, hybridization, resonance theory, and inductive, mesomeric, hyperconjugative and electromeric effects. General review of organic reactions leading to interconversion and modification of functional groups through nucleophilic and electrophilic substitution, elimination, addition and rearrangement reactions. Utility of these reactions for isolation, characterization, elucidation of structure and synthesis of medicinal products. Radiochemistry: Introduction to radiochemistry. Types, sources and measurements of radioactive particles. Biological effects of radiations. Photochemistry: general principles, characteristics of photochemical reactions and applications both in the synthesis and spoilage of drugs. Radiopharmacy: Pharmaceutical applications of radioisotopes and radiopharmaceuticals. Applications of radiopharmacy: examples of radiopharmaceuticals; handling, dosing and disposal of radiopharmaceuticals; pharmaceutical applications of radioisotopes.

Practical Pharmaceutical Physical Chemistry (1 Unit)                                           PMC 234
Experiments to cover the physical properties of drug molecules and chemical kinetics, which include: Adsorption from solutions; The use of a polarimeter; The use of a tensiometer; The use of a viscometer; Chemical kinetics; Determination of melting point; Gravimetry.

Pharmaceutical Organic Chemistry (3 Units)                                                         PMC 331
Types of organic reaction mechanisms taken in relation to types of organic functional groups, effects on their stability, use in pharmacy, and other physicochemical properties, solubility, absorption, distribution and excretion when found in drug molecules.
Alcohols and phenols, carboxylic acids and their derivatives (amides, esters, acid anhydrides, acyl halides) and sulphonic acids; also to be treated are amines and imines, nitriles, nitro and nitroso groups and azo-compounds. Chemistry of Heterocyclic compounds: General introduction, structure, nomenclature and synthesis of five-membered ring heterocycles viz: pyrrole, furan, thiophene, imidazole, etc. Examples of drugs in current use containing such heterocycles should be cited. Stereochemistry: Review of the total concept of stereoisomerism as distinct from isomerisms of other types: optical and geometrical isomerism, chiral and achiral molecules, stereoisomerism and molecular conformation in examples. Determination of configuration by spectroscopic methods. Resolution of racemic mixtures and its importance in pharmacy, using named medicinal examples. Optical rotatory dispersion and its uses. Importance of stereochemistry in terpenes. Organic Synthesis of medicinal compounds involving several stages, e.g. preparation of benzocaine (ethyl p-aminobenzoate); preparation of aspirin; preparation of sulphanilamide, etc. Synthesis of polypeptides and immunologicals. General introduction, structure, nomenclature and synthesis of six-membered ring heterocycles viz: pyridine, piperidine, quinoline and isoquinoline. Examples of drugs in current use containing such heterocycles should be cited.

Pharmaceutical Specification and Standardization (2 Units)                                PMC 332
Official standards for pharmaceutical chemicals and formulated products, which are designed primarily to set limits of tolerance for the product at the time it reaches the patient. Such quality criteria, which are specified in official monographs for pharmaceutical chemicals, include: A description of the drug or product; Solubility; Tests for identity; Physical constants; Quantitative assay of the pure chemical entity in the case of pharmaceutical chemicals, or of the principal active constituents in the case of formulated products. The assay methods should include: acid-base titrations, non-aqueous titrations, oxidation-reduction titrations, complexometric titrations, gravimetry and limit tests. The sources of impurities in pharmaceutical products; Limit tests to exclude excessive contamination; and Storage conditions.

Practical Pharmaceutical Qualitative Analysis (1 Unit)                                          PMC 334
Experiments to cover organic and inorganic pharmaceutical qualitative analysis, which include: Qualitative organic analysis; Chemical tests; Physical and chemical properties of individual members of functional group classes; Cation analysis; Identification of acid radicals.

Instrumental Methods in Pharmaceutical Analysis (3 Units)                                PMC 431
Instrumental methods for quantitative analysis of pharmaceuticals: UV-visible spectrophotometry; Atomic absorption spectroscopy; Flame photometry. Instrumental methods for structure elucidation of drugs and natural products: Infra-red spectroscopy; 1D and 2D NMR spectrometry; Mass spectrometry; Gas-liquid chromatography; HPLC; Hyphenated techniques like GC-MS, LC-MS and LC-NMR.
Medicinal Chemistry I (3 Units)                                                                               PMC 432
A study of the following classes of drugs in respect of their nomenclature, physical and chemical properties, structure-activity relationships, synthesis (where necessary), assay, metabolism and uses: General and local anaesthetics; Antidepressants; Antipsychotics; Anticonvulsants; Sedative-hypnotics; Analgesics, antipyretics and anti-inflammatory agents; Antihypertensives; diuretics; steroids including steroidal hormones.

Practical Pharmaceutical Analysis (1 Unit)                                                 PMC 433
Experiments to cover volumetric and instrumental analysis of pharmaceuticals, which include: Determination of a mixture containing borax and boric acid; Determination of the percentage of sodium salicylate and acetylsalicylic acid (aspirin); Iodometric titration; Determination of ferrous and ferric ions in a mixture; Complexometric titration using EDTA; Thin layer chromatography; Ultraviolet and visible spectrophotometry; NMR and MS spectra interpretation; Assay of calcium lactate tablets; Assay of sodium chloride in dextrose/saline.

Practical Pharmaceutical Synthesis (1 Unit)                                                            PMC 435
Experiments to cover the synthesis of compounds of pharmaceutical importance, which include: Experiments on crystallization and melting points; Synthesis of aspirin; Synthesis of acetanilide; Synthesis of phenacetin; Synthesis of phenyl benzoate; Synthesis of m-dinitrobenzene; Synthesis of benzoin; Preparation of glucose from cane sugar; Preparation of caffeine from tea bags; Preparation of cysteine from hair; Preparation of oleoresins.

Pharmaceutical Analysis and Good Laboratory Principles (2 Units)                   PMC 531
Principles of good laboratory practice. Drug quality assurance systems: monographs and specifications for drugs and drug products. Analysis of drugs in biological samples. Applications of chemical and physicochemical analytical methods in purity determination and identification of pharmaceuticals, radiopharmaceuticals and medicinal products; Basic test methodology for essential drugs. Equivalence and bioequivalence of drug products; biopharmaceutical methods in purity determination. Other instrumental methods, e.g. Fluorimetry; Polarimetry; Polarography; Potentiometry.

Medicinal Chemistry, Drug Design and Development (3 Units)                           PMC 532
Drug design: Physico-chemical approaches to drug design: historical, Free-Wilson and Hansch approaches. The concept of isosterism. Bioisosterism as a tool in drug design. SAR in drug design. Antimetabolite and prodrug approaches to the design of new drugs. Drug development: Stages in drug development. Paradigms in drug discovery and development. Computational drug design: Chemoinformatics and bioinformatics: applications in drug design and development, molecular modeling, virtual screening, docking, molecular dynamics, Monte Carlo and quantum methods. A study of the following classes of drugs in respect of their nomenclature, physical and chemical properties, structure-activity relationships, synthesis (where necessary), assay, metabolism and uses: chemotherapeutic agents such as antibiotics including sulphonamides, penicillins, cephalosporins, anti-malarials, anthelmintics, trypanocides, schistosomicides, amoebicides. Photochemistry and applications of radiopharmacy.
(Image caption: The Sun is the source of energy for most of life on Earth. It derives its energy mainly from nuclear fusion in its core, converting mass to energy as protons are combined to form helium. This energy is transported to the Sun's surface and then released into space mainly in the form of radiant (light) energy.)

Energy
• Common symbols: E
• SI unit: joule (J)
• Other units: kW⋅h, BTU, calorie, eV, erg, foot-pound
• In SI base units: J = kg⋅m²⋅s⁻²
• Extensive: yes
• Conserved: yes
• Dimension: M L² T⁻²

Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.

Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the Sun and the geothermal energy contained within the Earth.

The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form.
For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, which neglects the kinetic energy due to temperature; another example is nuclear energy, which combines the potentials from the nuclear force and the weak force, among others.

Some forms of energy (that an object or system can have as a measurable property):
• Mechanical: the sum of macroscopic translational and rotational kinetic and potential energies
• Electric: potential energy due to or stored in electric fields
• Magnetic: potential energy due to or stored in magnetic fields
• Gravitational: potential energy due to or stored in gravitational fields
• Chemical: potential energy due to chemical bonds
• Ionization: potential energy that binds an electron to its atom or molecule
• Nuclear: potential energy that binds nucleons to form the atomic nucleus (and nuclear reactions)
• Chromodynamic: potential energy that binds quarks to form hadrons
• Elastic: potential energy due to the deformation of a material (or its container) exhibiting a restorative force
• Mechanical wave: kinetic and potential energy in an elastic material due to a propagated deformational wave
• Sound wave: kinetic and potential energy in a fluid due to a propagated sound wave (a particular form of mechanical wave)
• Radiant: potential energy stored in the fields propagated by electromagnetic radiation, including light
• Rest: potential energy due to an object's rest mass
• Thermal: kinetic energy of the microscopic motion of particles, a disordered equivalent of mechanical energy

(Image caption: Thomas Young, the first person to use the term "energy" in the modern sense.)

The word energy derives from the Ancient Greek ἐνέργεια (romanized: energeia, lit. 'activity, operation'),[1] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.

In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[2] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.

Units of measure

In 1843, Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer.
It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.

Scientific use

Classical mechanics

Work, a function of energy, is force times distance. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).

In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the case of endothermic reactions the situation is the reverse. Chemical reactions are almost invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.

(Image caption: Basic overview of energy and human life.)

In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy used in respiration is mostly stored in molecular oxygen[5] and can be unlocked by reactions with molecules of substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts.
For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum.[6] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[7]

Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, and proteins and high-energy compounds like oxygen[5] and ATP. Carbohydrates, lipids, and proteins can release the energy of oxygen, which is utilized by living organisms as an electron acceptor. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when organic molecules are ingested, and catabolism is triggered by enzyme action. Any living organism relies on an external source of energy – radiant energy from the Sun in the case of green plants, chemical energy in some form in the case of animals – to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C₆H₁₂O₆) and stearin (C₅₇H₁₁₀O₆) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria and some of the energy is used to convert ADP into ATP:

ADP + HPO₄²⁻ → ATP + H₂O

The rest of the chemical energy in O₂[8] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]
• gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
• daily food intake of a normal adult: 6–8 MJ

Earth sciences

In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[10] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.

Quantum mechanics

In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation E = hν (where h is Planck's constant and ν the frequency).
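For instance, a quick numerical sketch of Planck's relation above; the frequency, roughly that of green light, is just an illustrative choice:

```python
h = 6.62607015e-34            # Planck constant, J s (exact, 2019 SI)
nu = 5.4e14                   # frequency, Hz (roughly green light)
E = h * nu                    # Planck's relation E = h * nu
print(f"E = {E:.3e} J = {E / 1.602176634e-19:.2f} eV")   # ~3.58e-19 J, ~2.23 eV
```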
In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:

E_0 = mc^2,

where m is the mass of the body, c is the speed of light in vacuum, and E_0 is the rest energy. For example, consider electron-positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[11] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Some forms of transfer of energy ("energy in transit") from one object or system to another:

Type of transfer process | Description
Heat | that amount of thermal energy in transit spontaneously towards a lower-temperature object
Work | that amount of energy in transit due to a displacement in the direction of an applied force
Transfer of material | that amount of energy carried by matter that is moving from one system to another

[Figure: A turbo generator transforms the energy of pressurised steam into electrical energy.]

There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
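Carnot's limit, mentioned above, is a one-line formula and easy to evaluate. A minimal sketch (the reservoir temperatures are made-up example values):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of heat convertible to work by a cyclic engine
    operating between two reservoirs (temperatures in kelvin)."""
    return 1.0 - T_cold / T_hot

# Example: a heat engine between reservoirs at 800 K and 300 K.
print(carnot_efficiency(800.0, 300.0))  # 0.625: at most 62.5% of the heat
```

No cyclic engine operating between these two reservoirs can do better, however the transformation is implemented.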
Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:

E_p,initial + E_k,initial = E_p,final + E_k,final

The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = mv^2/2 (half mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_total.

Conservation of energy and mass in transformation

Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c^2 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics.

Reversible and non-reversible transformations

Conservation of energy

The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out by work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[12] Richard Feynman said during a 1961 lecture:[14]

Most kinds of energy (with gravitational energy being a notable exception)[15] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[13][14] In quantum mechanics, the uncertainty in the energy over a time interval obeys ΔE·Δt ≥ ħ/2, which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).

Energy transfer

Closed systems

Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4]
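The potential/kinetic bookkeeping demonstrated above can be checked numerically in a few lines (a sketch with made-up values for a mass dropped from rest):

```python
import math

m, g, h = 1.0, 9.81, 10.0    # example values: mass (kg), gravity (m/s^2), height (m)
E_p = m * g * h              # potential energy at the top
v = math.sqrt(2 * g * h)     # speed after falling through h (from v^2 = 2gh)
E_k = 0.5 * m * v**2         # kinetic energy at the bottom
print(E_p, E_k)              # both 98.1 J: the initial E_p equals the final E_k
```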
Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6]

ΔE = W + Q,

where ΔE is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system. As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high.

Open systems

Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E_matter, one may write

ΔE = W + Q + E_matter.

Internal energy

First law of thermodynamics

The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[18] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as

dU = δQ + δW,

where δQ is the heat supplied to the system and δW is the work applied to the system.

Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over the whole cycle, or over many cycles, net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom. This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. The second law of thermodynamics is valid only for systems which are near or in an equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debated. One of the guiding principles for these systems is the principle of maximum entropy production.[19][20] It states that nonequilibrium systems behave in such a way as to maximize their entropy production.[21]

Notes

1. ^ The second law of thermodynamics imposes limitations on the capacity of a system to transfer energy by performing work, since some of the system's energy might necessarily be consumed in the form of heat instead. See e.g. Lehrman, Robert L. (1973). "Energy Is Not The Ability To Do Work". The Physics Teacher. 11 (1): 15–18. Bibcode:1973PhTea..11...15L. doi:10.1119/1.2349846. ISSN 0031-921X.
6. ^ There are several sign conventions for this equation. Here, the signs follow the IUPAC convention.
References

1. ^ Harper, Douglas. "Energy". Online Etymology Dictionary. Archived from the original on October 11, 2007. Retrieved May 1, 2007.
3. ^ Lofts, G; O'Keeffe, D; et al. (2004). "11 – Mechanical Interactions". Jacaranda Physics 1 (2nd ed.). Milton, Queensland, Australia: John Wiley & Sons Australia Ltd. p. 286. ISBN 978-0-7016-3777-4.
4. ^ The Hamiltonian. MIT OpenCourseWare website 18.013A, Chapter 16.3. Accessed February 2007.
5. ^ a b Schmidt-Rohr, K. (2020). "Oxygen Is the High-Energy Molecule Powering Complex Multicellular Life: Fundamental Corrections to Traditional Bioenergetics". ACS Omega 5: 2221–2233.
6. ^ "Retrieved on May-29-09". Archived from the original on 2010-06-04. Retrieved 2010-12-12.
7. ^ Bicycle calculator – speed, weight, wattage etc. "Bike Calculator". Archived from the original on 2009-05-13. Retrieved 2009-05-29.
9. ^ Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model". Archived 2006-10-02 at the Wayback Machine. In Shiyomi, M. et al. (Eds.) Global Environmental Change in the Ocean and on Land. pp. 343–58.
10. ^ "Earth's Energy Budget". Archived from the original on 2008-08-27. Retrieved 2010-12-12.
11. ^ a b Misner, C.W.; Thorne, K.S.; Wheeler, J.A. (1973). Gravitation. San Francisco: W.H. Freeman. ISBN 978-0-7167-0344-0.
13. ^ a b The Laws of Thermodynamics. Archived 2006-12-15 at the Wayback Machine. Including careful definitions of energy, free energy, et cetera.
14. ^ a b Feynman, Richard (1964). The Feynman Lectures on Physics; Volume 1. U.S.A.: Addison Wesley. ISBN 978-0-201-02115-8.
15. ^ "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". 1918-07-16. Archived from the original on 2011-05-14. Retrieved 2010-12-12.
16. ^ "Time Invariance". Archived from the original on 2011-07-17. Retrieved 2010-12-12.
18. ^ Kittel, C.; Kroemer, H. (1980). Thermal Physics. New York: W.H. Freeman. ISBN 978-0-7167-1088-2.
19. ^ Onsager, L. (1931). "Reciprocal relations in irreversible processes". Phys. Rev. 37 (4): 405–26. Bibcode:1931PhRv...37..405O. doi:10.1103/PhysRev.37.405.
20. ^ Martyushev, L.M.; Seleznev, V.D. (2006). "Maximum entropy production principle in physics, chemistry and biology". Phys. Rep. 426 (1): 1–45. Bibcode:2006PhR...426....1M. doi:10.1016/j.physrep.2005.12.001.
21. ^ Belkin, A.; et al. (2015). "Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production". Sci. Rep. 5: 8323. Bibcode:2015NatSR...5E8323B. doi:10.1038/srep08323. PMC 4321171. PMID 25662746.
Monday, March 11, 2019

Why Cosmologists hate Copenhagen

James B. Hartle explains:

Textbook (Copenhagen) formulations of quantum mechanics are inadequate for cosmology for at least four reasons:
1) They predict the outcomes of measurements made by observers. But in the very early universe no measurements were being made and no observers were around to make them.
2) Observers were outside of the system being measured. But we are interested in a theory of the whole universe where everything, including observers, is inside.
3) Copenhagen quantum mechanics could not retrodict the past. But retrodicting the past to understand how the universe began is the main task of cosmology.
4) Copenhagen quantum mechanics required a fixed classical spacetime geometry, not least to give meaning to the time in the Schrödinger equation. But in the very early universe spacetime is fluctuating quantum mechanically (quantum gravity) and without definite value.

There is some merit to this reasoning, but jumping to Everett many-worlds is still bizarre, and does not help. The decoherence and consistent histories interpretations of quantum mechanics are really just minor variations of Copenhagen. While Copenhagen says that observers notice quantum states settling into eigenstates, these newer interpretations say it can happen before the observer notices. Many-worlds just says that anything can happen, and it is completely useless for cosmology.

Sean M. Carroll has announced that he is writing a new book on many-worlds theory. He will presumably take the position that it is a logical necessity for cosmology. Or that it is simpler for cosmology. However, I very much doubt that any benefit for cosmology can be found.

1 comment:

1. In quantum mechanics in practice, we prepare many states and measure them in many ways. In quantum mechanics as a global metaphysics, however, we think more of there being a single state measured in many ways. In that case, in solving the equations A_i = Tr[\hat M_i\hat\rho] (with the A_i being all summary statistics of all the experimental raw data we have, the \hat M_i being an operator that represents the measurement that results in that summary statistic, and \hat\rho being the density matrix that represents the state), there is at least one basis in which \hat\rho is a diagonal matrix. If there is only one state then all measurement operators effectively commute, because off-diagonal entries in such a basis make no contribution to the trace, so the one-state metaphysics of quantum mechanics is equivalent to the one-state metaphysics of classical (statistical) physics. I make this argument (with LaTeX, so easier to read!) as a small part of my arXiv:1901.00526. I'd like to know if this argument appears elsewhere.
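A quick numerical illustration of the trace argument in the comment above (a minimal numpy sketch; the matrices are arbitrary examples): when ρ is diagonal in some basis, the off-diagonal entries of a measurement operator M contribute nothing to Tr[Mρ].

```python
import numpy as np

rng = np.random.default_rng(0)
rho = np.diag([0.7, 0.2, 0.1])     # a density matrix, diagonal in this basis

M = rng.normal(size=(3, 3))
M = (M + M.T) / 2                  # a Hermitian (here real symmetric) observable

full = np.trace(M @ rho)                          # Tr[M rho] with the full operator
diag_only = np.trace(np.diag(np.diag(M)) @ rho)   # off-diagonal entries discarded
print(np.isclose(full, diag_only))                # True
```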
Hidden Symmetries of the Hydrogen Atom

Here's the math colloquium talk I gave at Georgia Tech this week: Hidden symmetries of the hydrogen atom.

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether's theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way. In the process of creating the slides and giving the talk, though, I realized there's a lot I don't understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan's nice intuitive argument for how you can get some 'Runge–Lenz symmetries' in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently. But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it's travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same. Since we haven't changed the planet's distance from the sun, its potential energy is unaltered, and since we haven't changed its velocity, its kinetic energy is the same. What's more, since the speed of a planet of a given mass when it's moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit's axis must be parallel to the axis of the original orbit.

Rotations together with these 'Runge–Lenz transformations' generate an SO(3) action on the space of elliptical orbits of any given energy. But what's the most geometrically vivid description of this SO(3) action? Someone at my talk noted that you could grab the planet at any point of its path, move it anywhere at the same distance from the Sun while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form? I have a bunch more questions, but this one is the simplest!

16 Responses to Hidden Symmetries of the Hydrogen Atom

1. John Baez says:

Okay, here's a guess about my puzzle, which may be implicit in Göransson's work. I'll talk about the 2d Kepler problem, but everything should generalize to the 3d problem if it works at all.
Take an ellipse in the plane with its focus at the origin. Draw a point on it. This describes a possible orbit of a planet in the plane, with the point indicating where the planet is at time zero. We can think of this ellipse as the 2d projection of some great circle on a sphere in 3 dimensions. The radius of this sphere is the semi-major axis of the ellipse. The center of this sphere is not the ellipse's focus, but instead its "center". The point on the ellipse gives a point on this great circle, so we get a "pointed great circle". Now take all pointed great circles on the sphere. Their projections give pointed ellipses, all having semi-major axes of the same length. All these ellipses have the same center, too, but they have different foci. Translate them so their focus is always at the origin! Now we have all possible pointed ellipses with the same semi-major axis and with focus at the origin. These are all possible orbits of a planet with a given fixed energy. Why? Because the energy of a planet in a given orbit is a function of the semi-major axis (together with the planet's mass, the Sun's mass and the gravitational constant). So, we've gotten a one-to-one correspondence between elliptical orbits of a planet with a given energy and pointed great circles on the sphere. This gives a way for the rotation group SO(3) to act on this set of elliptical orbits! The claim is that these SO(3) symmetries are the ones that, via Noether's theorem, give the angular momentum and Runge–Lenz vector as conserved quantities!

2. Tali says:

Hi John, enjoyed the post. I can't see why the group acting here is SO(3). Also, if you grab a planet at any point of its path, and move it to anywhere at the same distance from the sun, you will most probably get a non-periodic orbit, or am I wrong?

• John Baez says:

Tali wrote:

Yes, it's not obvious without calculations, and not obvious from my talk. I believe it can be made more obvious, and my comment, above yours, is my guess as to how. This contains a link to an earlier post here on Azimuth, about Göransson's work on this issue. Check that out!

If you do what I said in my post you'll get a periodic orbit, because the new orbit will have the same energy as the old one, and if an orbit for the inverse square force law is periodic, so are all other orbits of that energy. (They will all be ellipses with the same semi-major axis.)

3. Greg Egan says:

I just want to add some background that might be helpful. I know John already knows all this, but people reading about these higher-dimensional symmetries for the first time might not.

The Laplace–Runge–Lenz vector ( http://en.wikipedia.org/wiki/Laplace–Runge–Lenz_vector ) is usually defined as:

A = m v \times L - m k r/\left| r\right|

where k is the force constant for the Kepler problem. A points along the axis of symmetry of the orbit, from the centre of attraction towards the point of closest approach. It is conserved for any given orbit, and its length depends on the eccentricity of the orbit, going to zero for a perfectly circular orbit.

There is a rescaled version that has some nice properties.
If we define:

\displaystyle{ M = \frac{A}{\sqrt{-2 E m}} }

where E is the total energy (negative for a bound orbit), then M and the angular momentum vector L have a conserved sum of squares:

\displaystyle{L^2 + M^2 = \frac{k^2 m}{-2 E}}

There is a direct parallel between this equation and the Pythagorean equation between the traditional measures of an ellipse, a, b, c:

b^2 + c^2 = a^2

Here a is the semi-major axis of the ellipse, b is the semi-minor axis, and c is half the distance between the foci. These are related to physical quantities in the Kepler problem by:

\displaystyle{a = -\frac{k}{2 E}}

\displaystyle{b^2 = -\frac{L^2}{2 E m}}

\displaystyle{c^2 = -\frac{M^2}{2 E m}}

So a is fixed by energy alone, and then, for fixed energy, b is proportional to L, while c is proportional to the scaled LRL vector M. This sum of squares relationship between L and M suggests that they are acting like two parts of some higher-dimensional object of a fixed magnitude. And there is such a higher-dimensional object, in four dimensions (or three, for the planar Kepler problem). Probably the simplest version of it is the bivector:

B = L + e_w \wedge M

where the angular momentum vector L is construed as a bivector giving the plane of the rotation, and e_w is a unit vector orthogonal to the 2 or 3 dimensions of ordinary space.

What does this bivector mean? There are two ways we can associate the elliptical orbits of the planet with great circles on a sphere in one higher dimension. John has already described one, used by Göransson, where we project great circles on a higher-dimensional sphere orthogonally into ordinary space. The other way, with an older pedigree, is to consider the "hodogram" of the orbit: the curve traced out by the velocity of the planet. For the Kepler problem, this will always be a circle, but its centre will be displaced from the origin if the orbit itself is non-circular. All of these velocity circles, for a fixed total energy, can be constructed by stereographically projecting great circles from a higher-dimensional sphere into velocity space. In either case, the plane in which these higher-dimensional great circles lie is precisely that defined by our bivector, B. So we get an action of the rotations in the higher-dimensional space on B, which conserves the bivector's overall norm, and hence the sum-of-squares relationship between the associated L and M vectors for a fixed energy. In other words, rotating the bivector B in the higher-dimensional space gives us an action on pairs of vectors (L,M) that are angular momentum and Laplace–Runge–Lenz vectors for orbits of a particular energy.

• John Baez says:

Thanks for the overview, Greg! One question is whether the two ways of relating elliptical orbits of a certain fixed energy to great circles on the 3-sphere are 'the same', at least up to some normalizations. I think you're answering in the affirmative, since you're saying in either approach the great circle coming from an orbit lies in the plane defined by the bivector B = L + e_w \wedge M. Another way to get at the answer is to directly relate the circle traced out by the velocity vector of the elliptical orbit to the great circle in Göransson's approach. Have you tried that? The tricky thing here is that Göransson (and Souriau, and Moser) use a reparametrization of time to convert elliptical orbits into simple harmonic motion, so his velocity is not the true velocity. I'll explain this for the 3d Kepler problem, though the dimension doesn't really affect much.
Let \mathbf{r} = (x,y,z) be the position of our planet at time t. If we use this inverse square force law

\displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} }

and choose E = -1/2, conservation of energy says

\displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{1}{r} = -\frac{1}{2} }

If we choose a new parameter s so that

\displaystyle{ \frac{d s}{d t} = \frac{1}{r} }

and use a prime for the derivative with respect to s, we get

\displaystyle{ t' = \frac{dt}{ds} = r }

\displaystyle{ \mathbf{r}' = \frac{d\mathbf{r}}{ds} = \frac{dt}{ds}\frac{d\mathbf{r}}{dt} = r \dot{\mathbf{r}} }

Then conservation of energy can be rewritten as

\displaystyle{ (t' - 1)^2 + \mathbf{r}' \cdot \mathbf{r}' = 1 }

which is the equation of a 3-sphere in 4-dimensional space! It's a sphere of radius one centered at the point (t', \mathbf{r}') = (1, 0, 0, 0). With some further calculation we can show some other wonderful facts:

\mathbf{r}''' = -\mathbf{r}'

t''' = -(t' - 1)

so the velocity 4-vector (t',x',y',z') moves in simple harmonic motion, following a great circle on the 3-sphere… if velocity is computed using our new time coordinate, s. But I have not thought enough about how this is related to the circle traced out by the original velocity 3-vector (\dot{x}, \dot{y}, \dot{z}). If we take (t',x',y',z') and do a stereographic projection down to 3 dimensions, do we get (\dot{x}, \dot{y}, \dot{z})?

• Greg Egan says:

I'm confident of two things: (1) the semi-minor axis of an ellipse that is the orthogonal projection of a great circle is proportional to the cosine of the angle between the plane of the great circle and the plane it is projected on, and (2) in the usual stereographic projection of the velocity circle from a great circle, there is the same relationship between the angle of the plane of the great circle and the semi-minor axis of the elliptical orbit.

As far as I understand Göransson's approach, both the point (t-s,x,y,z) and the velocity wrt s move in great circles, and unless I'm confused, the two planes will necessarily be parallel. So the great circle traced out by (t',x',y',z') will also be parallel to the great circle in the standard approach, and up to some choice of scaling, both ought to project to the same circles in velocity space.

• John Baez says:

Okay, great! I think you're right: as a function of s, the point (t-s,x,y,z) moves around a great circle on the unit 3-sphere at unit speed. So, its velocity vector does the same, and in the same 2-plane.
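The sphere equation above is easy to verify numerically. A minimal sketch (in units where the force constant and mass are 1, with E = −1/2 as in the comment; the sampled state is an arbitrary example): pick a position with 0 < r < 2, give the planet the speed the energy constraint dictates, and evaluate (t′ − 1)² + r′ · r′.

```python
import numpy as np

rng = np.random.default_rng(1)
r_vec = rng.normal(size=3)
r_vec *= 1.5 / np.linalg.norm(r_vec)   # any radius 0 < r < 2 is allowed when E = -1/2
r = np.linalg.norm(r_vec)

speed = np.sqrt(2.0 / r - 1.0)         # from the energy constraint v^2/2 - 1/r = -1/2
direction = rng.normal(size=3)
v_vec = speed * direction / np.linalg.norm(direction)

t_prime = r                            # t' = r
r_prime = r * v_vec                    # r' = r * (dr/dt)
print((t_prime - 1)**2 + r_prime @ r_prime)  # 1.0 up to rounding
```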
4. Wolfgang says:

As a chemist I wonder: is this a symmetry taking one orbital type into another one? I once saw a picture about it in a book by Peter W. Atkins. Imagine a sphere whose surface is marked such that one hemisphere represents the + sign of a wavefunction and the other hemisphere the – sign. Now stereographically project it to the plane. If the sphere is oriented such that its polar axis is perpendicular to the plane you will get only one sign in the plane (similar to an s-orbital); if the axis is parallel to the plane you will get + and – separated by a nodal line (like a p-orbital). Thus, in the plane you create some very distinct "orbitals", while in space you only have to rotate the sphere. Of course the analogy to the hydrogen atom would set the stage one dimension higher, so it would be a 4D rotation that interchanges orbitals of different types. Unfortunately I never really looked for the mathematical details of the analogy, and I think it is yet another kind of hidden symmetry of the mathematical treatment of this kind of problem?

• John Baez says:

Wolfgang wrote:

Yes, exactly! All states of a given energy lie in a single irreducible representation of SO(4). What this means is that the hidden 4-dimensional rotation symmetries of the hydrogen atom can do things like take a 4s state to a 4p state or 4d state or 4f state. So, all 16 states here are related by 4-dimensional rotation symmetries, while 3d rotation symmetries only suffice to relate states in a given row. I don't understand the pictorial approach you're describing well enough to see if it's secretly a lower-dimensional analogue of what I'm talking about. I do know that what I'm talking about works in every dimension – if we posit atoms in other dimensions that are still governed by the inverse square force law, which is a bit odd.

• Wolfgang says:

I looked up the source: it's "Galileo's Finger – The Ten Great Ideas of Science" by Peter Atkins, chapter six, "Symmetry", and Fig. 6.7 on page 175. Unfortunately, since it is really a text addressed to laymen, it contains neither mathematics nor references where this idea would have been made more precise. I can only guess that Atkins might have written about it elsewhere, e.g. in his book on "Molecular Quantum Mechanics", but I have not checked it so far.

• Greg Egan says:

John wrote:

I'm not sure if I'm taking you more literally than you intended, or if I'm nitpicking unnecessarily, but although a generic element of SO(4) will certainly take a 4s state to a superposition of states that includes a 4p state (and a 4d state, and a 4f state), I'm not convinced that any element of SO(4) can take a 4s state to a 4p state. That is, given an eigenfunction of the 3D orbital angular momentum operator L^2 with \ell=0, I don't believe there is an element of SO(4) that takes it to an eigenfunction of L^2 with \ell=1. It's not inconceivable that there could be such an element, but it seems like it would take something much stronger than having an irreducible representation of SO(4) to make this true.

• John Baez says:

You're right; I was being pretty sloppy. It would be fun to dig into this issue, and see exactly which vectors you can get by applying an SO(4) element to an ns state, but I have too many other puzzles on my mind to tackle this one!

• John Baez says:

Here's a fun thing that's sort of related. It's true that an irreducible unitary representation of a group doesn't usually carry any unit vector to any other. But sometimes the group \mathrm{Spin}(n) acts transitively on the unit sphere of its irreducible spinor representations; this happens up to n = 6 but not beyond. One exciting case is n = 5, where \mathrm{Spin}(5) \cong \mathrm{Sp}(2) is the group of 2 × 2 quaternionic unitary matrices, and the spinors are \mathbb{H}^2: the quaternionic unitary matrices act transitively on the unit sphere in \mathbb{H}^2. Another is n = 6, where \mathrm{Spin}(6) \cong \mathrm{SU}(4) is the group of 4 × 4 complex unitary matrices with determinant 1, and the spinors are \mathbb{C}^4: these matrices act transitively on the unit sphere in \mathbb{C}^4. When you get beyond n = 6 there are pure spinors which are different, and 'better', than the rest.

5. John Baez says:

I got an email from someone who has dug deeper into the quantum mechanics of the 2d Kepler problem:

• Eyal Subag, Symmetries of the hydrogen atom and algebraic families.

Abstract. We show how the Schrödinger equation for the hydrogen atom in two dimensions gives rise to an algebraic family of Harish-Chandra pairs that codifies hidden symmetries.
The hidden symmetries vary continuously between SO(3), SO(2,1) and the Euclidean group O(2)⋉R². We show that solutions of the Schrödinger equation may be organized into an algebraic family of Harish-Chandra modules. Furthermore, we use Jantzen filtration techniques to algebraically recover the spectrum of the Schrödinger operator. This is a first application to physics of the algebraic families of Harish-Chandra pairs and modules developed in the work of Bernstein et al. [Int. Math. Res. Notices, rny147 (2018); rny146 (2018)].

One interesting thing about this paper is that it constructs a 'space of 2d hydrogen atom states' for any complex energy. The physical meaning of these seems open for exploration.

6. Thy Boy Who Lived says:

"Same velocity, AND SAME DISTANCE from the Sun at these points, so same total energy"

This is not correct according to the figure drawn. The four points shown are not at the same distance. All you have to do is draw a straight line from the Sun through the points on the ellipse, and you can clearly see that the points on the ellipse are much closer. Don't worry though, the error is only in the drawing, because they didn't draw the ellipse and the circle with the same exact total area. At that point the dotted arc would overlay the circle and should be eliminated because it would be redundant. Sorry to be picky like that but … you know … it's mathematics.

• Greg Egan says:

You're mistaken about the figure, unless you're viewing the web page on a device that distorts the aspect ratio; the dashed arc is perfectly circular and it is centred on the sun. I checked the image file itself, and all four points' centres are 231 pixels from the centre of the sun. I have no idea why you think the two orbits should have the same area (or why you think one of them is circular). The two orbits have the same semi-major axis (as required) and different semi-minor axes, so they do not have the same area. You could choose the new position for the planet so that its orbit was circular, and then the dashed arc would overlay it, but that would make for a far more confusing image because the generality of the construction would not be apparent.
Generative Models for Automatic Chemical Design

Daniel Schwalbe-Koda and Rafael Gómez-Bombarelli (2019)

Daniel Schwalbe-Koda, Department of Materials Science and Engineering, Massachusetts Institute of Technology, email: [email protected]
Rafael Gómez-Bombarelli, Department of Materials Science and Engineering, Massachusetts Institute of Technology, email: [email protected]

Materials discovery is decisive for tackling urgent challenges related to energy, the environment, health care and many others. In chemistry, conventional methodologies for innovation usually rely on expensive and incremental strategies to optimize properties from molecular structures. On the other hand, inverse approaches map properties to structures, thus expediting the design of novel useful compounds. In this chapter, we examine the way in which current deep generative models are addressing the inverse chemical discovery paradigm. We begin by revisiting early inverse design algorithms. Then, we introduce generative models for molecular systems and categorize them according to their architecture and molecular representation. Using this classification, we review the evolution and performance of important molecular generation schemes reported in the literature. Finally, we conclude by highlighting the prospects and challenges of generative models as cutting-edge tools in materials discovery.

1 Introduction

Innovation in materials is the key driver for many recent technological advances. From clean energy Tabor2018Accelerating to the aerospace industry Gibson2010review or drug discovery Chen2018rise , research in chemical and materials science is constantly pushed forward to develop compounds and formulae with novel applications, lower cost and better performance. Conventional methods for the discovery of new materials start from a well-defined set of substances from which properties of interest are derived. Then, intensive research on the relationship between structures and properties is performed. The gained insights from this procedure lead to incremental improvements in the compounds, and the cycle is restarted with a new search space to be explored. This trial-and-error approach to innovation often leads to costly and incremental steps towards the development of new technologies, and on occasion relies on serendipity for leaps in progress.
Materials development may require billions of dollars in investments DiMasi2016Innovation and up to 20 years to be deployed to the market DiMasi2016Innovation ; Tabor2018Accelerating . Despite the challenges associated with such direct approaches, they have not prevented data-driven discovery of materials from happening. High-throughput materials screening Shoichet2004Virtual ; Greeley2006Computational ; Alapati2006Identification ; Setyawan2011High ; Subramaniam2008Virtual ; Armiento2011Screening ; Jain2011high ; Curtarolo2013high ; Pyzer-Knapp2015What ; Gomez-Bombarelli2016Design and data mining Morgan2004High ; Ortiz2009Data ; Yu2012Identification ; Yang2012search ; Lin2012silico ; Mounet2018Two have been responsible for several breakthroughs in the last two decades Potyrailo2011Combinatorial ; Jain2016Computational , leading to the establishment of the Materials Genome Initiative NSTC2011Materials and multiple collaborative projects around the world built around databases and analysis pipelines Curtarolo2012AFLOWLIB.ORG ; Calderon2015AFLOW ; Jain2013Commentary ; Saal2013Materials . Automated, scalable approaches leverage data sets of thousands to millions of simulations to offer a cornucopia of insights on materials composition, structure and synthesis.

Developing materials with the inverse perspective departs from these traditional methods. Instead of exhaustively deriving properties from structures, the performance parameters are chosen beforehand and unknown materials satisfying these requirements are inferred. Hence, innovation in this setting is achieved by reverting the mapping between structures and their properties. Unfortunately, this approach is even harder than the conventional one. Inverting a given Hamiltonian is not a well-defined problem, and the absence of a systematic exploratory methodology may result in delays, or outright failure, of the discovery cycle of materials Sanchez-Lengeling2018Inverse . Furthermore, another major obstacle to the design of arbitrary compounds is the dimensionality of the missing data for known and unknown compounds Zunger2018Inverse . As an example, the breadth of accessible drug-like molecules can be on the order of 10^60 Polishchuk2013Estimation ; Virshup2013Stochastic , rendering manual searches or enumerations through the chemical space an intractable problem. In addition, molecules and crystal structures are discrete objects, which hinders automated optimization, and computer-generated candidates must follow a series of hard (valence rules, thermal stability) and soft (synthetic accessibility, cost, safety) constraints that may be difficult to state in explicit form. As the inverse chemical design holds great promise for economic, environmental and societal progress, one can ask how to rationalize the exploration of unknown substances and accelerate the discovery of new materials.

1.1 Early inverse design strategies for materials

The inverse chemical design is usually posed as an optimization problem in which molecular properties are extremized with respect to given parameters Joback1989Designing . This concept splits the inverse design problem into two parts: (i) efficiently sampling materials from an enormous configuration space, and (ii) searching for global maxima in their properties Kuhn1996Inverse corresponding to minima in their potential energy surface Wales1999Global ; Schoen2001Determination .
Early approaches towards inverse materials design used chemical intuition to address (i), narrowing down and navigating the space of structures under investigation with probabilistic methods Gani1983Molecular ; Marder1991Approaches ; Holmblad1996Designing ; Kuhn1996Inverse ; Sigmund1997Design ; Wolverton1997Invertible . Nevertheless, even constrained spaces can be too large to be exhaustively enumerated. Especially in the absence of an efficient exploratory policy, this discovery process demands considerable computational resources and time. Several different strategies are required to simultaneously navigate the chemical space and evaluate the properties of the materials under investigation.

Monte Carlo methods resort to statistical sampling to avoid enumerating a space of interest. When combined with simulated annealing Metropolis1953Equation , for example, they become adequate for locating extrema within property spaces. In physics, reverse Monte Carlo methods have long been developed to determine structural information from experimental data Kaplow1968Atomic ; Gerold1987determination ; McGreevy1988Reverse . However, the popularization of similar methods for de novo design of materials is more recent. Wolverton et al. Wolverton1997Invertible employed such methods to aid the design of alloys and avoid expensive enumeration of compositions, and Franceschetti and Zunger Franceschetti1999inverse improved the idea to design Al_xGa_(1-x)As and Ga_xIn_(1-x)P superlattices with tailored band gaps. They started with configurations sampled using Monte Carlo, relaxed the atomic positions using valence-force-field methods and calculated their band gaps by fast diagonalization of pseudopotential Hamiltonians. Through this practical process, they predicted superlattices with optimal band gaps after analyzing fewer than 10^4 compounds among 10^14 structures Franceschetti1999inverse .

Other popular techniques for multidimensional optimization that also involve a stochastic component are genetic algorithms (GAs) Holland1992Adaptation . Based on evolutionary principles, GAs refine specific parameters of a population that improve a targeted property. In materials design, GAs have been widely employed in the inverse design of small molecules Judson1993Conformational ; Glen1995genetic , polymers Venkatasubramanian1994Computer ; Venkatasubramanian1995Evolutionary , drugs Parrill1996Evolutionary ; Schneider2000De , biomolecules Gordon1999Branch ; Reetz2004Asymmetric , catalysts Wolf2000evolutionary , alloys Johannesson2002Combined ; Dudiy2006Searching , semiconductors Piquini2008Band ; dAvezac2012Genetic ; Zhang2013Genetic , and photovoltaic materials Yu2012Inverse . Furthermore, evolution-inspired approaches have been used as a general modeling tool to predict stable structures Brodmeier1994Application ; Woodley1999prediction ; Glass2006USPEXEvolutionary ; Oganov2006Crystal ; Froemming2009Optimizing ; Vilhelmsen2014genetic and Hamiltonian parameters Hart2005Evolutionary ; Blum2005Using . Many more applications of GAs in materials design are still being demonstrated decades after their inception Virshup2013Stochastic ; Rupakheti2015Strategy ; Reymond2015Chemical ; Le2016Discovery ; Jennings2019Genetic .
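To make the genetic-algorithm loop described above concrete, here is a minimal, self-contained sketch (an illustration only: the bit-string genome and the toy scoring function are hypothetical stand-ins for a molecular representation and a property evaluator such as a computed band gap):

```python
import random

random.seed(0)
GENES, POP, GENERATIONS = 16, 20, 40

def property_score(genome):
    # Toy stand-in for a property evaluation: reward genomes
    # matching an arbitrary target pattern.
    target = [i % 2 for i in range(GENES)]
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=property_score, reverse=True)
    parents = population[: POP // 2]                      # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]       # variation
    population = parents + children

print(property_score(max(population, key=property_score)))  # climbs toward 16
```

Selection keeps the better half of the population and variation (crossover plus mutation) refills it; over generations the best score approaches the maximum, at the price of one property evaluation per candidate per generation.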
Nonetheless, they still correspond to discrete optimization techniques in a combinatorial space, and require individual evaluation of their properties at every step. This discrete form hinders chemical interpolations and the definition of property gradients during optimization processes, thus retaining a flavor of “trial-and-error” in the computational design of materials, rather than an invertible structure-property mapping. One of the first attempts to use a continuous representation on the molecular design was performed by Kuhn and Beratan Kuhn1996Inverse . The authors varied coefficients in linear combination of atomic orbitals while keeping the energy eigenvalues fixed to optimize linear chains of atoms. Later, Lilienfeld et al.Lilienfeld2005Variational generalized the discrete nature of atoms by approximating atomic numbers by continuous functions and defining property gradients with respect to this “alchemical potential”. They used this theory to design ligands for proteins Lilienfeld2005Variational and tune electronic properties of derivatives of benzene Marcon2007Tuning . A similar strategy was proposed by Wang et al.Wang2006Designing around the same time. Instead of atomic numbers, a linear combination of atomic potentials was used as a basis for optimizations in property landscapes. Following the bijectiveness between potential and electronic density in the Hohenberg-Kohn theory Hohenberg1964Inhomogeneous , nuclei-electrons interaction potentials were employed as quasi-invertible representations of molecules. Potentials resulting from optimizations with property gradients can be later interpolated or approximated by a discrete molecular structure whose atomic coordinates give rise to a similar potential. Over the years, the approach was further refined within the tight-binding framework Xiao2008Inverse ; Balamurugan2008Exploring and gradient-directed Monte Carlo method Keinan2007Designing ; Hu2008gradient , its applicability demonstrated in the design of molecules with improved hyperpolarizability Wang2006Designing ; Keinan2007Designing ; Xiao2008Inverse and acidity Vleeschouwer2012Inverse . Despite these promising approaches, many challenges in inverse chemical design remain unsolved. Monte Carlo and genetic algorithms share the complexity of discrete optimization methods over graphs, particularly exacerbated by the rugged property surfaces. They rely on stochastic steps that struggle to capture the interrelated hard and soft constraints of chemical design: converting a single into a double bond may produce a formally valid, but impractical and unacceptable molecule depending on chemical context. On the other hand, a compromise between validity and diversity of the chemical space is difficult to achieve with continuous representations. Lastly, finding optimal points in the 3D potential energy surface that produce a desired output is still not the same as molecular optimization, since the generated “atom cloud” may not be a local minimum, stable enough in operating conditions, or synthetically attainable. An ideal inverse chemical design tool would offer the best of the two worlds: an efficient way to sample valid and acceptable regions of the chemical space; a fast method to calculate properties from given structures; a differentiable representation for a wide spectrum of materials; and the capacity to optimize them using property gradients. Furthermore, it should operate on the manifold of synthetically accessible, stable compounds. 
This is where modern machine learning (ML) algorithms come into play.

1.2 Deep learning and generative models

Deep learning (DL) is emerging as a promising tool to address the inverse design of many different applications. Particularly through generative models, algorithms in DL push forward how machines understand real data. Roughly speaking, the role of a generative model is to capture the underlying rules of a data distribution. Given a collection of (training) data points {X_i} in a space 𝒳, a model is trained to match the data distribution P_X by means of a generative process P_G, in such a way that generated data Y ∼ P_G resembles the real data X ∼ P_X. Earlier generative models such as Boltzmann Machines Hinton1986Learning ; Hinton1983Optimal , Restricted Boltzmann Machines Smolensky1986Information , Deep Belief Networks Hinton2006Fast or Deep Boltzmann Machines Salakhutdinov2009Deep were the first to tackle the problem of learning probability distributions based on training examples. Their lack of flexibility, tractability and generalizing ability, however, rendered them obsolete in favor of more modern ones Goodfellow2016Deep .

Current generative models have been successful in learning and generating novel data from different types of real-world examples. Deep neural networks trained on image datasets are able to produce realistic-looking house interiors, animals, buildings, objects and human faces Karras2017Progressive ; Goodfellow2014Generative , as well as embed pictures with artistic style Gatys2015Neural or enhance them with super-resolution Ledig2016Photo . Other examples include convincing text Bowman2015Generating ; Xu2015Show , music Mehri2016SampleRNN , voices Oord2016WaveNet and videos Vondrick2016Generating synthesized by such networks. Most interesting is the creation of novel data conditioned on latent features, which allows tuning models with vector and arithmetic operations in a property space Radford2015Unsupervised ; Engel2017Latent . The adaptable architectures of these models also enable straightforward training procedures based on backpropagation LeCun2015Deep . Within the DL framework, a proper loss function drives gradients so that the generative model, typically parameterized by a neural network, learns to minimize the distance between the two distributions.

Among the popular architectures for generating data from deep neural networks, the Variational Auto-Encoder (VAE) Kingma2013Auto is a particularly robust architecture. It couples inference and generation by mapping data to a manifold conditioned on implicit data descriptors. To do so, the model is trained to learn the identity function while constrained by a dimensional bottleneck called the latent space (see Fig. 1a). In this scheme, data is first encoded to a probability distribution Q_ϕ(𝐳|X) matching a given prior distribution P_z(𝐳), where 𝐳 is called the latent vector. Then, a sample from the latent space is reconstructed with the generative algorithm P_θ(X|𝐳). In the VAE Kingma2013Auto , outcomes of both processes are parameterized by ϕ and θ to maximize a lower bound for the log-likelihood of the output with respect to the input data distribution. The VAE objective is, therefore,

ℒ(θ,ϕ) = −D_KL(Q_ϕ(𝐳|X) ∥ P_z(𝐳)) + 𝔼_{𝐳∼Q_ϕ}[log P_θ(X|𝐳)].   (1)

The encoder is regularized with the divergence term D_KL, while the decoder is penalized by a reconstruction error log P_θ(X|𝐳), usually in the form of mean-squared or cross-entropy losses. This maximization can then be performed by stochastic gradient ascent.
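A minimal sketch of the objective in Eq. (1), assuming a PyTorch environment, a Gaussian prior and posterior, and a Bernoulli decoder (the layer sizes and data are arbitrary placeholders, not the models of the works cited):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=128, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 64)
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Negative of Eq. (1): reconstruction error plus the analytic Gaussian KL term.
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = TinyVAE()
x = torch.rand(8, 128)            # stand-in for a batch of encoded data
x_hat, mu, logvar = model(x)
loss = vae_loss(x_hat, x, mu, logvar)
loss.backward()                   # gradients for a stochastic optimization step
```

Minimizing recon + kl is equivalent to maximizing the lower bound in Eq. (1); the kl term is the Kullback-Leibler divergence between the Gaussian posterior and the standard-normal prior, which has a closed form in this case.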
The probabilistic nature of VAE manifolds approximately accounts for many complex interactions between data points. Although functional in many cases, the modeled data distribution does not always converge to the real data distribution Arjovsky2017Wasserstein . Furthermore, Kullback-Leibler or Jensen-Shannon divergences cannot be analytically computed for an arbitrary prior, and most works are restricted to Gaussian distributions. Avoiding high-variance methods to determine this regularizing term is also an important concern. Recently, this limitation was eased by employing the Wasserstein distance as a penalty for the encoder regularization Arjovsky2017Wasserstein ; Tolstikhin2017Wasserstein . As a result, richer latent representations are computed more efficiently within Wasserstein Auto-Encoders, resulting in disentanglement, latent shaping, and improved reconstruction Rubenstein2018Latent ; Arjovsky2017Wasserstein ; Tolstikhin2017Wasserstein .

Another approach to generative models are the Generative Adversarial Networks (GANs) Goodfellow2014Generative . Recognized by their sharp reconstructions, GANs are constructed by making two neural networks compete against each other until a Nash equilibrium is found. One of the networks is a deterministic generative model. It applies a non-linear set of transformations to a prior probability distribution P_z in order to match the real data distribution P_X. Interestingly, the generator (or actor) only receives the prior distribution as input, and has no contact with the real data whatsoever. It can only be trained through a second network, called the discriminator or critic. The latter tries to distinguish real data X ∼ P_X from fake data Y = G(𝐳) ∼ P_G, as depicted in Fig. 1b. The objective of the critic is to perfectly distinguish between P_X and P_G, thus maximizing the prediction accuracy. On the other hand, the generator tries to fool the discriminator by creating data points that look like real data points, minimizing the prediction accuracy of the critic. Consequently, the complete GAN objective is written as Goodfellow2014Generative

min_G max_D V(D,G) = 𝔼_{X∼P_X}[log D(X)] + 𝔼_{𝐳∼P_z}[log(1 − D(G(𝐳)))].   (2)

Despite the impressive results from GANs, their training process is highly unstable. The min-max problem requires well-balanced training of both networks to ensure non-vanishing gradients and convergence to a successful model. Furthermore, GANs do not reward diversity of generated samples, and the system is prone to mode collapse. There is no reason why the generated distribution P_G should have the same support as the original data P_X, and the actor may produce only a handful of different examples which are realistic enough. This does not happen for the VAE, since the log-likelihood term gives an infinite loss for a generated data distribution whose support is disjoint from that of the original data distribution. Several different architectures have been proposed to address these issues among GANs Mirza2014Conditional ; Chen2016InfoGAN ; Arjovsky2017Wasserstein ; Che2016Mode ; Odena2016Conditional ; Mao2016Least ; Hjelm2017Boundary ; Zhao2016Energy ; Nowozin2016f ; Donahue2016Adversarial ; Berthelot2017BEGAN ; Gulrajani2017Improved ; Yi2017DualGAN . Although many of them may be equivalent to a certain extent Lucic2017Are , steady progress is being made in this area, especially through more complex ways of approximating data distributions, such as with f-divergences Nowozin2016f or optimal transport Arjovsky2017Wasserstein ; Gulrajani2017Improved ; Berthelot2017BEGAN .
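A comparable sketch for one optimization step of Eq. (2), again assuming PyTorch (the networks are placeholder multilayer perceptrons, and the generator update uses the common non-saturating variant of the loss rather than the literal min-max form):

```python
import torch
import torch.nn as nn

x_dim, z_dim = 128, 16
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(8, x_dim)      # stand-in for a batch of real data
z = torch.randn(8, z_dim)        # sample from the prior P_z

# Discriminator (critic) step: push D(real) -> 1 and D(G(z)) -> 0.
opt_D.zero_grad()
loss_D = bce(D(real), torch.ones(8, 1)) + bce(D(G(z).detach()), torch.zeros(8, 1))
loss_D.backward()
opt_D.step()

# Generator (actor) step: fool the critic by pushing D(G(z)) -> 1.
opt_G.zero_grad()
loss_G = bce(D(G(z)), torch.ones(8, 1))
loss_G.backward()
opt_G.step()
```

Note that the generator never sees `real`: it is trained only through the critic's gradients, as described above.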
Other models, such as the auto-regressive PixelRNN Oord2016Pixel and PixelCNN Oord2016Conditional ; Salimans2017PixelCNN++ , have also been successful as generators of images Oord2016Pixel ; Oord2016Conditional ; Salimans2017PixelCNN++ , video Kalchbrenner2016Video , text Kalchbrenner2016Neural and sound Oord2016WaveNet . Differently from VAEs and GANs, these models approximate the data distribution by a tractable factorization of P_X. For example, in an n×n image, the generative model P(X) is written as Oord2016Pixel

P(X) = ∏_{i=1}^{n²} P(x_i | x_1, …, x_{i−1}),   (3)

where each x_i is a pixel generated by the model (see Fig. 1c). These models with explicit distributions yield samples with very good negative log-likelihood and diversity Oord2016Pixel . Model evaluation is also straightforward, given the explicit computation of P(X). As a drawback, however, these models rely on the sequential generation of data, which is a slow process. A diagram of the architectures of the three generative models discussed here is shown in Fig. 1.

Figure 1: Schematic diagrams for three popular generative models: (a) VAE, (b) GAN, and (c) auto-regressive.

1.3 Generative models meet chemical design

Apart from their numerous aforementioned applications, generative models are also attracting attention in chemistry and materials science. DL is being employed not only for the prediction and identification of properties of molecules, but also to generate new chemical compounds LeCun2015Deep . In the context of inverse design, generative models provide benefits such as: generating complex samples from simple probability distributions; providing meaningful latent representations, over which optimizations can be performed; and the ability to perform inference when coupled to supervised models. Therefore, unifying generative models with chemical design is a promising avenue to accelerate innovation in chemistry and related fields.

To go beyond the limitations of traditional inverse design strategies, an ideal way to discover new materials should satisfy some requisites Gomez-Bombarelli2018Automatic . To be completely hands-free, the model should be data-driven, thus avoiding fixed libraries and expensive labeling. It is also desirable that it outputs as many potential molecules as possible within a subset of interest, which means that the model needs a powerful generator coupled with a continuous representation for molecules. Furthermore, such a representation should be interpretable, allowing a correct description of structure-property relationships within molecules. If, additionally, the model is differentiable, it is possible to optimize certain properties using gradient techniques and, later, look for molecules satisfying such constraints. The development of such a tool is currently a priority for ML models in chemistry and for inverse chemical design. It relies primarily on two decisions: which model to use and how to represent a molecule in a computer-friendly way.

Following our brief introduction to the early inverse design strategies and the main generative models in the literature, we describe which molecular representations are possible. In quantum mechanics, a molecular system is represented by a wavefunction that is a solution of the Schrödinger equation for that particular molecule. To derive most properties of interest, the spatial wavefunction is enough. Computing such a representation, however, is equivalent to solving an (approximate) version of the Schrödinger equation itself.
1.3 Generative models meet chemical design

Apart from their numerous aforementioned applications, generative models are also attracting attention in chemistry and materials science. DL is being employed not only for the prediction and identification of properties of molecules, but also to generate new chemical compounds LeCun2015Deep . In the context of inverse design, generative models provide benefits such as: generating complex samples from simple probability distributions; providing meaningful latent representations, over which optimizations can be performed; and the ability to perform inference when coupled to supervised models. Therefore, unifying generative models with chemical design is a promising avenue to accelerate innovation in chemistry and related fields. To go beyond the limitations of traditional inverse design strategies, an ideal way to discover new materials should satisfy some requirements Gomez-Bombarelli2018Automatic . To be completely hands-free, the model should be data-driven, thus avoiding fixed libraries and expensive labeling. It is also desirable that it outputs as many potential molecules as possible within a subset of interest, which means that the model needs a powerful generator coupled with a continuous representation for molecules. Furthermore, such a representation should be interpretable, allowing a correct description of structure-property relationships within molecules. If, additionally, the model is differentiable, it is possible to optimize certain properties using gradient techniques and, later, look for molecules satisfying such constraints. The development of such a tool is currently a priority for ML models in chemistry and for inverse chemical design. It relies primarily on two decisions: which model to use and how to represent a molecule in a computer-friendly way. Following our brief introduction to the early inverse design strategies and the main generative models in the literature, we now describe which molecular representations are possible.

In quantum mechanics, a molecular system is represented by a wavefunction that is a solution of the Schrödinger equation for that particular molecule. To derive most properties of interest, the spatial wavefunction is enough. Computing such a representation, however, is equivalent to solving an (approximate) version of the Schrödinger equation itself. Many methods of theoretical chemistry, such as Hartree-Fock Hartree1928Wave ; Fock1930Naherungsmethode or Density Functional Theory Hohenberg1964Inhomogeneous ; Kohn1965Self , represent molecules using wavefunctions or electronic densities and obtain other properties from them. Solving quantum chemical calculations is computationally demanding in many cases, though. The idea behind many ML methods is not only to avoid these calculations, but also to build generalizable models that highlight different aspects of chemical intuition. Therefore, we should look for other representations of chemical structures. Thousands of different descriptors are available for chemical prediction methods Todeschini2000Handbook . Several relevant features for ML have demonstrated their capabilities for predicting properties of molecules, such as fingerprints Rogers2010Extended , bag-of-bonds Hansen2015Machine , Coulomb matrices Rupp2012Fast , deep tensor neural networks trained on the distance matrix Schuett2017Quantum , the many-body tensor representation Huo2017Unified , SMILES strings Weininger1988SMILES , and graphs Kearnes2016Molecular ; Duvenaud2015Convolutional ; Gilmer2017Neural . Not all representations are invertible for human interpretation, however. To teach a generative model how to create a molecule, it may suffice for it to produce a fingerprint, for example. However, mapping an arbitrary fingerprint back to a molecule is an extra step of complexity equivalent to the generation of libraries. This is undesirable in a practical generative model. In this chapter, we focus on two easily interpretable representations, SMILES strings and molecular graphs, and how generative models perform with these representations. Examples of these two forms of writing a molecule are shown in Fig. 2.

Figure 2: Two popular ways of representing a molecule using: (a) SMILES strings converted to one-hot encoding; or (b) a graph derived from the Lewis structure.

2 Chemical generative models

2.1 SMILES representation

SMILES (Simplified Molecular Input Line Entry System) strings have been widely adopted as a representation for molecules Weininger1988SMILES . Through graph-to-text mapping algorithms, SMILES encodes atoms by atomic number and aromaticity, and can capture branching, cycles, ionization, etc. The same molecule can be represented by multiple SMILES strings, so a canonical representation is typically chosen, although some works leverage non-canonical strings as a data augmentation and regularization strategy. Although SMILES are inferior to the more modern InChI (International Chemical Identifier) representation in their ability to address key challenges of representing molecules as strings, such as tautomerism, mesomerism, and some forms of isomerism, SMILES follow a much simpler syntax that has proven easier for ML models to learn. Since SMILES rely on a sequence-based representation, natural language processing (NLP) algorithms in deep learning can be naturally extended to them. This allows the transfer of several architectures from the NLP community to interpret the chemical world. Mostly, these systems make use of recurrent neural networks (RNNs) to condition the generation of the next character on the previous ones, creating arbitrarily long sequences character by character Goodfellow2016Deep .
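Since the methods discussed next consume SMILES as sequences, a minimal sketch of the one-hot encoding of Fig. 2a may be helpful. The character vocabulary and maximum length below are illustrative choices of ours; real applications derive both from the dataset and need a proper tokenizer for multi-character tokens.

```python
import numpy as np

# Illustrative character vocabulary; real applications build it from the data.
VOCAB = ['PAD', 'C', 'N', 'O', '(', ')', '=', '1', 'c']
CHAR_TO_IDX = {ch: i for i, ch in enumerate(VOCAB)}
MAX_LEN = 12

def smiles_to_onehot(smiles, max_len=MAX_LEN):
    """One-hot encode a SMILES string, padded to a fixed length.
    Note: multi-character tokens such as 'Cl' or '[nH]' need a real tokenizer."""
    onehot = np.zeros((max_len, len(VOCAB)), dtype=np.float32)
    for i, ch in enumerate(smiles[:max_len]):
        onehot[i, CHAR_TO_IDX[ch]] = 1.0
    for i in range(len(smiles), max_len):           # pad the remainder
        onehot[i, CHAR_TO_IDX['PAD']] = 1.0
    return onehot

x = smiles_to_onehot('CC(=O)N')  # acetamide
print(x.shape)  # (12, 9): one row per position, one column per character
```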
The order of the sequence is very relevant to generating a valid molecule, and such restrictions can typically be captured by RNNs with long short-term memory (LSTM) cells Hochreiter1997Long , gated recurrent units (GRUs) Chung2014Empirical , or stack-augmented memory Popova2018Deep . A simple way of generating molecules using only RNN architectures is to train them extensively on valid SMILES from a database of molecules (a minimal sketch of such a character-level generator is given at the end of this subsection). This requires post-processing analyses, as it resembles traditional library generation. As a proof of concept, Ikebata et al. Ikebata2017Bayesian used SMILES strings to design small organic molecules by employing Bayesian sampling with sequential Monte Carlo. Ertl et al. Ertl2017silico instead generated molecules using LSTM cells and later employed them in a virtual screening for properties.

Generating libraries, however, is not enough for the automatic discovery of chemical compounds. Asking an RNN-based model to simply create SMILES strings does not improve on the rational exploration of the chemical space. In general, the design of new molecules is also oriented towards certain properties, like solubility, toxicity, and drug-likeness Gomez-Bombarelli2018Automatic , which are not necessarily incorporated in the training process of RNNs. In order to bias the generation of molecules and better investigate a subset of the chemical space, Segler et al. Segler2018Generating used transfer learning to first train the RNN on a whole dataset of molecules and later fine-tune the model towards the generation of molecules with physico-chemical properties of interest. This two-part approach allows the model to first learn the grammar inherent to SMILES and then create new molecules based only on the most interesting ones. In line with this focused search, Gupta et al. Gupta2017Generative demonstrated the application of transfer learning to grow molecules from fragments. This technique is particularly useful for drug discovery Chen2018rise ; Ching2018Opportunities , in which the search of the chemical space usually begins from a known substructure with certain desired functionalities.

Recently, the use of reinforcement learning (RL) to generate molecules with certain properties became popular among generative models. Since the representation of a molecule using SMILES requires the generator to output a sequence of characters, each decision can be considered an action. The successful completion of a valid SMILES string is associated with a reward, for example, and undesired features in the sequence are penalized. Jaques et al. Jaques2017Sequence used RL to impose structure on sequence generation, avoiding repeating patterns not only in SMILES strings but also in text and music. By penalizing large rings, short sequences of characters, and long, monotonous carbon chains, they were able to increase the number of valid molecules their model produced. Olivecrona et al. Olivecrona2017Molecular demonstrated the use of augmented episodic likelihood and traditional policy gradient methods to tune the generation of molecules from an RNN. Their method achieved 94% validity when generating molecules sampled from a prior distribution. It was also taught to avoid functional groups containing sulfur and to generate structures similar to a given structure or with certain target activities. Similarly, Popova et al. Popova2018Deep designed molecules for drugs using a stack-augmented RNN, which demonstrated an improved capacity to capture the grammar of SMILES while using RL to tune synthetic accessibility, solubility, inhibition, and other properties.
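The character-level RNN generators described above can be sketched as follows. The toy alphabet, layer sizes, and module names are ours, and the training loop (cross-entropy on next-character prediction over a SMILES corpus) is omitted; an untrained model produces gibberish.

```python
import torch
import torch.nn as nn

VOCAB = ['^', '$', 'C', 'N', 'O', '(', ')', '=', '1']  # ^ start, $ end; toy alphabet
stoi = {c: i for i, c in enumerate(VOCAB)}

class CharRNN(nn.Module):
    """Character-level SMILES generator: predicts the next token from previous ones."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, idx, state=None):
        h, state = self.lstm(self.emb(idx), state)
        return self.out(h), state

@torch.no_grad()
def sample(model, max_len=40):
    idx = torch.tensor([[stoi['^']]])
    state, chars = None, []
    for _ in range(max_len):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        idx = torch.multinomial(probs, 1).view(1, 1)  # sample next character
        ch = VOCAB[idx.item()]
        if ch == '$':                                 # end token terminates the string
            break
        chars.append(ch)
    return ''.join(chars)

model = CharRNN(len(VOCAB))
print(sample(model))  # meaningless until trained on a SMILES corpus
```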
As the degree of abstraction grows in molecule design, more complex generative models are proposed to explore the chemical space. VAEs, for example, can include a direct mapping between structures and properties and vice versa. The joint training of an encoder and a decoder can approximate very complex data distributions using a real-valued, compressed representation, which is essential for improving the search for chemical compounds. Since the latent space is meaningful, the generator learns to associate patterns in the latent space with properties of the real data. After both the encoding and the decoding networks are jointly trained, the generative model can be decoupled from the inference step, and the latent variables then become the space for exploration. Therefore, VAEs map the original chemical space to a continuous, differentiable space conveying all the information about the original molecules, over which optimization can be performed (a minimal sketch of the underlying objective is given at the end of this subsection). Additionally, conditional generation of molecules based on properties is possible without hand-made constraints on SMILES, and semi-supervised methods can be used to tune the model with relevant properties. This approach is closer to the ideal, automatic chemical generative model discussed earlier. Using RNNs as both encoder and decoder, Gómez-Bombarelli et al. Gomez-Bombarelli2018Automatic trained a VAE on prediction and reconstruction tasks for molecules extracted from the QM9 and ZINC datasets. The latent space allowed not only sampling of molecules but also interpolations, reconstruction, and optimization using a Gaussian process predictor trained on the latent space (Fig. 3). Kang and Cho Kang2018Conditional used partial annotation of molecules to train a semi-supervised VAE to decrease the error of property prediction and to generate molecules conditioned on targets. The approach can also be enhanced in combination with other dimensionality reduction algorithms Sattarov2019De . Within the chemical world, VAEs based on sequences also show promise for investigating proteins Sinai2017Variational , learning chemical interactions between molecules Kwon2017DeepCCI , designing organic light-emitting diodes Kim2018Deep , and generating ligands Mallet2019Leveraging ; Lim2018Molecular .

Figure 3: Variational Auto-Encoder for chemical design. The architecture in (a) allows for property optimization in the latent space, as depicted in (b). Figure reproduced from Gomez-Bombarelli2018Automatic .
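As promised above, here is a minimal sketch of the VAE objective underlying these models: one-hot sequences are encoded into a Gaussian latent code via the reparameterization trick and decoded back into per-position character logits. The deliberately small feed-forward architecture is a stand-in of ours, not the RNN-based encoder/decoder of the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqVAE(nn.Module):
    """Toy sequence VAE: flatten one-hot SMILES, encode to a Gaussian latent,
    decode back to per-position character logits."""
    def __init__(self, seq_len=12, vocab=9, latent=16):
        super().__init__()
        self.seq_len, self.vocab = seq_len, vocab
        self.enc = nn.Linear(seq_len * vocab, 64)
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, seq_len * vocab))

    def forward(self, x):                       # x: (batch, seq_len, vocab) one-hot
        h = torch.relu(self.enc(x.flatten(1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        logits = self.dec(z).view(-1, self.seq_len, self.vocab)
        return logits, mu, logvar

def vae_loss(logits, x, mu, logvar):
    # Reconstruction term: per-position cross-entropy against the true characters.
    recon = F.cross_entropy(logits.transpose(1, 2), x.argmax(-1), reduction='sum')
    # Regularization term: KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

After training, the decoder alone can be sampled from the prior, and property optimization can be run over the continuous latent code, as in Fig. 3b.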
In the field of molecule generation, GANs usually appear associated with RL. To fine-tune the generation of long SMILES strings, Guimaraes et al. Guimaraes2017Objective employed a Wasserstein GAN Arjovsky2017Wasserstein with a stochastic policy that increased the diversity, optimized the properties, and maintained the drug-likeness of the generated samples. Sanchez-Lengeling et al. Sanchez-Lengeling2017Optimizing and Putin et al. Putin2018Reinforced further improved upon this work to bias the distribution of generated molecules towards a goal. In addition, Mendez-Lucio et al. Mendez-Lucio2018De used a GAN to generate molecules conditioned on gene expression signatures, which is particularly useful to create active compounds towards a certain target. Similarly to what is done with molecules, Killoran et al. Killoran2017Generating employed a GAN to create realistic samples of DNA sequences from a small subset of configurations. The model was also tuned to design DNA chains adapted to protein binding and to look for motifs representing functional roles. Adversarial training was also employed in drug discovery using molecular fingerprints (as opposed to an invertible representation) Kadurin2017druGAN ; Kadurin2017cornucopia ; Blaschke2018Application and SMILES Polykovskiy2018Entangled . However, avoiding unstable training and mode collapse while generating molecules is still a hindrance to the use of GANs in chemical design.

Although SMILES have proved to be a reliable representation for molecule generation, their sequential nature imposes some constraints on the architectures being learned. Forcing an RNN to implicitly learn the linguistic rules of SMILES poses additional difficulties for the model under training. Additionally, decoding a sequence of generated characters into a valid molecule is especially difficult. In Gomez-Bombarelli2018Automatic , the rate of success when decoding molecules depended on the proximity of the latent point to a valid molecule, and could be as low as 4% for random points in the latent space. Although RL is an alternative to reward the generation of valid molecules Jaques2017Sequence ; Guimaraes2017Objective ; Sanchez-Lengeling2017Optimizing , other architectural changes can also circumvent this difficulty. Techniques to generate valid sequences imported from NLP studies include: using revision to improve the outcome of sequences Mueller2017Sequence ; adding a validator to the decoder to generate more valid samples Janet2018Accelerating ; introducing a grammar within the VAE to teach the model the fundamentals of SMILES strings Kusner2017Grammar ; using compiler theory to constrain the decoder to produce syntactically and semantically correct data Dai2018Syntax ; and using machine translation methods to convert between representations of sequences and/or grammars Winter2019Learning . Validity of generated sequences, however, is not the only thing that makes working with SMILES difficult. Edit distances between SMILES strings do not reflect similarity between molecules Jin2018Junction , and a single molecule may be written as several different SMILES strings Bjerrum2017SMILES ; Alperstein2019All . The trade-off between processing this representation with text-based algorithms and discarding its chemical intuition calls for other approaches in the study and design of molecules.

2.2 Molecular graphs

An intuitive way of representing molecules is by means of their Lewis structures, computationally translated as molecular graphs. Given a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, atoms are represented as nodes $v_i \in \mathcal{V}$ and chemical bonds as edges $(v_i, v_j) \in \mathcal{E}$. Nodes and edges are then decorated with labels indicating the atom type, bond type, and so on. Hydrogen atoms are often treated implicitly for simplicity, since their presence can be inferred from traditional chemistry rules.
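A small sketch of this labeled-graph representation, assuming the RDKit cheminformatics package for SMILES parsing; the function and variable names are ours.

```python
import numpy as np
from rdkit import Chem  # assumes RDKit is installed

def mol_to_graph(smiles):
    """Build a simple labeled graph G = (V, E): node features are atomic numbers,
    edges carry bond orders. Hydrogens stay implicit, as is common."""
    mol = Chem.MolFromSmiles(smiles)
    nodes = np.array([atom.GetAtomicNum() for atom in mol.GetAtoms()])
    n = len(nodes)
    adj = np.zeros((n, n))
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        adj[i, j] = adj[j, i] = bond.GetBondTypeAsDouble()  # 1.0, 1.5 (aromatic), 2.0, ...
    return nodes, adj

nodes, adj = mol_to_graph('CC(=O)O')  # acetic acid
print(nodes)  # [6 6 8 8] -> two carbons, two oxygens
print(adj)    # symmetric adjacency matrix labeled with bond orders
```

Such node-feature and adjacency matrices are the typical inputs of the graph convolutional models discussed next.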
One of the first uses of graphs with DL for property prediction treated molecules as undirected cyclic graphs further processed using RNNs Lusci2013Deep . Using graph convolutional networks Bruna2013Spectral , Duvenaud et al. Duvenaud2015Convolutional demonstrated the use of machine-learned fingerprints to achieve better prediction of properties with neural networks. This approach started from a molecular graph and arrived at fixed-size fingerprints after several graph convolution and graph pooling layers. Kearnes et al. Kearnes2016Molecular and Coley et al. Coley2017Convolutional also evaluated the flexibility and promise of fingerprints learned from graph structures, especially because the models could learn to associate a molecule's chemical structure with its properties. Later, Gilmer et al. Gilmer2017Neural unified graph convolutions as message-passing neural networks for quantum chemistry predictions, achieving DFT accuracy in their predictions of quantum properties by interpreting molecular 3D geometries as graphs with distance-labelled edges. Many more studies have explored the representational power of graphs in prediction tasks Hop2018Geometric ; Yang2019Are . These frameworks paved the way for graph-based representations of molecules, especially because of their closeness to chemical and geometrical intuition.

The generation of graphs is, however, non-trivial, especially because of the challenges imposed by graph isomorphism. As with SMILES strings, one way to generate molecular graphs is by sequentially adding nodes and edges to the graph. This sequential decision process over graphs has already been implemented using an RNN You2018GraphRNN for arbitrary graphs. Specifically for the small subset of graphs corresponding to valid molecules, Li et al. Li2018Multi used a decoder policy to improve the outcomes of the model. The conditional generation of graphs allowed molecules to be created with improved drug-likeness and synthetic accessibility, and also allowed scaffold-based generation from a template (Fig. 4a). A similar procedure was adopted by Li et al. Li2018Learning , in which a graph-generating decision process using RNNs was proposed for molecules (a toy version of this decision loop is sketched below). These node-by-node generation schemes rely on the ordering of nodes in the molecular graph and thus suffer under random permutations of the nodes.

Figure 4: Generative models for molecules using graphs. (a) Decision process for sequential generation of molecules from Li2018Multi . (b) Junction Tree VAE for molecular graphs Jin2018Junction . Figures reproduced from Li2018Multi ; Jin2018Junction .
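To make the node-by-node decision process above concrete, here is a toy generation loop with a random stand-in policy; a real model such as those just cited conditions each add-node/add-edge/stop decision on the partial graph.

```python
import numpy as np

rng = np.random.default_rng(1)
ATOM_TYPES = ['C', 'N', 'O']

def generate_graph(max_nodes=8):
    """Toy node-by-node generation: at each step decide to stop or to add an
    atom, then connect it to an existing atom. A real model (e.g., an RNN
    over the partial graph) would parameterize these random choices."""
    nodes, edges = [], []
    while len(nodes) < max_nodes:
        if nodes and rng.random() < 0.2:          # "stop" action
            break
        nodes.append(str(rng.choice(ATOM_TYPES))) # "add node" action
        if len(nodes) > 1:                        # "add edge" action to a previous node
            partner = int(rng.integers(0, len(nodes) - 1))
            edges.append((partner, len(nodes) - 1))
    return nodes, edges

print(generate_graph())
```

Note that the output depends on the order in which nodes are added, which is precisely the permutation sensitivity mentioned above.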
In the VAE world, several methods have been proposed to deal with the problem of directly generating graphs from a latent code Kipf2016Variational ; Simonovsky2018GraphVAE ; Grover2018Graphite ; Samanta2018Designing ; Liu2018Constrained . However, when working with reconstructions, the problem of graph isomorphism cannot be addressed without expensive calculations Simonovsky2018GraphVAE . Furthermore, graph reconstructions suffer from low validity and accuracy Simonovsky2018GraphVAE , except when these constraints are enforced in the graph generation process Samanta2018Designing ; Liu2018Constrained ; Ma2018Constrained . Currently, one of the most successful approaches to translate molecular graphs into a meaningful latent code while avoiding node-by-node generation is the Junction Tree Variational Auto-Encoder (JT-VAE) Jin2018Junction . In this framework, the molecular graph is first decomposed into a vocabulary of substructures extracted from the training set, which include rings, functional groups, and atoms (see Fig. 4b). Then, the model is trained to encode the full graph and the tree structure resulting from the decomposition into two latent spaces. A two-part reconstruction process recovers the original molecule from the two vector representations. Remarkably, the JT-VAE achieves 100% validity when generating small molecules, as well as 100% novelty when sampling the latent code from a prior. Moreover, a meaningful latent space is also observed for this method, which is essential for optimization and the automatic design of molecules. The authors later improved on the JT-VAE with graph-to-graph translation and auto-regressive methods for molecular optimization tasks Jin2019Learning ; Jin2019Multi .

Other auto-regressive approaches combining VAEs and sequential graph generation have been proposed to generate and optimize molecules. Assouel et al. Assouel2018DEFactor introduced a decoding strategy to output arbitrarily large molecules based on their graph representation. The model, named DEFactor, is end-to-end differentiable, dispenses with retraining during the optimization procedure, and achieved high reconstruction accuracy (>80%) even for molecules with about 25 heavy atoms. Despite the restrictions on node permutations, DEFactor allows the direct optimization of the graph conditioned on properties of interest. This and other similar models also allow the generation of molecules based on given scaffolds Lim2019Scaffold . Auto-regressive methods for molecules have also been reported with the use of RL. Zhou et al. Zhou2018Optimization created a Markov decision process to produce molecules with targeted properties through multi-objective RL. As in the graph methods above, this strategy adds bonds and atoms sequentially; however, as the actions are restricted to chemically valid ones, the model scores 100% validity on the generated compounds. The optimization process forgoes pre-training and allows flexibility in the choice of the importance of each objective. As a follow-up to this work, the same group reported the use of this generation scheme as a decoder in an RL-enhanced VAE for molecules Kearnes2019Decoding .
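The action masking that underlies the 100% validity just mentioned can be sketched as follows: only bond-forming actions that respect a simple valence cap are offered to the (here random) policy. The valence table and policy are illustrative stand-ins of ours, not the actual implementation of the cited work.

```python
import numpy as np

MAX_VALENCE = {'C': 4, 'N': 3, 'O': 2}  # illustrative valence caps
rng = np.random.default_rng(2)

def valid_bond_actions(atoms, used_valence):
    """Only allow bonds to atoms with spare valence; masking invalid
    actions is what guarantees chemically valid outputs by construction."""
    return [i for i, a in enumerate(atoms) if used_valence[i] < MAX_VALENCE[a]]

atoms, used = ['C'], [0]
for _ in range(4):
    new_atom = str(rng.choice(list(MAX_VALENCE)))
    actions = valid_bond_actions(atoms, used)
    if not actions:                               # no legal attachment point left
        break
    partner = int(rng.choice(actions))            # a trained policy would score actions
    atoms.append(new_atom)
    used[partner] += 1
    used.append(1)                                # the new atom uses one bond
print(atoms, used)
```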
In line with the use of sequences of actions to create graphs, several groups have been working on different ways to represent and generate graphs through sequences. One approach is to split a graph into permutation-invariant N-gram path sets Liu2018N , in analogy with NLP, with atoms as words and molecules as sentences. This representation performs competitively with message-passing neural networks in classification and regression tasks. The combination of string and graph methods is also seen in the work of Krenn et al. Krenn2019SELFIES , who developed a sequence representation for general-purpose graphs. Their scheme shows high robustness against mutations in sequences and outperforms other representations (including SMILES strings) in terms of diversity, validity, and reconstruction accuracy when employed in sequence-based VAEs.

The adversarial generation of graphs is still at an early stage, and few GAN models for graphs have been demonstrated Guo2018Deep ; Bojchevski2018NetGAN ; Xiong2019DynGraphGAN . De Cao and Kipf DeCao2018MolGAN demonstrated MolGAN, a GAN trained with RL for generating molecular graphs, but their system is prone to mode collapse. The output structure can be made discrete by differentiable processes such as the Gumbel-softmax Jang2016Categorical ; Kusner2016GANS , but balancing the adversarial training with molecular constraints requires more study. Pölsterl and Wachinger Poelsterl2019Likelihood build on MolGAN by adding an adversarial training scheme that avoids calculating the reconstruction loss and by extending the graph isomorphism network Xu2018How to multigraphs. Further improvements include the approach of Maziarka et al. Maziarka2019Mol , which relies on the latent space of a pretrained JT-VAE to produce and optimize molecules, and the work of Fan and Huang Fan2019Labeled , which aims to generate labeled graphs.

While the combination of DL with graph theory and molecular design seems promising, there is large room for improvement in the field of graph generation. Outputting an arbitrary graph is still an open problem, and scalability to larger graphs remains an issue Gilmer2017Neural . Testing graph isomorphism is a computationally hard problem, and measuring the similarity between two graphs usually resorts to expensive kernels or edit distances Neuhaus2007Bridging , as do related problems of reconstruction, node ordering, and so on Li2018Learning . In some cases, a distance metric can be defined for such data structures Schieber2017Quantification ; Choi2018Comparing , or a set of networks can be trained to recognize similarity patterns within graphs Ktena2017Distance . Furthermore, adding attention to graphs could also help in classification tasks Do2018Attentional or in the extraction of structure-property relationships Ryu2018Deeply , and specifying grammar rules for graph reconstruction may lead to improved results in molecular validity and stereochemistry Kajino2018Molecular .

3 Challenges and outlook for generative models

The use of deep generative models is a powerful approach for teaching computers to observe and understand the real world. Far from being just a big-data crunching tool, DL algorithms can provide insights that augment human creativity Kalchbrenner2016Neural . Completely evaluating a generative model is difficult Theis2015note , since we lack an expression for the statistical distribution being learned. Nevertheless, by approximating real-life data with an appropriate representation, we embed intuition in the machine's understanding. In a sense, this is what we do, as human beings, when formulating theoretical concepts in chemistry, physics, and many other fields of study. Furthering our limited ability to probe the inner workings of deep neural networks will allow us to transform learned embeddings into logical rules.

In the field of chemical design, generative models are still in their infancy (see the timeline summary in Fig. 5). While many achievements have been reported for such models, all of them face common challenges before a "closed loop" approach can be effectively implemented. Some of the challenges are inherent to all generative models: the generalization capability of a model, its power to make inferences about the real world, and its capacity to bring novelty to it. In the chemical space, originality can be translated as the breadth and quality of the possible molecules that the model can generate. To push forward the development of new technologies, we want our generative models to explore further regions of the chemical space in search of new solutions to current problems and to extrapolate beyond the training set, avoiding mode collapse or naïve interpolations. At the same time, we want them to capture rules inherent to the synthetically accessible space. Finally, we want to critically evaluate the performance of such models. Several benchmarks are being developed to assess the evolution of chemical generative models, providing quantitative comparisons beyond the mere prediction of solubility or drug-likeness Preuer2018Frechet ; Polykovskiy2018Molecular ; Wu2018MoleculeNet ; Brown2019GuacaMol .
Figure 5: Summary and timeline of current generative models for molecules. Newer models are located at the bottom of the diagram. Figures reproduced from Guimaraes2017Objective ; Li2018Learning ; DeCao2018MolGAN ; Gomez-Bombarelli2018Automatic ; Kusner2017Grammar ; Jin2018Junction .

Ease of navigation throughout the chemical space alone is not enough to determine a good model, however. Tailoring the generation of valid molecules for certain applications such as drug design Segler2018Generating is also an important task. It reflects how well a generative model focuses on structure-property relationships for certain applications. This interpretation leads to even more powerful understandings of chemistry, and is closely tied to Gaussian processes Gomez-Bombarelli2018Automatic , Bayesian optimization Haese2018PHOENICS , and virtual screening. In the generation process, outputting an arbitrary molecule is still an open problem and is closely conditioned on the representation. While SMILES have been demonstrated useful for representing molecules, graphs are able to convey real chemical features, which is useful for learning properties from structures. However, three-dimensional atomic coordinates should be considered for decoding as well. Recent works go well beyond the connectivity of a molecule to provide equilibrium geometries of molecules using generative models Gebauer2018Generating ; Noe2018Boltzmann ; Gebauer2019Symmetry ; Joergensen2019Atomistic ; Mansimov2019Molecular . This is crucial to bypass expensive sampling of low-energy configurations from the potential energy surface of molecules. We should expect advances not only in decoding and generating graphs from latent codes, but also in invertible molecular representations in terms of sequences, connectivity, and spatial arrangement.

Finally, as the field of generative models advances, we should expect even more exciting models to design molecules. The normalizing-flow based Boltzmann Generator Noe2018Boltzmann and GraphNVP Madhawa2019GraphNVP are examples of models based on more recent strategies. Furthermore, the use of generative models to understand molecules in an unsupervised way advances along with inverse design, from coarse-graining Wang2018Machine ; Wang2018Coarse and synthesizability of small molecules Bradshaw2019Generative ; Bradshaw2019Model to genetic variation in complex biomolecules Riesselman2018Deep . In summary, generative models hold promise to revolutionize chemical design. Not only do they allow optimization and learning directly from data, but they also bypass the need for a human to supervise the generation of materials. Facing the challenges of these models is essential for accelerating the discovery cycle of new materials and, perhaps, for improving our understanding of nature.

D.S.-K. acknowledges the MIT Nicole and Ingo Wender Fellowship and the MIT Robert Rose Presidential Fellowship for financial support. R.G.-B. thanks MIT DMSE and the Toyota Faculty Chair for support.

• (1) D.P. Tabor, L.M. Roch, S.K. Saikin, C. Kreisbeck, D. Sheberla, J.H. Montoya, S. Dwaraknath, M. Aykol, C. Ortiz, H. Tribukait, C. Amador-Bedolla, C.J. Brabec, B. Maruyama, K.A. Persson, A. Aspuru-Guzik, Nat. Rev. Mater. 3(5), 5 (2018) • (2) R.F. Gibson, Compos. Struct. 92(12), 2793 (2010) • (3) H. Chen, O. Engkvist, Y. Wang, M. Olivecrona, T. Blaschke, Drug Discov. Today 23(6), 1241 (2018) • (4) J.A. DiMasi, H.G. Grabowski, R.W. Hansen, J. Health Econ. 47, 20 (2016) • (5) B.K.
Shoichet, Nature 432(7019), 862 (2004) • (6) J. Greeley, T.F. Jaramillo, J. Bonde, I. Chorkendorff, J.K. Nørskov, Nat. Mater. 5(11), 909 (2006) • (7) S.V. Alapati, J.K. Johnson, D.S. Sholl, J. Phys. Chem. B 110(17), 8769 (2006) • (8) W. Setyawan, R.M. Gaume, S. Lam, R.S. Feigelson, S. Curtarolo, ACS Comb. Sci. 13(4), 382 (2011) • (9) S. Subramaniam, M. Mehrotra, D. Gupta, Bioinformation 3(1), 14 (2008) • (10) R. Armiento, B. Kozinsky, M. Fornari, G. Ceder, Phys. Rev. B 84(1) (2011) • (11) A. Jain, G. Hautier, C.J. Moore, S.P. Ong, C.C. Fischer, T. Mueller, K.A. Persson, G. Ceder, Comput. Mater. Sci. 50(8), 2295 (2011) • (12) S. Curtarolo, G.L.W. Hart, M.B. Nardelli, N. Mingo, S. Sanvito, O. Levy, Nat. Mater. 12(3), 191 (2013) • (13) E.O. Pyzer-Knapp, C. Suh, R. Gómez-Bombarelli, J. Aguilera-Iparraguirre, A. Aspuru-Guzik, Annu. Rev. Mater. Res. 45(1), 195 (2015) • (14) R. Gómez-Bombarelli, J. Aguilera-Iparraguirre, T.D. Hirzel, D. Duvenaud, D. Maclaurin, M.A. Blood-Forsythe, H.S. Chae, M. Einzinger, D.G. Ha, T. Wu, G. Markopoulos, S. Jeon, H. Kang, H. Miyazaki, M. Numata, S. Kim, W. Huang, S.I. Hong, M. Baldo, R.P. Adams, A. Aspuru-Guzik, Nat. Mater. 15(10), 1120 (2016) • (15) D. Morgan, G. Ceder, S. Curtarolo, Meas. Sci. Technol. 16(1), 296 (2004) • (16) C. Ortiz, O. Eriksson, M. Klintenberg, Comput. Mater. Sci. 44(4), 1042 (2009) • (17) L. Yu, A. Zunger, Phys. Rev. Lett. 108(6) (2012) • (18) K. Yang, W. Setyawan, S. Wang, M.B. Nardelli, S. Curtarolo, Nat. Mater. 11(7), 614 (2012) • (19) L.C. Lin, A.H. Berger, R.L. Martin, J. Kim, J.A. Swisher, K. Jariwala, C.H. Rycroft, A.S. Bhown, M.W. Deem, M. Haranczyk, B. Smit, Nat. Mater. 11(7), 633 (2012) • (20) N. Mounet, M. Gibertini, P. Schwaller, D. Campi, A. Merkys, A. Marrazzo, T. Sohier, I.E. Castelli, A. Cepellotti, G. Pizzi, et al., Nat. Nanotechnol. 13(3), 246 (2018) • (21) R. Potyrailo, K. Rajan, K. Stoewe, I. Takeuchi, B. Chisholm, H. Lam, ACS Comb. Sci. 13(6), 579 (2011) • (22) A. Jain, Y. Shin, K.A. Persson, Nat. Rev. Mater. 1(1) (2016) • (23) National Science and Technology Council (US), Materials genome initiative for global competitiveness (Executive Office of the President, National Science and Technology Council, 2011) • (24) S. Curtarolo, W. Setyawan, S. Wang, J. Xue, K. Yang, R.H. Taylor, L.J. Nelson, G.L. Hart, S. Sanvito, M. Buongiorno-Nardelli, N. Mingo, O. Levy, Comput. Mater. Sci. 58, 227 (2012) • (25) C.E. Calderon, J.J. Plata, C. Toher, C. Oses, O. Levy, M. Fornari, A. Natan, M.J. Mehl, G. Hart, M.B. Nardelli, S. Curtarolo, Comput. Mater. Sci. 108, 233 (2015) • (26) A. Jain, S.P. Ong, G. Hautier, W. Chen, W.D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, K.A. Persson, APL Materials 1(1), 011002 (2013) • (27) J.E. Saal, S. Kirklin, M. Aykol, B. Meredig, C. Wolverton, JOM 65(11), 1501 (2013) • (28) B. Sanchez-Lengeling, A. Aspuru-Guzik, Science 361(6400), 360 (2018) • (29) A. Zunger, Nat. Rev. Chem. 2(4), 0121 (2018) • (30) P.G. Polishchuk, T.I. Madzhidov, A. Varnek, J. Comput.-Aided Mol. Des. 27(8), 675 (2013) • (31) A.M. Virshup, J. Contreras-García, P. Wipf, W. Yang, D.N. Beratan, J. Am. Chem. Soc. 135(19), 7296 (2013) • (32) K.G. Joback, Designing molecules possessing desired physical property values. Ph.D. thesis, Massachusetts Institute of Technology (1989) • (33) C. Kuhn, D.N. Beratan, J. Phys. Chem. 100(25), 10595 (1996) • (34) D.J. Wales, H.A. Scheraga, Science 285(5432), 1368 (1999) • (35) J. Schön, M.
Jansen, Z. Kristallogr. Cryst. Mater. 216(6) (2001) • (36) R. Gani, E. Brignole, Fluid Phase Equilib. 13, 331 (1983) • (37) S.R. Marder, D.N. Beratan, L.T. Cheng, Science 252(5002), 103 (1991) • (38) P.M. Holmblad, J.H. Larsen, I. Chorkendorff, L.P. Nielsen, F. Besenbacher, I. Stensgaard, E. Lægsgaard, P. Kratzer, B. Hammer, J.K. Nøskov, Catal. Lett. 40(3-4), 131 (1996) • (39) O. Sigmund, S. Torquato, J. Mech. Phys. Solids 45(6), 1037 (1997) • (40) C. Wolverton, A. Zunger, B. Schönfeld, Solid State Commun. 101(7), 519 (1997) • (41) N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, J. Chem. Phys. 21(6), 1087 (1953) • (42) R. Kaplow, T.A. Rowe, B.L. Averbach, Phys. Rev. 168(3), 1068 (1968) • (43) V. Gerold, J. Kern, Acta Metall. 35(2), 393 (1987) • (44) R.L. McGreevy, L. Pusztai, Mol. Simul. 1(6), 359 (1988) • (45) A. Franceschetti, A. Zunger, Nature 402(6757), 60 (1999) • (46) J.H. Holland, Adaptation in Natural and Artificial Systems (MIT Press Ltd, 1992) • (47) R. Judson, E. Jaeger, A. Treasurywala, M. Peterson, J. Comput. Chem. 14(11), 1407 (1993) • (48) R.C. Glen, A.W.R. Payne, J. Comput.-Aided Mol. Des. 9(2), 181 (1995) • (49) V. Venkatasubramanian, K. Chan, J. Caruthers, Computers & Chemical Engineering 18(9), 833 (1994) • (50) V. Venkatasubramanian, K. Chan, J.M. Caruthers, J. Chem. Inf. Model. 35(2), 188 (1995) • (51) A.L. Parrill, Drug Discov. Today 1(12), 514 (1996) • (52) G. Schneider, M.L. Lee, M. Stahl, P. Schneider, J. Comput.-Aided Mol. Des. 14(5), 487 (2000) • (53) D.B. Gordon, S.L. Mayo, Structure 7(9), 1089 (1999) • (54) M.T. Reetz, Proceedings of the National Academy of Sciences 101(16), 5716 (2004) • (55) D. Wolf, O. Buyevskaya, M. Baerns, Appl. Catal., A 200(1-2), 63 (2000) • (56) G.H. Jóhannesson, T. Bligaard, A.V. Ruban, H.L. Skriver, K.W. Jacobsen, J.K. Nørskov, Phys. Rev. Lett. 88(25) (2002) • (57) S.V. Dudiy, A. Zunger, Phys. Rev. Lett. 97(4) (2006) • (58) P. Piquini, P.A. Graf, A. Zunger, Phys. Rev. Lett. 100(18) (2008) • (59) M. d’Avezac, J.W. Luo, T. Chanier, A. Zunger, Phys. Rev. Lett. 108(2) (2012) • (60) L. Zhang, J.W. Luo, A. Saraiva, B. Koiller, A. Zunger, Nat. Commun. 4(1) (2013) • (61) L. Yu, R.S. Kokenyesi, D.A. Keszler, A. Zunger, Adv. Energy Mater. 3(1), 43 (2012) • (62) T. Brodmeier, E. Pretsch, J. Comput. Chem. 15(6), 588 (1994) • (63) S.M. Woodley, P.D. Battle, J.D. Gale, C.R.A. Catlow, Phys. Chem. Chem. Phys. 1(10), 2535 (1999) • (64) C.W. Glass, A.R. Oganov, N. Hansen, Comput. Phys. Commun. 175(11-12), 713 (2006) • (65) A.R. Oganov, C.W. Glass, J. Chem. Phys. 124(24), 244704 (2006) • (66) N.S. Froemming, G. Henkelman, J. Chem. Phys. 131(23), 234103 (2009) • (67) L.B. Vilhelmsen, B. Hammer, J. Chem. Phys. 141(4), 044711 (2014) • (68) G.L.W. Hart, V. Blum, M.J. Walorski, A. Zunger, Nat. Mater. 4(5), 391 (2005) • (69) V. Blum, G.L.W. Hart, M.J. Walorski, A. Zunger, Phys. Rev. B 72(16) (2005) • (70) C. Rupakheti, A. Virshup, W. Yang, D.N. Beratan, J. Chem. Inf. Model. 55(3), 529 (2015) • (71) J.L. Reymond, Acc. Chem. Res. 48(3), 722 (2015) • (72) T.C. Le, D.A. Winkler, Chem. Rev. 116(10), 6107 (2016) • (73) P.C. Jennings, S. Lysgaard, J.S. Hummelshøj, T. Vegge, T. Bligaard, npj Comput. Mater. 5(1) (2019) • (74) O.A. von Lilienfeld, R.D. Lins, U. Rothlisberger, Phys. Rev. Lett. 95(15) (2005) • (75) V. Marcon, O.A. von Lilienfeld, D. Andrienko, J. Chem. Phys. 127(6), 064305 (2007) • (76) M. Wang, X. Hu, D.N. Beratan, W. Yang, J. Am. Chem. Soc. 128(10), 3228 (2006) • (77) P. Hohenberg, W. Kohn, Phys. Rev. 
136(3B), B864 (1964) • (78) D. Xiao, W. Yang, D.N. Beratan, J. Chem. Phys. 129(4), 044106 (2008) • (79) D. Balamurugan, W. Yang, D.N. Beratan, J. Chem. Phys. 129(17), 174105 (2008) • (80) S. Keinan, X. Hu, D.N. Beratan, W. Yang, J. Phys. Chem. A 111(1), 176 (2007) • (81) X. Hu, D.N. Beratan, W. Yang, J. Chem. Phys. 129(6), 064102 (2008) • (82) F.D. Vleeschouwer, W. Yang, D.N. Beratan, P. Geerlings, F.D. Proft, Phys. Chem. Chem. Phys. 14(46), 16002 (2012) • (83) G.E. Hinton, T.J. Sejnowski, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, ed. by D.E. Rumelhart, J.L. McClelland, and the PDP Research Group (MIT Press, Cambridge, MA, USA, 1986), pp. 282–317 • (84) G.E. Hinton, T.J. Sejnowski, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (1983) • (85) P. Smolensky, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, ed. by D.E. Rumelhart, J.L. McClelland, and the PDP Research Group (MIT Press, Cambridge, MA, USA, 1986), chap. Information Processing in Dynamical Systems: Foundations of Harmony Theory, pp. 194–281 • (86) G.E. Hinton, S. Osindero, Y.W. Teh, Neural Comput. 18(7), 1527 (2006) • (87) R. Salakhutdinov, G. Hinton, 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2), 2735 (2009) • (88) I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016) • (89) T. Karras, T. Aila, S. Laine, J. Lehtinen, arXiv:1710.10196 (2017) • (90) I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, arXiv:1406.2661 (2014) • (91) L.A. Gatys, A.S. Ecker, M. Bethge, arXiv:1508.06576 (2015) • (92) C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, W. Shi, arXiv:1609.04802 (2016) • (93) S.R. Bowman, L. Vilnis, O. Vinyals, A.M. Dai, R. Jozefowicz, S. Bengio, arXiv:1511.06349 (2015) • (94) K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, Y. Bengio, arXiv:1502.03044 (2015) • (95) S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, S. Jain, J. Sotelo, A. Courville, Y. Bengio, arXiv:1612.07837 (2016) • (96) A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, arXiv:1609.03499 (2016) • (97) C. Vondrick, H. Pirsiavash, A. Torralba, arXiv:1609.02612 (2016) • (98) A. Radford, L. Metz, S. Chintala, arXiv:1511.06434 (2015) • (99) J. Engel, M. Hoffman, A. Roberts, arXiv:1711.05772 (2017) • (100) Y. LeCun, Y. Bengio, G. Hinton, Nature 521(7553), 436 (2015) • (101) D.P. Kingma, M. Welling, arXiv:1312.6114 (2013) • (102) M. Arjovsky, S. Chintala, L. Bottou, arXiv:1701.07875 (2017) • (103) I. Tolstikhin, O. Bousquet, S. Gelly, B. Schölkopf, arXiv:1711.01558 (2017) • (104) P.K. Rubenstein, B. Schölkopf, I. Tolstikhin, arXiv:1802.03761 (2018) • (105) M. Mirza, S. Osindero, arXiv:1411.1784 (2014) • (106) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel, arXiv:1606.03657 (2016) • (107) T. Che, Y. Li, A.P. Jacob, Y. Bengio, W. Li, arXiv:1612.02136 (2016) • (108) A. Odena, C. Olah, J. Shlens, arXiv:1610.09585 (2016) • (109) X. Mao, Q. Li, H. Xie, R.Y.K. Lau, Z. Wang, S.P. Smolley, arXiv:1611.04076 (2016) • (110) R.D. Hjelm, A.P. Jacob, T. Che, A. Trischler, K. Cho, Y. Bengio, arXiv:1702.08431 (2017) • (111) J. Zhao, M. Mathieu, Y. LeCun, arXiv:1609.03126 (2016) • (112) S. Nowozin, B. Cseke, R. Tomioka, arXiv:1606.00709 (2016) • (113) J. Donahue, P. Krähenbühl, T.
Darrell, arXiv:1605.09782 (2016) • (114) D. Berthelot, T. Schumm, L. Metz, arXiv:1703.10717 (2017) • (115) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. Courville, arXiv:1704.00028 (2017) • (116) Z. Yi, H. Zhang, P. Tan, M. Gong, arXiv:1704.02510 (2017) • (117) M. Lucic, K. Kurach, M. Michalski, S. Gelly, O. Bousquet, arXiv:1711.10337 (2017) • (118) A. van den Oord, N. Kalchbrenner, K. Kavukcuoglu, arXiv:1601.06759 (2016) • (119) A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, K. Kavukcuoglu, arXiv:1606.05328 (2016) • (120) T. Salimans, A. Karpathy, X. Chen, D.P. Kingma, arXiv:1701.05517 (2017) • (121) N. Kalchbrenner, A. van den Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, K. Kavukcuoglu, arXiv:1610.00527 (2016) • (122) N. Kalchbrenner, L. Espeholt, K. Simonyan, A. van den Oord, A. Graves, K. Kavukcuoglu, arXiv:1610.10099 (2016) • (123) R. Gómez-Bombarelli, J.N. Wei, D. Duvenaud, J.M. Hernández-Lobato, B. Sánchez-Lengeling, D. Sheberla, J. Aguilera-Iparraguirre, T.D. Hirzel, R.P. Adams, A. Aspuru-Guzik, ACS Cent. Sci. 4(2), 268 (2018) • (124) D.R. Hartree, Math. Proc. Cambridge Philos. Soc. 24(01), 89 (1928) • (125) V. Fock, Z. Phys. A At. Nucl. 61(1-2), 126 (1930) • (126) W. Kohn, L.J. Sham, Phys. Rev. 140(4A), A1133 (1965) • (127) R. Todeschini, V. Consonni, Handbook of Molecular Descriptors. Methods and Principles in Medicinal Chemistry (Wiley-VCH Verlag GmbH, Weinheim, Germany, 2000) • (128) D. Rogers, M. Hahn, J. Chem. Inf. Model. 50(5), 742 (2010) • (129) K. Hansen, F. Biegler, R. Ramakrishnan, W. Pronobis, O.A. von Lilienfeld, K.R. Müller, A. Tkatchenko, The Journal of Physical Chemistry Letters 6(12), 2326 (2015) • (130) M. Rupp, A. Tkatchenko, K.R. Müller, O.A. von Lilienfeld, Phys. Rev. Lett. 108(5), 058301 (2012) • (131) K.T. Schütt, F. Arbabzadah, S. Chmiela, K.R. Müller, A. Tkatchenko, Nat. Commun. 8, 13890 (2017) • (132) H. Huo, M. Rupp, arXiv:1704.06439 (2017) • (133) D. Weininger, J. Chem. Inf. Model. 28(1), 31 (1988) • (134) S. Kearnes, K. McCloskey, M. Berndl, V. Pande, P. Riley, J. Comput.-Aided Mol. Des. 30(8), 595 (2016) • (135) D.K. Duvenaud, D. Maclaurin, J. Aguilera-Iparraguirre, R. Gómez-Bombarelli, T. Hirzel, A. Aspuru-Guzik, R.P. Adams, in Advances in Neural Information Processing Systems (2015), pp. 2215–2223 • (136) J. Gilmer, S.S. Schoenholz, P.F. Riley, O. Vinyals, G.E. Dahl, arXiv:1704.01212 (2017) • (137) S. Hochreiter, J. Schmidhuber, Neural Comput. 9(8), 1735 (1997) • (138) J. Chung, C. Gulcehre, K. Cho, Y. Bengio, arXiv:1412.3555 (2014) • (139) M. Popova, O. Isayev, A. Tropsha, Sci. Adv. 4(7), eaap7885 (2018) • (140) H. Ikebata, K. Hongo, T. Isomura, R. Maezono, R. Yoshida, J. Comput.-Aided Mol. Des. 31(4), 379 (2017) • (141) P. Ertl, R. Lewis, E. Martin, V. Polyakov, arXiv:1712.07449 (2017) • (142) M.H.S. Segler, T. Kogej, C. Tyrchan, M.P. Waller, ACS Cent. Sci. 4(1), 120 (2018) • (143) A. Gupta, A.T. Müller, B.J.H. Huisman, J.A. Fuchs, P. Schneider, G. Schneider, Mol. Inf. 37(1-2), 1700111 (2017) • (144) T. Ching, D.S. Himmelstein, B.K. Beaulieu-Jones, A.A. Kalinin, B.T. Do, G.P. Way, E. Ferrero, P.M. Agapow, M. Zietz, M.M. Hoffman, W. Xie, G.L. Rosen, B.J. Lengerich, J. Israeli, J. Lanchantin, S. Woloszynek, A.E. Carpenter, A. Shrikumar, J. Xu, E.M. Cofer, C.A. Lavender, S.C. Turaga, A.M. Alexandari, Z. Lu, D.J. Harris, D. DeCaprio, Y. Qi, A. Kundaje, Y. Peng, L.K. Wiley, M.H.S. Segler, S.M. Boca, S.J. Swamidass, A. Huang, A. Gitter, C.S. Greene, J. R. Soc. Interface 15(141), 20170387 (2018) • (145) N. 
Jaques, S. Gu, D. Bahdanau, J.M. Hernández-Lobato, R.E. Turner, D. Eck, in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 70, ed. by D. Precup, Y.W. Teh (PMLR, International Convention Centre, Sydney, Australia, 2017), pp. 1645–1654 • (146) M. Olivecrona, T. Blaschke, O. Engkvist, H. Chen, J. Cheminf. 9(1), 48 (2017) • (147) S. Kang, K. Cho, J. Chem. Inf. Model. 59(1), 43 (2018) • (148) B. Sattarov, I.I. Baskin, D. Horvath, G. Marcou, E.J. Bjerrum, A. Varnek, J. Chem. Inf. Model. 59(3), 1182 (2019) • (149) S. Sinai, E. Kelsic, G.M. Church, M.A. Nowak, arXiv:1712.03346 (2017) • (150) S. Kwon, S. Yoon, in Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics - ACM-BCB ’17 (ACM Press, New York, New York, USA, 2017), pp. 203–212 • (151) K. Kim, S. Kang, J. Yoo, Y. Kwon, Y. Nam, D. Lee, I. Kim, Y.S. Choi, Y. Jung, S. Kim, W.J. Son, J. Son, H.S. Lee, S. Kim, J. Shin, S. Hwang, npj Comput. Mater. 4(1) (2018) • (152) V. Mallet, C.G. Oliver, N. Moitessier, J. Waldispuhl, arXiv:1905.12033 (2019) • (153) J. Lim, S. Ryu, J.W. Kim, W.Y. Kim, J. Cheminf. 10(1) (2018) • (154) G.L. Guimaraes, B. Sanchez-Lengeling, C. Outeiral, P.L.C. Farias, A. Aspuru-Guzik, arXiv:1705.10843 (2017) • (155) B. Sanchez-Lengeling, C. Outeiral, G.L. Guimaraes, A. Aspuru-Guzik, chemRxiv:5309668 (2017) • (156) E. Putin, A. Asadulaev, Y. Ivanenkov, V. Aladinskiy, B. Sanchez-Lengeling, A. Aspuru-Guzik, A. Zhavoronkov, J. Chem. Inf. Model. 58(6), 1194 (2018) • (157) O. Mendez-Lucio, B. Baillif, D.A. Clevert, D. Rouquié, J. Wichard, chemRxiv:7294388 (2018) • (158) N. Killoran, L.J. Lee, A. Delong, D. Duvenaud, B.J. Frey, arXiv:1712.06148 (2017) • (159) A. Kadurin, S. Nikolenko, K. Khrabrov, A. Aliper, A. Zhavoronkov, Mol. Pharmaceutics 14(9), 3098 (2017) • (160) A. Kadurin, A. Aliper, A. Kazennov, P. Mamoshina, Q. Vanhaelen, K. Khrabrov, A. Zhavoronkov, Oncotarget 8(7), 10883 (2017) • (161) T. Blaschke, M. Olivecrona, O. Engkvist, J. Bajorath, H. Chen, Mol. Inf. 37(1-2), 1700123 (2018) • (162) D. Polykovskiy, A. Zhebrak, D. Vetrov, Y. Ivanenkov, V. Aladinskiy, P. Mamoshina, M. Bozdaganyan, A. Aliper, A. Zhavoronkov, A. Kadurin, Mol. Pharmaceutics 15(10), 4398 (2018) • (163) J. Mueller, D. Gifford, T. Jaakkola, in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 70, ed. by D. Precup, Y.W. Teh (PMLR, International Convention Centre, Sydney, Australia, 2017), pp. 2536–2544 • (164) J.P. Janet, L. Chan, H.J. Kulik, J. Phys. Chem. Lett. 9(5), 1064 (2018) • (165) M.J. Kusner, B. Paige, J.M. Hernández-Lobato, arXiv:1703.01925 (2017) • (166) H. Dai, Y. Tian, B. Dai, S. Skiena, L. Song, arXiv:1802.08786 (2018) • (167) R. Winter, F. Montanari, F. Noé, D.A. Clevert, Chem. Sci. 10(6), 1692 (2019) • (168) W. Jin, R. Barzilay, T. Jaakkola, arXiv:1802.04364 (2018) • (169) E.J. Bjerrum, arXiv:1703.07076 (2017) • (170) Z. Alperstein, A. Cherkasov, J.T. Rolfe, arXiv:1905.13343 (2019) • (171) A. Lusci, G. Pollastri, P. Baldi, J. Chem. Inf. Model. 53(7), 1563 (2013) • (172) J. Bruna, W. Zaremba, A. Szlam, Y. LeCun, arXiv:1312.6203 (2013) • (173) C.W. Coley, R. Barzilay, W.H. Green, T.S. Jaakkola, K.F. Jensen, J. Chem. Inf. Model. 57(8), 1757 (2017) • (174) P. Hop, B.
Allgood, J. Yu, Mol. Pharmaceutics 15(10), 4371 (2018) • (175) K. Yang, K. Swanson, W. Jin, C. Coley, P. Eiden, H. Gao, A. Guzman-Perez, T. Hopper, B. Kelley, M. Mathea, A. Palmer, V. Settels, T. Jaakkola, K. Jensen, R. Barzilay, arXiv:1904.01561 (2019) • (176) J. You, R. Ying, X. Ren, W.L. Hamilton, J. Leskovec, arXiv:1802.08773 (2018) • (177) Y. Li, L. Zhang, Z. Liu, arXiv:1801.07299 (2018) • (178) T.N. Kipf, M. Welling, arXiv:1611.07308 (2016) • (179) M. Simonovsky, N. Komodakis, arXiv:1802.03480 (2018) • (180) A. Grover, A. Zweig, S. Ermon, arXiv:1803.10459 (2018) • (181) B. Samanta, A. De, N. Ganguly, M. Gomez-Rodriguez, arXiv:1802.05283 (2018) • (182) Q. Liu, M. Allamanis, M. Brockschmidt, A.L. Gaunt, arXiv:1805.09076 (2018) • (183) T. Ma, J. Chen, C. Xiao, arXiv:1809.02630 (2018) • (184) W. Jin, K. Yang, R. Barzilay, T. Jaakkola, in International Conference on Learning Representations (2019) • (185) W. Jin, R. Barzilay, T.S. Jaakkola, chemrXiv:8266745 (2019) • (186) R. Assouel, M. Ahmed, M.H. Segler, A. Saffari, Y. Bengio, arXiv:1811.09766 (2018) • (187) J. Lim, S.Y. Hwang, S. Kim, S. Moon, W.Y. Kim, arXiv:1905.13639 (2019) • (188) Z. Zhou, S. Kearnes, L. Li, R.N. Zare, P. Riley, arXiv:1810.08678 (2018) • (189) S. Kearnes, L. Li, P. Riley, arXiv:1904.08915 (2019) • (190) S. Liu, T. Chandereng, Y. Liang, arXiv:1806.09206 (2018) • (191) M. Krenn, F. Häse, A. Nigam, P. Friederich, A. Aspuru-Guzik, arXiv:1905.13741 (2019) • (192) X. Guo, L. Wu, L. Zhao, arXiv:1805.09980 (2018) • (193) A. Bojchevski, O. Shchur, D. Zügner, S. Günnemann, arXiv:1803.00816 (2018) • (194) Y. Xiong, Y. Zhang, H. Fu, W. Wang, Y. Zhu, P.S. Yu, in Database Systems for Advanced Applications (Springer International Publishing, 2019), pp. 536–552 • (195) N. De Cao, T. Kipf, arXiv:1805.11973 (2018) • (196) E. Jang, S. Gu, B. Poole, arXiv:1611.01144 (2016) • (197) M.J. Kusner, J.M. Hernández-Lobato, arXiv:1611.04051 (2016) • (198) S. Pölsterl, C. Wachinger, arXiv:1905.10310 (2019) • (199) K. Xu, W. Hu, J. Leskovec, S. Jegelka, arXiv:1810.00826 (2018) • (200) Łukasz Maziarka, A. Pocha, J. Kaczmarczyk, K. Rataj, M. Warchoł, arXiv:1902.02119 (2019) • (201) S. Fan, B. Huang, arXiv:1906.03220 (2019) • (202) M. Neuhaus, H. Bunke, Bridging the Gap Between Graph Edit Distance and Kernel Machines (World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2007) • (203) Y. Li, O. Vinyals, C. Dyer, R. Pascanu, P. Battaglia, arXiv:1803.03324 (2018) • (204) T.A. Schieber, L. Carpi, A. Díaz-Guilera, P.M. Pardalos, C. Masoller, M.G. Ravetti, Nat. Commun. 8, 13928 (2017) • (205) H. Choi, H. Lee, Y. Shen, Y. Shi, arXiv:1807.00252 (2018) • (206) S.I. Ktena, S. Parisot, E. Ferrante, M. Rajchl, M. Lee, B. Glocker, D. Rueckert, arXiv:1703.02161 (2017) • (207) K. Do, T. Tran, T. Nguyen, S. Venkatesh, arXiv:1804.00293 (2018) • (208) S. Ryu, J. Lim, W.Y. Kim, arXiv:1805.10988 (2018) • (209) H. Kajino, arXiv:1809.02745 (2018) • (210) L. Theis, A. van den Oord, M. Bethge, arXiv:1511.01844 (2015) • (211) K. Preuer, P. Renz, T. Unterthiner, S. Hochreiter, G. Klambauer, J. Chem. Inf. Model. 58(9), 1736 (2018) • (212) D. Polykovskiy, A. Zhebrak, B. Sanchez-Lengeling, S. Golovanov, O. Tatanov, S. Belyaev, R. Kurbanov, A. Artamonov, V. Aladinskiy, M. Veselov, A. Kadurin, S. Nikolenko, A. Aspuru-Guzik, A. Zhavoronkov, arXiv:1811.12823 (2018) • (213) Z. Wu, B. Ramsundar, E.N. Feinberg, J. Gomes, C. Geniesse, A.S. Pappu, K. Leswing, V. Pande, Chem. Sci. 9(2), 513 (2018) • (214) N. Brown, M. Fiscato, M.H. Segler, A.C. Vaucher, J. Chem. Inf. Model. 
59(3), 1096 (2019) • (215) F. Häse, L.M. Roch, C. Kreisbeck, A. Aspuru-Guzik, arXiv:1801.01469 (2018) • (216) N.W.A. Gebauer, M. Gastegger, K.T. Schütt, arXiv:1810.11347 (2018) • (217) F. Noé, H. Wu, arXiv:1812.01729 (2018) • (218) N.W.A. Gebauer, M. Gastegger, K.T. Schütt, arXiv:1906.00957 (2019) • (219) M.S. Jørgensen, H.L. Mortensen, S.A. Meldgaard, E.L. Kolsbjerg, T.L. Jacobsen, K.H. Sørensen, B. Hammer, arXiv:1902.10501 (2019) • (220) E. Mansimov, O. Mahmood, S. Kang, K. Cho, arXiv:1904.00314 (2019) • (221) K. Madhawa, K. Ishiguro, K. Nakago, M. Abe, arXiv:1905.11600 (2019) • (222) J. Wang, S. Olsson, C. Wehmeyer, A. Perez, N.E. Charron, G. de Fabritiis, F. Noe, C. Clementi, arXiv:1812.01736 (2018) • (223) W. Wang, R. Gómez-Bombarelli, arXiv:1812.02706 (2018) • (224) J. Bradshaw, M.J. Kusner, B. Paige, M.H.S. Segler, J.M. Hernández-Lobato, in International Conference on Learning Representations (2019) • (225) J. Bradshaw, B. Paige, M.J. Kusner, M.H.S. Segler, J.M. Hernández-Lobato, arXiv:1906.05221 (2019) • (226) A.J. Riesselman, J.B. Ingraham, D.S. Marks, Nat. Methods 15(10), 816 (2018)
Computational chemistry

Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. It is necessary because, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation; see references therein for more details), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, computation can in some cases predict hitherto unobserved chemical phenomena. It is widely used in the design of new drugs and materials. Examples of such properties are structure, absolute and relative (interaction) energies, electronic charge density distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity, or other spectroscopic quantities, and cross sections for collision with other particles.

The methods used cover both static and dynamic situations. In all cases, the computer time and other resources (such as memory and disk space) increase rapidly with the size of the system being studied. That system can be a molecule, a group of molecules, or a solid. Computational chemistry methods range from very approximate to highly accurate; the latter are usually feasible for small systems only. Ab initio methods are based entirely on quantum mechanics and basic physical constants. Other methods are called empirical or semi-empirical because they use additional empirical parameters. Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born-Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable.

In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are used, as they are computationally less intensive than electronic structure calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods, such as machine learning based on physicochemical properties. A typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target.
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.

With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with the 1951 paper in Reviews of Modern Physics by Clemens C. J. Roothaan, largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. [1] The first ab initio Hartree-Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. [2] The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. [3] By 1971, when a bibliography of ab initio calculations was published, [4] the largest molecules included were naphthalene and azulene. [5] [6] Abstracts of many earlier developments in ab initio theory have been published by Schaefer. [7]

In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. [8] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. [9]

In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger.
[10] One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." [11] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. [12] The Journal of Computational Chemistry was first published in 1980.

Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory," and John Pople, "for his development of computational methods in quantum chemistry," received the 1998 Nobel Prize in Chemistry. [13] Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". [14]

Fields of application

The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.

Computational chemistry has two different aspects:

• Computational studies, used to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
• Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments.

Thus, computational chemistry can assist the experimental chemist or it can challenge the experimental chemist to find entirely new chemical objects.

Several major areas may be distinguished within computational chemistry:

• The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
• Storing and searching for data on chemical entities (see chemical databases).
• Identifying correlations between chemical structures and properties (see quantitative structure-property relationship (QSPR) and quantitative structure-activity relationship (QSAR)).
• Computational approaches to help in the efficient synthesis of compounds.
• Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

The words exact and perfect do not apply here, as very few aspects of chemistry can be computed exactly. However, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme.

Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems.
Because exact solutions are out of reach for anything larger, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules containing heavy atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules with sufficient accuracy. Errors for energies can be less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5 degrees. The treatment of larger molecules that contain more than a few dozen electrons is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute in the field whether or not these methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

One molecular formula can represent more than one molecular isomer: a set of isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If one particular coordinate change leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization.

The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.

The total energy is determined by approximate solutions of the time-independent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclei positions and the repulsion energy of the nuclei.
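The geometry-optimization recipe just described can be made concrete with a small sketch. The "surface" below is an invented one-dimensional double well standing in for a real potential energy surface; real programs work with many nuclear coordinates, analytic gradients, and the full Hessian matrix rather than the crude steepest descent and scalar second derivative used here.

```python
import numpy as np

# Hypothetical model "potential energy surface": V(x) = (x^2 - 1)^2,
# with minima at x = +/-1 and a transition structure at x = 0.
def V(x):
    return (x**2 - 1.0)**2

def grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Crude steepest-descent geometry optimization from a starting guess.
x = 0.8
for _ in range(500):
    x -= 0.01 * grad(V, x)

curv = second_derivative(V, x)
kind = "local minimum" if curv > 0 else "transition structure"
print(f"stationary point at x = {x:.4f}, curvature {curv:.3f} -> {kind}")

# The stationary point at x = 0 has negative curvature, i.e. one
# imaginary frequency: a transition structure.
print(f"at x = 0: curvature {second_derivative(V, 0.0):.3f}")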
A notable exception to this separation of electronic and nuclear motion is the family of approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are:

Ab initio methods

Main article: Ab initio quantum chemistry methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).

[Figure: diagram illustrating various ab initio electronic structure methods in terms of energy; spacings are not to scale.]

The first type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, in which the correlated electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (termed post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for the electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. To obtain exact agreement with experiment, it is necessary to include relativistic and spin-orbit terms, both of which are far more important for heavy atoms. In all of these approaches, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. Here, the coefficients of the configurations, and of the basis functions, are optimized together. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface. A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy, i.e., within 1 kcal/mol or 4 kJ/mol.
To reach that accuracy in an economical way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods

Main article: Density functional theory

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.

Semi-empirical and empirical methods

Main article: Semi-empirical quantum chemistry methods

Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for large molecules where the full Hartree–Fock method without the approximations is too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

Main article: Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization, i.e., the resulting set of parameters and functions called the force field, is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to only be relevant when describing other molecules of the same class. These methods can be applied to a wide range of biological molecules and allow studies of the approach and interaction of potential drug molecules. [15] [16]

Methods for solids

Main article: Computational chemical methods in solid state physics

Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of the electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate it for the entire list of points in the Brillouin zone.
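Both the LCAO molecular orbital methods described earlier and tight-binding band-structure calculations reduce, at each point, to a generalized matrix eigenvalue problem Hc = ESc. The sketch below solves such a problem for an invented two-function basis; the numbers alpha, beta, and s are merely plausible for something like H2+ in a minimal basis and are not real integrals.

```python
import numpy as np

# Minimal LCAO sketch: two basis functions, one on each atom.
# alpha (on-site energy), beta (coupling), s (overlap) are assumed values.
alpha, beta, s = -0.5, -0.4, 0.6
H = np.array([[alpha, beta],
              [beta, alpha]])
S = np.array([[1.0, s],
              [s, 1.0]])

# Solve H c = E S c by Lowdin orthogonalization: X = S^(-1/2),
# then diagonalize the ordinary symmetric problem X H X.
evals, U = np.linalg.eigh(S)
X = U @ np.diag(evals**-0.5) @ U.T
E, C = np.linalg.eigh(X @ H @ X)
coeffs = X @ C                      # back-transform to the original basis

print("orbital energies:", E)       # bonding orbital lies below alpha
print("bonding MO coefficients:", coeffs[:, 0])
```

The same linear-algebra step, repeated for each k-point, is what makes sampling the whole Brillouin zone so much more expensive than a single molecular calculation.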
Chemical dynamics

Once the electronic and nuclear variables are separated (in the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using scattering theory. The potential of the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. The most popular methods for propagating the wave packet associated with the molecular geometry are:

• the split operator technique,
• the Chebyshev (real) polynomial,
• the multi-configuration time-dependent Hartree method (MCTDH),
• the semiclassical method.

Molecular dynamics

Main article: Molecular dynamics

Molecular dynamics (MD) uses either quantum mechanics, Newton's laws of motion, or a mixed model to examine the time-dependent behavior of systems, including vibrations, Brownian motion, and reactions. MD combined with density functional theory leads to hybrid models.

Quantum Mechanics / Molecular Mechanics (QM/MM)

Main article: QM/MM

QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.

Interpreting molecular wave functions

The atoms in molecules (QTAIM) model of Richard Bader was developed to effectively link the quantum mechanical model of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs, and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package.

Software packages

Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on a single method. Details of most of them can be found in:

• Biomolecular modeling programs: proteins, nucleic acids.
• Molecular mechanics programs.
• Quantum chemistry and solid state physics software supporting several methods.
• Molecular design software.
• Semi-empirical programs.
• Valence bond programs.

See also

• List of computational chemists
• Bioinformatics
• Computational biology
• Computational Chemistry List
• Efficient code generation by computer algebra
• Comparison of force field implementations
• Important publications in computational chemistry
• In silico
• International Academy of Quantum Molecular Science
• Mathematical chemistry
• Molecular graphics
• Molecular modeling
• Molecular modeling on GPUs
• Monte Carlo molecular modeling
• Protein dynamics
• Scientific computing
• Statistical mechanics
• Solvent models

Notes and references

1. Smith, S. J.; Sutcliffe, B. T. (1997). "The development of computational chemistry in the United Kingdom". Reviews in Computational Chemistry. 10: 271–316.
2. Schaefer, Henry F. III (1972). The Electronic Structure of Atoms and Molecules. Reading, Massachusetts: Addison-Wesley Publishing Co. p. 146.
3. Boys, S. F.; Cook, G. B.; Reeves, C. M.; Shavitt, I. (1956). "Automatic fundamental calculations of molecular structure". Nature. 178: 1207. doi:10.1038/1781207a0.
4. Richards, W. G.; Walker, T. E. H.; Hinkley, R. K. (1971). A Bibliography of ab initio Molecular Wave Functions. Oxford: Clarendon Press.
5. Preuss, H. (1968). "Das SCF-MO-P(LCGO)-Verfahren und seine Varianten". International Journal of Quantum Chemistry. 2 (5): 651. doi:10.1002/qua.560020506.
6. Buenker, R. J.; Peyerimhoff, S. D. (1969). "Ab initio SCF calculations for azulene and naphthalene". Chemical Physics Letters. 3: 37. doi:10.1016/0009-2614(69)80014-X.
7. Schaefer, Henry F. III (1984). Quantum Chemistry. Oxford: Clarendon Press.
8. Streitwieser, A.; Brauman, J. I.; Coulson, C. A. (1965). Supplementary Tables of Molecular Orbital Calculations. Oxford: Pergamon Press.
9. Pople, John A.; Beveridge, David L. (1970). Approximate Molecular Orbital Theory. New York: McGraw Hill.
10. Allinger, Norman (1977). "Conformational analysis. 130. MM2. A hydrocarbon force field utilizing V1 and V2 torsional terms". Journal of the American Chemical Society. 99 (25): 8127–8134. doi:10.1021/ja00467a001.
11. Fernbach, Sidney; Taub, Abraham Haskell (1970). Computers and Their Role in the Physical Sciences. Routledge. ISBN 0-677-14030-4.
12. "vol 1, preface". Reviews in Computational Chemistry. doi:10.1002/9780470125786.
13. "The Nobel Prize in Chemistry 1998".
14. "The Nobel Prize in Chemistry 2013" (Press release). Royal Swedish Academy of Sciences. October 9, 2013. Retrieved October 9, 2013.
15. Rubenstein, Lester A.; Zauhar, Randy J.; Lanzara, Richard G. (2006). "Molecular dynamics of a biophysical model for β2-adrenergic and G protein-coupled receptor activation". Journal of Molecular Graphics and Modeling. 25 (4): 396. doi:10.1016/j.jmgm.2006.02.008. PMID 16574446.
16. Rubenstein, Lester A.; Lanzara, Richard G. (1998). "Activation of G protein-coupled receptors entails cysteine modulation of agonist binding". Journal of Molecular Structure: THEOCHEM. 430: 57. doi:10.1016/S0166-1280(98)90217-2.
Thomas Royds

Thomas Royds (April 11, 1884 – May 1, 1955) was a solar physicist who worked with Ernest Rutherford on the identification of alpha radiation as the nucleus of the helium atom, and who was Director of the Kodaikanal Solar Observatory.

Born: April 11, 1884, Moorside, Oldham, England
Died: May 1, 1955 (aged 71)
Residence: England, Germany, India, Turkey
Citizenship: English
Alma mater: Manchester University
Fields: Solar physics
Institutions: Manchester University, Kodaikanal Observatory, Indian Meteorological Service, Istanbul University

Early years

Thomas Royds was born April 11, 1884 in Moorside, near Oldham, Lancashire, UK. He was the third son of Edmund Royds and Mary Butterworth. His father was a cotton spinner and his mother had been a cotton weaver. His eldest brother, Robert Royds, who was 6 years older than Thomas, became an engineer and wrote books on temperature measurement and on the design of steam locomotives. In 1897 he entered Oldham Waterloo Secondary School and in 1903 he won the King's Scholarship to Owen's College, Manchester University for three years, studying in the Honours School of Physics under Arthur Schuster. In 1906 he took a First Class B.Sc. Honours degree in Physics, and stayed at Manchester doing research in spectroscopy, especially on the constitution of the electric spark. From 1907 to 1909 he worked with Ernest Rutherford (later Lord Rutherford, the father of nuclear physics) on the spectrum of radon and, more importantly, on the identification of the alpha particle as the nucleus of the helium atom, in what is called "The Beautiful Experiment." Rutherford and Royds published four joint papers. [1] From 1909 to 1911, as an 1851 Exhibition Scholar, he worked under Professor Paschen in Tübingen, Germany on spectroscopic research mainly in the infra-red, and later under Professor Rubens in Berlin on infra-red "Reststrahlen".
At Manchester University in 1911, he took his D.Sc. degree in Physics, awarded for all his research work to date.

Middle years

The same year, he was appointed Assistant Director of Kodaikanal Solar Physics Observatory, South India, where he worked partly in collaboration with the director, J. Evershed. They studied the displacement of the lines in the sun's spectrum, calling attention to the significance and interpretation of negative displacements, i.e., towards the violet. Between 1913 and 1937 he produced 49 research papers published at Kodaikanal Observatory. Others, such as the one proving the presence of oxygen in the sun's chromosphere, appeared in scientific journals, such as Nature. He was appointed Director of Kodaikanal when Evershed retired in 1922. In 1928, exceptional observation conditions enabled him to photograph a higher prominence on the sun's surface than ever seen before. He also photographed the brightest and largest solar hydrogen eruption up to that date. The following year, Dr Royds and Professor Stratton of Gonville and Caius College, Cambridge, led the eclipse expedition to Siam (now Thailand), to photograph a total solar eclipse. Unfortunately, clouds prevented almost all observations. In 1936 Dr Royds acted as the Director General of Observatories in India for one year, while the DG was on leave. This entailed responsibility for the Indian Meteorological Service. Later that year, Royds and Stratton led a solar expedition to Hokkaido, Japan, mainly to study how the wavelengths on different parts of the sun's disc were affected by the scattered light from other parts of the disc. Their work also confirmed Einstein's theory that wavelengths of lines in the sun's spectrum would deviate slightly from the same lines in terrestrial laboratories. This expedition was a complete success.

Later years

Dr Royds came home to England on well-earned leave in 1937 and two years later officially retired. In the year following, the post of Professor of Astronomy and Director of the Observatory at Istanbul University, Turkey, fell vacant upon the death of the German incumbent. Anxious to increase British influence there, the British Council urged Dr Royds to apply. He was accepted. He was now 58, and the voyage out was long and arduous in wartime conditions; he had to sail round the Cape of South Africa to Cairo, and from there by small boat to Istanbul. The first term he lectured in French, but by the second term he was able to lecture in Turkish. When his contract with Istanbul University ended in the Autumn of 1947, he returned to England, where he spent his last years in retirement. He died of a cerebral haemorrhage on 1 May 1955, leaving his widow, two daughters and a son.
References

1. "Coldest Cold". Time Inc. 1929-06-10. Retrieved 2008-07-27.

Obituary, The Times of London, May 4, 1955, page 15d.
Obituary, Indian Journal of Meteorology and Geophysics, Quarterly Volume 6, July 1955, No. 3, page 280.
Indian Institute of Astrophysics Repository (search for Royds).
"The Nature of the α Particle from Radioactive Substances" (with E. Rutherford), Philosophical Magazine, series 6, xvii, 281–286, 1909 (the original paper).
Ernest Marsden, quoted on page 328 of Rutherford: Scientist Supreme by John Campbell, AAS Publications, 1999.
Census of England and Wales, 1871 and 1881.
In this post I delve into the current view of what happens to a wave function as it interacts with its environment and tell the story of how I anticipated the idea of this view around 1971 or 1972, nearly 20 years before a crucial paper was published in 1991. If you have a non-technical background, I hope you can skim through without too much puzzlement. In the next post I will revert to writing which is entirely non-mathematical. Back around 1970, when I first became interested in the "collapse of the wave function", I noticed at some point while thinking about the situation that this collapse entailed more than simply the materialization of, say, a particle in accord with its probability distribution. For the wave function is more than a probability distribution. It contains, in addition, information which allows it to be transformed into a new "representation" in which it gives a probability distribution for a different physical quantity. For example, if we have a wave function from which we can find a probability for a particle's position, we can transform this wave function into a new form from which we can find the distribution for the particle's energy. With the "collapse", however, one loses the information that would allow such transformations. One loses the "phases" of the wave function. To understand what is meant by phases I need to point out that a complex number can be viewed as a little arrow, lying in a plane. The length of the arrow can represent a positive real number, i.e. a probability. The arrow, lying in its plane, can point in any 360-degree direction and the angle at which it points is called its "phase". A wave function consists of many complex numbers, each of which can be looked upon as a little arrow with magnitude and phase. Looking at an entire collection of these little arrows, one can consider their lengths (actually length squared) as a probability distribution for one physical quantity, and the pattern of their phases as additional information about other physical quantities. Collapse occurs when a quantum system interacts with its environment. With the "collapse", one of the probabilities becomes realized; and ALL of the phases simply disappear from the record. The information associated with the phases' pattern goes missing. These days people have realized something I missed back in the 1970's: the information contained in the phases doesn't actually go missing, but leaks into the environment where it can show up, giving us information about the quantum system of interest. People no longer talk much about collapse, concentrating on the disappearance of a system's phase pattern, which may or may not actually be linked to collapse. The modern buzz word for this possible way-station to collapse is "decoherence". The phase pattern is "coherent" and when it goes away, we have "quantum decoherence". Back in 1971, long before the word "decoherence" had ever appeared in this context, I wondered if there might be a way of calculating how the phases go away as a quantum system interacts with its environment, and, through blind luck, came to realize that there was indeed the possibility of such a calculation. In reading various papers about "measurement theory" I came across an essay by Eugene Wigner, a Nobel prize winning theorist, who pointed out that a quantum expression called "the density matrix" might possibly throw some light on the whole "measurement-collapse" situation because with the density matrix phases went away.
Wigner said, however, that this possibility was of no use, because the density matrix belongs not to a single quantum system, but always to an "ensemble". An ensemble is a collection of a number of similar systems, while the "collapse" happens with a single system. So, the essay's conclusion was: forget about the density matrix as being of any help in understanding what was going on. I noted what Wigner had said and thought no more about it until I was browsing in a quantum text by Lev Landau and Evgeny Lifshitz, translated ten or so years earlier from the Russian. There on pages 35 – 38 was a definition and discussion of the density matrix; and the definition was definitely for a single system interacting with its environment. I remembered that Lev Landau had independently defined the density matrix along with von Neumann in 1927. Perhaps Landau's version had simply been forgotten. In any case, being defined for a single system, to me it showed great promise for calculating how wave function phases could disappear. (See Landau and Lifshitz, Quantum Mechanics: Non-Relativistic Theory, First English Edition, 1958.) Lev Landau was still another of the geniuses associated with the development of quantum mechanics. Born in June, 1908, in Baku, Azerbaijan, of Russian parents, he was enough younger than the Pauli–Heisenberg generation that he missed out on the first 1925 – 1926 wave of the quantum revolution. By the time he was 19 or so he had caught up enough to independently define a version of the density matrix. Later he spent time in Europe, visiting the Bohr institute on several occasions between 1929 and 1931. A wonderful book about that time period is Faust in Copenhagen: A Struggle for the Soul of Physics by Geno Segrè. Dr. Segrè is a neutrino physicist who is also a talented writer. Warning! If you're not a physics buff by now, this book might well make you into one. Geno Segrè's uncle was Emilio Segrè, a famous member of Fermi's group in Italy and later one of the atomic bomb developers. Talking about Landau, known by his nickname, Dau, Segrè says, "Dau, who became Russia's greatest theoretical physicist and one of the twentieth century's major scientific figures, was never intimidated by anybody, …". "As the Dutch physicist Casimir remembered, 'Landau's was perhaps the most brilliant and quickest mind I have ever come across.' This is high praise from someone who knew well both Heisenberg and Pauli." With the Landau–Lifshitz definition in hand I tried to see if I could prove that the right sort of environmental interaction could make the phases of the wave function fade away. The density matrix for discrete states is a square matrix with the real probabilities running down the main diagonal from upper left to lower right. The off-diagonal elements are complex and contain the relevant phase information. (The matrix is Hermitian, though that fact is somewhat irrelevant in the context of interest here.) About the time I started working on the matrix there was a talented graduate student, Yashwant Shitoot from India, at Auburn who needed a thesis topic, so I suggested that he work on the problem for his Master's thesis, which he did. Shitoot and I came up with somewhat different approaches to the problem. Yashwant observed that in practice the environment potentials could not be exactly specified and thus the off-diagonal elements of the matrix could be considered to be a probability distribution arising from the many unknown environmental potentials.
Citing the "central limit theorem" he argued that these distributions were normal distributions and would vanish over time. (See Yashwant Anant Shitoot, Theory of Measurement, M.S. Thesis, Auburn University, March, 1973.) The probabilities in Shitoot's approach are classical probabilities arising from our ignorance, not quantum probabilities arising from the "mind of God". In my approach I visualized the wave function in a Stern-Gerlach experiment. The classic Stern-Gerlach experiment passes a beam of silver atoms in vacuum between unsymmetrical poles of a magnet. Such poles generate a non-uniform magnetic field which exerts a force on a silver atom, which has a magnetic moment due to the spin of its outer electron. A silver atom wave function splits into a superposition of two spatially separated parts representing the two spin possibilities, spin-up or spin-down. (This splitting is similar to what occurs with Schrödinger's unhappy cat.) After passing through the magnet poles the silver beam can either impinge on a barrier where it forms two spots of silver or, instead, come to a barrier with a slit positioned where, say, the upper of the silver dots would be. In the latter case some of the silver atoms form a dot below and others pass through the slit. The atoms that pass through the slit all have their spin up when passed through a second pole piece oriented like the first; or confirm the way that spin ½ works if the second pole piece is tilted. My interest, however, was not with the spin of the silver atoms, but instead with a calculation of how the superposition changes as one part of it impinges on the atoms of the barrier. To attack the calculation, I considered a silver atom as the "system" and the atoms of the barrier as the "environment". In quantum mechanics there are not only representations, but "pictures". In the Schrödinger picture, the time dependence is carried by the wave function (state vector), while in the Heisenberg picture the time dependence is carried by the quantum mechanical operators. Furthermore, there is a third picture, called the interaction picture, where the time dependence ends up in the interaction part when a system and its environment interact. Using the interaction picture and a model potential consisting of a series of step functions to simulate the atoms of the barrier, I could easily show that the off-diagonal elements of the density matrix "gradually" went to zero. Of course, I'm being facetious in using the word "gradually" because the time involved here is of the order of 10⁻¹⁴ seconds. However, in one's imagination one can split this time into thousands or millions of increments. Then the change is indeed gradual. Or one can imagine a different physical situation where a quantum particle traveling through an imperfect vacuum encounters the field from a stray atom from time to time. The essential point is that the quantum decoherence is not instantaneous and one can imagine situations where the time interval is experimentally significant. (See below.) There are two problems with my approach. First, I failed to find a proof that used a realistic interaction potential. Nevertheless, what I did was highly suggestive and over the years gave me the satisfaction of feeling that I understood what was happening whenever I encountered quantum puzzles involving collapse. In particular, the model calculation showed how an interaction of one piece of a superposition would affect another piece where there was no interaction.
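A toy model in the spirit of Shitoot's central-limit argument can be put in a few lines of modern Python. This is a latter-day illustration of my own, not the original calculation: a single two-state system picks up a small random relative phase at each encounter with an unspecified environment (the kick strength and ensemble size below are arbitrary choices), and averaging the density matrix over the ensemble makes the off-diagonal elements fade while the diagonal probabilities survive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state system starting in an equal superposition.
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)

def density_matrix_after(n_kicks, n_ensemble=20000, kick=0.3):
    """Ensemble-averaged density matrix after n_kicks random phase kicks."""
    rho = np.zeros((2, 2), dtype=complex)
    for _ in range(n_ensemble):
        phi = rng.normal(0.0, kick, size=n_kicks).sum()  # accumulated phase
        psi = psi0 * np.array([1.0, np.exp(1j * phi)])
        rho += np.outer(psi, psi.conj())
    return rho / n_ensemble

for n in [0, 5, 20, 80]:
    rho = density_matrix_after(n)
    print(f"{n:3d} kicks: diagonal {rho[0, 0].real:.3f}, "
          f"|off-diagonal| {abs(rho[0, 1]):.3f}")
# The diagonal stays at 1/2 while the off-diagonal element (the phase
# coherence) decays toward zero, roughly as exp(-n * kick**2 / 2).
```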
The second problem I had at the time was how to interpret the physical situation when the off-diagonal elements of the density matrix had gone only part way to zero. In particular, what was the physical meaning of the situation when a particle passed by a weak interaction potential into an area free from interaction, so that any decoherence was only partial? I kept thinking about this second difficulty over the years and at some point an answer dawned on me. (See below.) In spite of these difficulties, around 1973 I wrote up a paper and sent it to the Physical Review, where it was summarily rejected because I had pointed out no ramifications of the calculation which could be experimentally tested. I didn't follow up for a number of reasons: I had no answer to the second difficulty mentioned above, I was and am somewhat lazy, and my life was falling apart at the time. I left Auburn in 1974 and my only copy of the paper has disappeared. Currently, quantum decoherence is of interest because it is highly relevant to quantum computing. In a quantum computer a collection of "qubits" which act like spin ½ particles are put into a quantum state where they carry out a calculation provided that they do not "decohere" during the time necessary for the calculation to take place. This means that the qubit collection must be as isolated as possible from any stray potentials. However, it is likely to be impossible to completely isolate the collection. What happens during a partial decoherence? Here is my answer. During an encounter with a stray potential the off-diagonal terms of the density matrix of the system become slightly smaller. One can get a handle on this situation by splitting the density matrix into a linear superposition of two density matrices, one with zero off-diagonal elements and a second retaining the full off-diagonal elements. Let the two coefficients of the superposition be c₁ and c₂. Then c₁*c₁ is the probability that decoherence has occurred and c₂*c₂ is the probability that the calculation is OK. I have applied a probability interpretation to the situation, a satisfying idea where quantum physics is concerned. In many cases a quantum calculation seeks an answer which takes too long to find with a conventional computer, but which is easily tested if found. With a quantum computer subject to decoherence one simply repeats the calculation until the answer shows up. Provided the isolation of the system is good, this should not require many repeats. Whether or not my ideas about partial decoherence are valid, it seems clear that the entire situation about quantum measurement and decoherence will be clarified as quantum computers are developed. To close this post, I want to consider my conscious motivations in talking about quantum decoherence and my engagement with it. One motivation is that this is an interesting story which goes a long way towards answering the puzzles of quantum measurement, decoherence and collapse. I believe that this history makes clear that the long-standing difficulties in this area, which have led to much controversy, are puzzles in the Kuhnian sense and require no radical revolution involving quantum mechanics. A second motivation is personal. Although I certainly deserve no credit whatsoever in the story of how quantum decoherence came into being, I did have an understanding of the situation before the march of science explicated it and it gives me satisfaction to make my involvement public. A final motivation involves my hopes for this blog.
I hope the story of my involvement with physics makes clear that I was a hard-headed, skeptical practitioner of a basic science and that in promoting Western Zen I'm dedicated to a superstition-free insight that provides a unifying sub-structure for all of Western, and indeed, non-Western world thought.

QM 1

Before completing this post, I need to acknowledge that my goal in writing about modern physics was to create a milieu for talking more about Western Zen. However, as I've proceeded, the goal has somewhat changed. I want you, as a reader, to become, if you aren't already, a physics buff, much in the way I became a history buff after finding history incredibly boring and hateful throughout high school and college. The apotheosis of my history disenchantment came at Stanford in a course taught by a highly regarded historian. The course was entitled "The High Middle Ages" and I actually took it as an elective thinking that it was likely to be fascinating. It was only gradually over the years that I realized that history at its best, although based on factual evidence, consists of stories full of meaning, significance and human interest. Turning back to physics, I note that even after more than a hundred years of revolution, physics still suffers a hangover from 300 years of its classical period in which it was characterized by a supposedly passionless objectivity and a mundane view of reality. In fact, modern physics can be imagined as a scientific fantasy, a far-flung poetic construction from which equations can be deduced and the fantasy brought back to earth in experiments and in the devices of our age. When I use the word "fantasy" I do not mean to suggest any lack of rigorous or critical thinking in science. I do want to imply a new expansion of what science is about, a new awareness, hinting at a "reality" deeper than what we have ever imagined in the past. However, to me even more significant than a new reality is the fact that the Quantum Revolution showed that physics can never be considered absolute. The latest and greatest theories are always subject to a revolution which undermines the metaphysics underlying the theory. Who knows what the next revolution will bring? Judging from our understanding of the physics of our age, a new revolution will not change the feeling that we are living in a universe which is an unimaginable miracle. In what follows I've included formulas and mathematics whose significance can easily be talked about without going into the gory details. The hope is that these will be helpful in clarifying the excitement of physics and the metaphysical ideas lying behind them. Of course, the condensed treatment here can be further explicated in the books I mention and in Wikipedia. My last post, about the massive revolution in physics of the early 20th century, ended by describing the situation in early 1925 when it became abundantly clear in the words of Max Jammer (Jammer, p 196) that physics of the atom was "a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logical consistent theory." Metaphysically, physicists clung to classical ideas such as particles whose motion consisted of trajectories governed by differential equations and waves as material substances spread out in space and governed by partial differential equations.
Clearly these ideas were logically inconsistent with experimental results, but the deep classical metaphysics, refined over 300 years, could not be abandoned until there was a consistent theory which allowed something new and different. Werner Heisenberg, born Dec 5, 1901, was 23 years old in the summer of 1925. He had been a brilliant student at Munich studying with Arnold Sommerfeld, had recently moved to Göttingen, a citadel of math and physics, and had made the acquaintance of Bohr in Copenhagen where he became totally enthralled with doing something about the quantum mess. He noted that the electron orbits of the current theory were purely theoretical constructs and could not be directly observed. Experiments could measure the wavelengths and intensity of the light atoms gave off, so following the Zeitgeist of the times as expounded by Mach and Einstein, Heisenberg decided to try to make a direct theory of atomic radiation. One of the ideas of the old quantum theory that Heisenberg used was Bohr's "Correspondence" principle, which notes that as electron orbits become large along with their quantum numbers, quantum results should merge with the classical. Classical physics failed only when things became small enough that Planck's constant h became significant. Bohr had used this idea in obtaining his formula for the hydrogen atom's energy levels. In various "old quantum" results the Correspondence Principle was always used, but in different, creative ways for each situation. Heisenberg managed to incorporate it into his ultimate vector-matrix construction once and for all. Heisenberg's first paper in the Fall of 1925 was jumped on by him and many others and developed into a coherent theory. The new results eliminated many slight discrepancies between theory and experiment, but more importantly, showed great promise during the last half of 1925 of becoming an actual logical theory. In January, 1926, Erwin Schrödinger published his first great paper on wave mechanics. Schrödinger, working from classical mechanics, but following de Broglie's idea of "matter waves", and using the Correspondence Principle, came up with a wave theory of particle motion, a partial differential equation which could be solved for many systems such as the hydrogen atom, and which soon duplicated Heisenberg's new results. Within a couple of months Schrödinger closed down a developing controversy by showing that his and Heisenberg's approaches, though based on seemingly radically opposed ideas, were, in fact, mathematically isomorphic. Meanwhile, starting in early 1926, P. A. M. Dirac introduced an abstract algebraic operator approach that went deeper than either Heisenberg or Schrödinger. A significant aspect of Dirac's genius was his ability to cut through mathematical clutter to a simpler expression of things. I will dare here to be specific about what I'll call THE fundamental quantum result, hoping that the simplicity of Dirac's notation will enable those of you without a background in advanced undergraduate mathematics to get some of the feel and flavor of QM. In ordinary algebra a new level of mathematical abstraction is reached by using letters such as x,y,z or a,b,c to stand for specific numbers, numbers such as 1,2,3 or 3.1416. Numbers, if you think about it, are already somewhat abstract entities. If one has two apples and one orange, one has 3 objects and the "3" doesn't care that you're mixing apples and oranges.
With algebra, if I use x to stand for a number, the "x" doesn't care that I don't know the number it stands for. In Dirac's abstract scheme what he calls c-numbers are simply symbols of the ordinary algebra that one studies in high school. Along with the c-numbers (classic numbers) Dirac introduces q-numbers (quantum numbers), which are algebraic symbols that behave somewhat differently than those of ordinary algebra. Two of the most important q-numbers are p and s, where p stands for the momentum of a moving particle, mv, mass times velocity in classical physics, and s stands for the position of the particle in space. (I've used s instead of the usual q for position to try to avoid confusion with the q of q-number.) Taken as q-numbers, p and s satisfy ps – sp = h/2πi, which I'll call the Fundamental Quantum Result, in which h is Planck's constant and i the square root of -1. Actually, Dirac, observing that in most formulas or equations involving h, it occurs as h/2π, defined what is now called h bar or h slash using the symbol ħ = h/2π for the "reduced" Planck constant. If one reads about QM elsewhere (perhaps in Wikipedia) one will see ħ almost universally used. Rather than the way I've written the FQR above, it will appear as something like pq – qp = ħ/i where I've restored the usual q for position. What this expression is saying is that in the new QM if one multiplies something first by position q and then by momentum p, the result is different from the multiplications done in the opposite order. We say these q-numbers are non-commutative, the order of multiplication matters. Boldface type is used because position and momentum are vectors and the equation actually applies to each of their 3 components. Furthermore, the FQR tells us the exact size of the non-commutativity. In usual human-sized physical units ħ is .00…001054… where there are 33 zeros before the 1054. If we can ignore the size of ħ and set it to zero, p and q then commute, can be considered c-numbers, and we're back to classical physics. Incidentally, Heisenberg, Born and Jordan obtained the FQR using p and q as infinite matrices and it can be derived also using Schrödinger's differential operators. It is interesting to note that by using his new abstract algebra, Dirac not only obtained the FQR but could calculate the energy levels of the hydrogen atom. Only later did physicists obtain that result using Heisenberg's matrices. Sometimes the deep abstract leads to surprisingly concrete results. For most physicists in 1926, the big excitement was Schrödinger's equation. Partial differential equations were a familiar tool, while matrices were at that time known mainly to mathematicians. The "old quantum theory" had made a few forays into one or another area, leaving the fundamentals of atomic physics and chemistry pretty much in the dark. With Schrödinger's equation, light was thrown everywhere. One could calculate how two hydrogen atoms were bound in the hydrogen molecule. Then using that binding as a model one could understand various bindings of different molecules. All of chemistry became open to theoretic treatment. The helium atom with its two electrons couldn't be dealt with at all by the old quantum theory. Using various approximation methods, the new theory could understand in detail the helium atom and other multielectron atoms. Electrons in metals could be modeled with Schrödinger's equation, and soon the discovery of the neutron opened up the study of the atomic nucleus.
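For readers who like to see such things numerically, here is a small sketch of the FQR at work (the grid sizes and the test function are arbitrary choices of mine, and ħ is set to 1). With position discretized on a grid, q becomes a diagonal matrix and p = -i d/dx a central-difference matrix; applying qp – pq to a smooth wave function then multiplies it by very nearly i, that is, by iħ.

```python
import numpy as np

# On a finite grid the FQR can only hold approximately, and only when
# applied to smooth functions: a finite matrix commutator always has zero
# trace, while i*hbar*(identity) does not. hbar = 1 here (assumed units).
N, L = 201, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

q = np.diag(x)
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
p = -1j * D                      # momentum operator, central difference

comm = q @ p - p @ q             # the commutator qp - pq
psi = np.exp(-x**2)              # a smooth test wave function
ratio = (comm @ psi)[N // 2] / psi[N // 2]
print("[q,p] psi / psi at the grid center:", ratio)   # close to 1j
```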
The old quantum theory was helpless in dealing with particle scattering, where there were no closed orbits. Such scattering was easily accommodated by the Schrödinger equation, though the detailed calculations were far from trivial. Over the years quantum theory revealed more and more practical knowledge and most physicists concentrated on experiments and theoretic calculations that led to such knowledge with little concern about what the new theory meant in terms of physical reality. However, back in the first few years after 1925 there was a great deal of concern about what the theory meant and the question of how it should be interpreted. For example, under Schrödinger's theory an electron was represented by a "cloud" of numbers which could travel through space or surround an atom's nucleus. These numbers, called the wave function and typically named ψ, were complex, of the form a + ib, where i is the square root of -1. By multiplying such a number by its conjugate a – ib, one gets a positive (strictly speaking, non-negative) number which can perhaps be physically interpreted. Schrödinger himself tried to interpret this "real" cloud as a negative electric charge density, a blob of negative charge. For a free electron, outside an atom, Schrödinger imagined that the electron wave could form what is called a "wave packet", a combination of different frequencies that would appear as a small moving blob which could be interpreted as a particle. This idea definitely did not fly. There were too many situations where the waves were spread out in space before an electron suddenly made its appearance as a particle. The question of what ψ meant was resolved by Max Born (see Wikipedia), starting with a paper in June, 1926. Born interpreted the non-negative numbers ψ*ψ (ψ* being the complex conjugate of the ψ numbers) as a probability distribution for where the electron might appear under suitable physical circumstances. What these physical circumstances are and the physical process of the appearance are still not completely resolved. Later in this or another blog post I will go into this matter in some detail. In 1926 Born's idea made sense of experiment and resolved the wave-particle duality of the old quantum theory, but at the cost of destroying classical concepts of what a particle or wave really was. Let me try to explain. A simple example of a classical probability distribution is that of tossing a coin and seeing if it lands heads or tails. The probability distribution in this case is the two numbers, ½ and ½, the first being the probability of heads, the second the probability of tails. The two probabilities add up to 1, which represents certainty in probability theory. (Unlike the college students who are trying to decide whether to go drinking, go to the movies or to study, I ignore the possibility that the coin lands on its edge without falling over.) With the wave function product ψ*ψ, calculus gives us a way of adding up all the probabilities, and if they don't add up to 1, we simply define a new ψ by dividing by the sum we obtained. (This is called "normalizing" the wave function.) Besides the complexity of the math, however, there is a profound difference between the coin and the electron. With the coin, classical mechanics tells us in theory, and perhaps in practice, precisely what the position and orientation of the coin is during every instant of its flight; and knowing about the surface the coin lands on allows us to predict the result of the toss in advance.
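Born's rule and the normalization step just described fit in a few lines; the complex values of ψ below are invented purely for illustration.

```python
import numpy as np

# A tiny "cloud" of complex numbers standing in for a wave function
# on three points (values invented for illustration).
psi = np.array([0.5 + 0.5j, 1.0 + 0.0j, 0.0 - 1.5j])

prob = (psi.conj() * psi).real                 # psi* psi: non-negative numbers
psi = psi / np.sqrt(prob.sum())                # "normalizing" the wave function

prob = (psi.conj() * psi).real
print("probabilities:", prob)                  # where the electron may appear
print("sum:", prob.sum())                      # 1, i.e. certainty
```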
The classical analogy for the electron would be to imagine it is like a BB moving around inside the non-zero area of the wave function, ready to show up when conditions are propitious. With QM this analogy is false. There is no trajectory for the electron, there is no concept of it having a position, before it shows up. Actually, it is only fairly recently that the "BB in a tin can" model has been shown definitively to be false. I will discuss this matter later, talking briefly about Bell's theorem and "hidden variable" ideas. However, whether or not an electron's position exists prior to its materialization, it was simply the concept of probability that Einstein and Schrödinger, among others, found unacceptable. As Einstein famously put it, "I can't believe God plays dice with the universe." Max Born, who introduced probability into fundamental physics, was a distinguished physics professor in Göttingen and Heisenberg's mentor after the latter first came to Göttingen from Munich in 1922. Heisenberg got the breakthrough for his theory while escaping from hay fever in the spring of 1925 walking the beaches of the bleak island of Helgoland in the North Sea off Germany. Returning to Göttingen, Heisenberg showed his work to Born, who recognized the calculations as being matrix multiplication and who saw to it that Heisenberg's first paper was immediately published. Born then recruited Pascual Jordan from the math department at Göttingen and the three wrote a famous follow-up paper, Zur Quantenmechanik II, Nov, 1925, which gave a complete treatment of the new theory from a matrix mechanics point of view. Thus, Born was well poised to come up with his idea of the nature of the wave function. Quantum Mechanics came into being during the amazingly short interval between mid-1925 and the end of 1926. As far as the theory went, only "mopping-up" operations were left. As far as the applications were concerned there was a plethora of "low-hanging fruit" that could be gathered over the years with Schrödinger's equation and Born's interpretation. However, as 1927 dawned, Heisenberg and many others were concerned with what the theory meant, with fears that it was so revolutionary that it might render ambiguous the meaning of all the fundamental quantities on which both the new QM and old classical physics depended. In 1925 Heisenberg began his work on what became the matrix mechanics because he was skeptical about the existence of Bohr orbits in atoms, but his skepticism did not include the very concept of "space" itself. As QM developed, however, Heisenberg realized that it depended on classical variables such as position and momentum, which appeared not only in the pq commutation relation but as basic variables of the Schrödinger equation. Had the meaning of "position" itself changed? Heisenberg recalled that earlier, with Einstein's Special Relativity, the meaning of both position and time had indeed changed. (Newton assumed that coordinates in space and the value of time were absolutes, forming an invariable lattice in space and an absolute time which marched at an unvarying pace.) Einstein's theory was called Relativity because space and time were no longer absolutes. Space and time lost their "ideal" nature and became simply what one measured in carefully done experiments.
(Curiously enough, though Einstein showed that results of measuring space and time depended on the relative motion of different observers, these quantities changed in such an odd way that measurements of the speed c of light in vacuum came out precisely the same for all observers. There was a new absolute. A simple exposition of special relativity is N. David Mermin’s Space and Time in Special Relativity.)

The result of Heisenberg’s concern and the thinking about it is called the “Uncertainty Principle”. The statement of the principle is the inequality ΔqΔp ≥ ħ/2. The variables q and p are the same q and p of the Fundamental Quantum Relation and, indeed, it is not difficult to derive the uncertainty principle from the FQR. The symbol delta, Δ, when placed in front of a variable means a difference, that is an interval or range of the variable. Experimentally, a measurement of a variable quantity like position q is never exact. The amount of the uncertainty is Δq. The uncertainty relation above thus says that the uncertainty of a particle’s position times the uncertainty of the same particle’s momentum can never be smaller than ħ/2. In QM what is different from an ordinary error of measurement is that the uncertainty is intrinsic to QM itself. In a way, this result is not all that surprising. We’ve seen that the wave function ψ for a particle is a cloud of numbers. Similarly, a transformed wave function for the same particle’s momentum is a similar cloud of numbers. The Δ’s are simply a measure of the size of these two clouds, and the principle says that as one becomes smaller, the other gets larger in such a way that their product can never fall below ħ/2. (I’ve given the tiny numerical value of ħ above.)

In fact, back in 1958 when I was in Eikenberry’s QM course and we derived the uncertainty relation from the FQR, I wondered what the big deal was. I was aware that the uncertainty principle was considered rather earthshaking but didn’t see why it should be. What I missed is what Heisenberg’s paper really did. The relation I’ve written above is pure theory. Heisenberg considered the question, “What if we try to do experiments that actually measure the position and momentum? How does this theory work? What is the physics? Could experiments actually disprove the theory?” Among other experimental set-ups Heisenberg imagined a microscope that used electromagnetic rays of increasingly short wavelengths. It was well known classically by the mid-nineteenth century that the resolution of a microscope depends on the wavelength of the light it uses. Light is an electromagnetic (em) wave, so one can imagine em radiation of such a short wavelength that a microscope using it could view a particle, regardless of how small, reducing Δq to as small a value as one wished. However, by 1927 it was also well known, because of the Compton effect that I talked about in the last post, that such em radiation, called x-rays or gamma rays, consisted of high energy photons which would collide with the electron, giving it a recoil momentum whose uncertainty, Δp, turns out to be just large enough that the product ΔqΔp remains of order ħ. Heisenberg thus considered known physical processes which failed to overturn the theory. The sort of reasoning Heisenberg used is called a “thought” experiment because he didn’t actually try to construct an apparatus or carry out a “real” experiment. Before dismissing thought experiments as being hopelessly hypothetical, one must realize that any real experiment in physics, or in any science for that matter, begins as a thought experiment.
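For the record, here is the compressed textbook route from the FQR to the principle (the modern Robertson form, not Heisenberg’s 1927 reasoning). For any state, two observables A and B obey ΔA ΔB ≥ ½|⟨[A,B]⟩|; writing the FQR as the commutator [q,p] = qp − pq = iħ then gives

```latex
\Delta q \,\Delta p \;\ge\; \tfrac{1}{2}\,\bigl|\langle [q,p]\rangle\bigr|
                    \;=\; \tfrac{1}{2}\,\bigl|\langle i\hbar\rangle\bigr|
                    \;=\; \frac{\hbar}{2}.
```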
One imagines the experiment and then figures out how to build an apparatus (if appropriate) and collect data. In fact, as a science progresses, many experiments formerly expressed only in thought turn real as the state of the art improves. Although the uncertainty principle is earthshaking enough that it helped confirm the skepticism of two of the main architects of QM, namely, Einstein and Schrödinger, one should note that, in practice, because of the small size of ħ, the garden-variety uncertainties which arise from the “apparatus” measuring position or momentum are much larger than the intrinsic quantum uncertainties. Furthermore, the principle does not apply to c-numbers such as e, the fundamental electron or proton charge; c, the speed of light in vacuum; or h, Planck’s constant. There is an interesting story here about a recent (Fall, 2018) redefinition of physical units which one can read about online. Perhaps I’ll have more to say about this subject in a later post. For now, I’ll just note that starting on May 20, 2019, Planck’s constant will be (or has been) defined as having an exact value of 6.62607015×10⁻³⁴ joule-seconds. There is zero uncertainty in this new definition, which may be used to define and measure the mass of the kilogram to higher accuracy and precision than was possible in the past using the old standard, a platinum-iridium cylinder kept closely guarded near Paris. In fact, there is nothing muddy or imprecise about the value of many quantities whose measurement intimately involves QM.

During the years after 1925 there was at least one more area in QM which was puzzling, to say the least; namely, what has been called “the collapse of the wave function.” Involved in the intense discussions over this phenomenon and how to deal with it was another genius I’ve scarcely mentioned so far, namely Wolfgang Pauli. Pauli, a year older than Heisenberg, was a year ahead of him in Munich studying under Sommerfeld, then moved to Göttingen, leaving just before Heisenberg arrived. Pauli was responsible for the Pauli Exclusion Principle, based on the concept of particle spin which he also explicated. (See Wikipedia.) He was in the thick of things during the 1925 – 1927 time period. Pauli ended up as a professor in Zurich, but spent time in Copenhagen with Bohr and Heisenberg (and many others) formulating what became known as the Copenhagen interpretation of QM. Pauli was a bon vivant and had a witty, sarcastic tongue, accusing Heisenberg at one point of “treason” for an idea that he (Pauli) disliked. In another anecdote Pauli was at a physics meeting during the reading of a muddy paper by another physicist. He stormed to his feet and loudly said, “This paper is outrageous. It is not even wrong!” Whether or not the meeting occurred at a late enough date for Pauli to have read Popper, he obviously understood that being wrong could be productive, while being meaningless could not.

Over the next few years after 1927 Bohr, Heisenberg, and Pauli explicated what came to be called “the Copenhagen interpretation of Quantum Mechanics”. It is well worth reading the superb article in Wikipedia about “The Copenhagen Interpretation.” One point the article makes is that there is no definitive statement of this interpretation. Bohr, Heisenberg, and Pauli each had slightly different ideas about exactly what the interpretation was or how it worked. However, in my opinion, things are clear enough in practice.
The problem QM seems to have has been called the “collapse of the wave function.” It is most clearly seen in a double slit interference experiment with electrons or other quantum particles such as photons or even entire atoms. The experiment consists of a plate with two slits, closely enough spaced that the wave function of an approaching particle covers both slits. The spacing is also close enough, compared with the wavelength of the particle as determined by its energy or momentum, that the waves passing through the slits will visibly interfere on the far side of the plate. This interference is in the form of a pattern consisting of stripes on a screen or photographic plate. These stripes show up, zebra-like, on a screen or as dark and light areas on a developed photographic plate. On a photographic plate there is a black dot where a particle has shown up. The striped pattern consists of all the dots made by the individual particles when a large number of particles have passed through the apparatus. What has happened is that the wave function has “collapsed” from an area encompassing all of the stripes to a tiny area of a single dot.

One might ask at this point, “So what?” After all, for the idea of a probability distribution to have any meaning, the event for which there is a probability distribution has to actually occur. The wave function must “collapse” or the probability interpretation itself is meaningless. The problem is that QM has no theory whatever for the collapse. One can easily try to make a quantum theory of what happens in the collapse because QM can deal with multi-particle systems such as molecules. One obtains a many-particle version of QM simply by adding the coordinates of the new particles to be considered to a multi-particle version of the Schrödinger equation. In particular, one can add to the description of a particle which approaches a photographic plate all the molecules in the first few relevant molecular layers of the plate. When one does this, however, one does not get a collapse. Instead the new multi-particle wave function simply includes the molecules of the plate, which end up as spread out as the original wave function of the approaching particle. In fact, the structure of QM guarantees that as one adds new particles, these new particles themselves continue to make an increasingly spread out multi-particle wave function. This result was shown in great detail by John von Neumann in his 1932 book on the mathematical foundations of QM. However, the idea of von Neumann’s result was already generally realized and accepted during the years of the late 1920’s when our three heroes and many others were grappling with finding a mechanism to explain the experimental collapse.

Bohr’s version of the interpretation is simplicity itself. Bohr posits two separate realms, a realm of classical physics governing large scale phenomena, and a realm of quantum physics. In a double slit experiment the photographic plate is classical; the approaching particle is quantum. When the quantum encounters the classical, the collapse occurs. The Copenhagen interpretation explains the results of a double slit experiment and many others, and is sufficient for the practical development of atomic, molecular, solid state, nuclear and particle physics which has occurred since the late 1920’s. However, there has been an enormous history of objections, refinements, rejections and alternate interpretations of the Copenhagen interpretation, as one might well imagine.
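Before moving on, here is a tiny numerical illustration of where the stripes come from (a toy model with two point slits and made-up numbers, not a simulation of any real experiment): one adds the two complex amplitudes reaching each point on the screen, and only then squares.

```python
import numpy as np

wavelength = 1.0                         # all units are arbitrary
d = 5.0                                  # slit separation
L = 1000.0                               # distance from slits to screen
k = 2 * np.pi / wavelength

x = np.linspace(-200, 200, 1001)         # positions along the screen
r1 = np.sqrt(L**2 + (x - d / 2)**2)      # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)      # path length from slit 2

amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # add amplitudes first...
prob = np.abs(amp)**2                             # ...then square: the stripes

# Each particle lands at ONE random dot drawn from this distribution;
# the zebra pattern only emerges after many particles have landed.
dots = np.random.choice(x, size=10000, p=prob / prob.sum())
```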
My own first reaction could be expressed as the statement, “I thought that ‘magic’ had been banned from science back in the 17th century. Now it seems to have crept back in.” (At present I take a less intemperate view.) However, one can make many obvious objections to the Copenhagen interpretation as I’ve baldly stated it above. Where, exactly, does the quantum realm become the classical realm? Is this division sharp or is there an interval of increasing complexity that slowly changes from quantum to classical? Surely, QM, like the theory of relativity, actually applies to the classical realm. Or does it?

During the 1930’s Schrödinger used the difficulties with the Copenhagen interpretation to make up the now famous thought experiment called “Schrödinger’s Cat.” Back in the early 1970’s when I became interested in the puzzle of “collapse” and first heard the phrase “Schrödinger’s Cat”, it was far from famous; so, curious, I looked it up and read the original short article, puzzling out the German. In his thought experiment Schrödinger uses the theory of alpha decay. An alpha particle confined in a radioactive nucleus is forever trapped according to classical physics. QM allows the escape because the alpha particle’s wave function can actually penetrate the barrier which classically keeps it confined. Schrödinger imagines a cat imprisoned in a cage containing an infernal apparatus (Höllenmaschine) which will kill the cat if triggered by an alpha decay. Applying a multi-particle Schrödinger equation to the alpha’s creeping wave function as it encounters the trigger of the “Maschine”, its internals, and the cat, the multi-particle wave function then contains a “superposition” (i.e., a linear combination) of a dead and a live cat. Schrödinger makes no further comment, leaving it to the reader to realize how ridiculous this all is.

Actually, it is even worse. According to QM theory, when a person looks in the cage, the superposition spreads to the person, leaving two versions, one looking at a dead cat and one looking at a live cat. But a person is connected to an environment which also splits and keeps splitting until the entire universe is involved. What I’ve presented here is an actual alternative to the Copenhagen Interpretation called “the Many-worlds interpretation”. To quote from Wikipedia: “The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual ‘world’ (or ‘universe’).” The many-worlds interpretation arose in 1957 in the Princeton University Ph.D. dissertation of Hugh Everett, working under the direction of the late John Archibald Wheeler, whom I mentioned in the last post. Although I am a tremendous admirer of Wheeler, I am skeptical of the many-worlds interpretation. It seems unnecessarily complicated, especially in light of ideas that I first noticed in 1972 and that have been developed since. There is no experimental evidence for the interpretation. Such evidence might involve interference effects between the two versions of the universe as the splitting occurs. Finally, if I exist in a superposition, how come I’m only conscious of the one side? Bringing in “consciousness”, however, leads to all kinds of muddy nonsense about consciousness effects in wave function splitting or collapse.
I’m all for consciousness studies, and possibly they will be relevant for physics after another revolution in neurology or physics. At present we can understand quantum mechanics without explicitly bringing in consciousness. In the next post I’ll go into what I noticed in 1971-72 and how this idea subsequently became developed in the greater physics community. The next post will necessarily be somewhat more mathematically specific than so far, possibly including a few gory details. I hope that the math won’t obscure the story. In subsequent posts I’ll revert to talking about physics theory without actually doing any math.

Physics, Etc.

In telling a story about physics and some of its significance for a life of awareness I’ll start with an idea of the philosopher Immanuel Kant (1724 – 1804). Kant, in my mind, is associated with impenetrable German which translates into impenetrable English. To find some clarity about Kant’s ideas one turns to Wikipedia, where the opening paragraph of the Kant entry explains his main ideas in an uncharacteristically comprehensible way. One of these ideas is that we are born into this world with our minds prepared to understand space, time, and causality. And with this kind of mental conditioning we can make sense of simple phenomena, and, indeed, pursue science. This insight predates Darwin’s theory of evolution, which offers a plausible explanation for it, by some sixty-odd years, and was thus a remarkable insight on the part of Kant. Another Kant idea that is relevant to our story is his distinction between what he calls phenomena and noumena. Quoting from Wikipedia, “… our experience of things is always of the phenomenal world as conveyed by our senses: we do not have direct access to things in themselves, the so-called noumenal world.” Of course, this is only one aspect of Kant’s thought, but the aspect that seems to me most relevant to what might be meant by physical reality.

Kant was a philosopher’s philosopher, totally dedicated to deepening our understanding of what we may comprehend about the world and morality by purely rational thought. He was born in Königsberg, East Prussia, at the time part of the Kingdom of Prussia, on the Baltic coast east of Denmark and north of Poland-Lithuania; and died there 80 years later. Legend has it that during his entire life he never traveled more than 10 miles from his home. The Wikipedia article refutes this slander: Kant actually traveled on occasion some 90.1 miles from Königsberg. The massive extent of Kant’s philosophy leaves me somewhat appalled, particularly since I understand little of it and because what I perhaps do understand seems dubious at best and meaningless at worst. What Kant may not have realized is the idea that the extent and nature of the noumenal world is relative to the times in which one lives. Kant was born 3 years before Isaac Newton died, so by the date of his birth the stage was well set for the age of classical physics. During his life classical mechanics was developed largely by two great mathematicians, Joseph-Louis Lagrange (1736 – 1813) and Pierre-Simon Laplace (1749 – 1827). Looking back from Kant’s time to the ancient world one sees an incredible growth of the phenomenal world, with the Copernican revolution, a deepening understanding of planetary motion, and Newton’s Laws of mechanics. In the time since Kant lived, the laws of electricity and magnetism, statistical mechanics, quantum mechanics, and most of present-day science were developed. This advance raises a question.
Does the growth of the phenomenal world entail a corresponding decrease in the noumenal world, or are phenomena and noumena entirely independent of one another? Of course, I’d like to have it both ways, and can do so by imagining two senses of noumena. To get an idea of the first sense, I will tell a brief story. In the early 1970’s we were visited at Auburn University by the great physicist John Archibald Wheeler, who led a discussion in our faculty meeting room. I was very impressed by Dr. Wheeler. To me he seemed a “tiger”, totally dedicated to physics, his students, and to an awareness of what lay beyond our comprehension. At one point he pointed to the tiles on the floor and said to us physicists, something like, “Let each one of you write your favorite physics laws on one of these tiles. And after you’ve all done that, ask the tiles with their equations to get up and fly. They will just lie there; but the universe flies.” Wheeler had doubtless used this example on many prior occasions, but it was new to me and seems to get at the meaning of noumena as a realm independent of anything science can ever discover. On the other hand, as the realm of phenomena that we do understand has grown, we can regard noumena simply as a “blank” in our knowledge, a blank which can be filled in as science, so to speak, peels back the layers of an “onion”, revealing the understanding of a larger world and, at the same time, exposing a new layer of ignorance to attack. This second sense of the word in no way diminishes the ultimate mystery of the universe. In fact, it appears to me that the quest for ultimate understanding in the face of the great mystery is what gives physics (and science) a compulsive, even addictive, fascination for its practitioners. Like compulsive gamblers, experimental physicists work far into the night and theorists endlessly torture thought. Certainly, the idea that we could conceivably uncover ever more specifics of the mystery of ultimate being is what drew me to the area. That, as well as the idea that if one wants to understand “everything”, physics is a good place to start.

In my understanding, the story of physics during my lifetime and the 30 years preceding my birth is the story of a massive, earthshaking revolution. Thomas Kuhn’s The Structure of Scientific Revolutions, mentioned in earlier posts, is a story of many shifts in scientific perception which he calls revolutions. In his terms what I’m talking about here is a “super-duper-revolution”, a massive shift in understanding whose import is still not fully realized in our society at large at the present time. Most of the “revolutions” that Kuhn uses as examples affect only scientists in a particular field. For example, the fall of the phlogiston theory and the rise of oxygen in understanding fire and burning was a major revolution for chemistry, but had little effect on the culture of society at large. Similarly, in ancient times the rise of Ptolemaic astronomy mostly concerned philosophers and intellectuals. The larger society was content with the idea that gods or God controlled what went on in the heavens as well as on earth. The Copernican revolution, on the other hand, was earthshaking (super-duper) for the entire society, mainly because it called into question theories of how God ran the universe and because it became the underpinning of an entirely new idea of what was “real”.
Likewise, the scientific revolution of the 16th and 17th centuries was earthshaking to the entire society, which, however, as time wore on into the 18th and 19th centuries, became accustomed to it and assumed that the classical, Newtonian “clockworks” universe was here to stay forever, however uncomfortable it might be to artists and writers who hoped to live in a different, more meaningful world of their own experience, rejecting scientific “reality” as something which mattered little in a spiritual sense. Who could have believed that in the mid-1890’s, after 300 years (1590 – 1890, say) of continued, mostly harmonious development, the entire underpinning of scientific reality was about to be overturned by what might be called the quantum revolution? Yet that is what happened in the next forty years (1895 – 1935), with continuing advances and consolidation up to the present day. (From now on I’ll use the abbreviation QM for Quantum Mechanics, the centerpiece of this revolution.) Of course, as with any great revolution, all has not been smooth. Many of the greatest scientists of our times, most notably Albert Einstein and Erwin Schrödinger, found the tenets of the new physics totally unacceptable and fought them tooth and nail. In fact, there is at least one remaining QM puzzle, epitomized by “Schrödinger’s Cat”, about which I hope to have my say at some point.

It is my hope that readers of this blog will find excitement in the open possibilities that an understanding of the revolutionary physical “reality” we currently live in suggests. In talking about it I certainly don’t want to try to “reinvent the wheel”, since many able and brilliant writers have told portions of the story. What I can do is give references to various books and URLs that are, with few exceptions (which I’ll note), great reading. I’ll have comments to make about many of these and hope that, with their underpinning, I can tell this story and illuminate its relevance for what I’ve called Western Zen.

The first book to delve into is The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught us to Love Uncertainty by Robert P. Crease and Alfred Scharff Goldhaber. Robert Crease is a philosopher specializing in science and Alfred Goldhaber is a physicist. The book, which I’ll abbreviate as TQM, tells the history of Quantum Mechanics from its very beginning in December, 1900, to very near the present day. Copyrighted by W.W. Norton in 2014, it is quite recent, today as I write being early November, 2018. The story this book tells goes beyond an exposition of QM itself to give many examples of the effects that this new reality has had so far in our society. It is very entertaining and well written, though on occasion it does get slightly mathematical, in a well-judged way, to make quantum mechanics clearer. A welcome aspect of the book for me was the many references to another book, The Conceptual Development of Quantum Mechanics by Max Jammer. Jammer’s book (1966) is out of print and is definitely not light reading, with its exhaustive references to the original literature and its full deployment of advanced math. Auburn University had Jammer in its library and I studied it extensively while there. I was glad to see the many footnotes to it in TQM, showing that Jammer is still considered authoritative and that there is no more recent book detailing this history. Recently, I felt that I would like to own a copy of Jammer, so I found one, falling to pieces, on Amazon for fifty-odd dollars.
If you are a hotshot mathematician and fascinated by the history of QM, you will doubtless find Jammer in any university library. The quantum revolution occurred in two great waves. The first wave, called the “old quantum theory”, started with Planck’s December, 1900, paper on black body radiation and ended in 1925 with Heisenberg’s paper on Quantum Mechanics proper. From 1925 through about 1932, QM was developed by about 8 or so geniuses, bringing the subject to a point equivalent to that reached by Newton’s Principia in the development of classical mechanics. Besides the four physicists of the Quantum Moment title, I’ll mention Louis de Broglie, Wolfgang Pauli, P.A.M. Dirac, Max Born, and Erwin Schrödinger. And there were many others.

A point worth mentioning is that The Quantum Moment concentrates on what might be called the quantum weirdness of both the old quantum theory and the new QM. This concentration is appropriate because it is this weirdness that has most affected our cultural awareness, the main subject of the book. However, to the physicists of the period 1895 – 1932, the weirdness, annoying and troubling as it was, was in a way a distraction from the most exciting physics going on at the time; namely, the discovery that atoms really exist and have a substructure which can be understood, an understanding that led to a massive increase in practical applications as well as theoretical knowledge. Without this incredible success in understanding the material world the “weirdness” might well have doomed QM. As we will mention below, most physicists ignore the weirdness and concentrate on the “physics” that leads to practical advances. Two examples of these “advances” are the atomic bomb and the smartphone in your pocket. In the next few paragraphs I will fill in some of this history of atomic physics with its intimate connection to QM.

The discovery of the atom and its properties began in 1897 as J.J. Thomson made a definitive breakthrough in identifying the first sub-atomic particle, the lightweight, negatively charged electron (see Wikipedia). Until 1905, however, many scientists disbelieved in the “reality” of atoms in spite of their usefulness as a conceptual tool in understanding chemistry. In the “miracle year” 1905 Albert Einstein published four papers, each one totally revolutionary in a different field. The paper of interest here is about Brownian motion, a jiggling of small particles as seen through a microscope. As a child I had a very nice full laboratory Bausch and Lomb microscope, given by my parents when I was about 7 years old. In the 9th grade I happened to put a drop of tincture of benzoin in water and looked at it through the microscope, seeing hundreds of dancing particles that just didn’t behave like anything alive. I asked my biology teacher about it and after consulting her husband, a professor at the university, she told me it was Brownian motion, discovered by Robert Brown in 1827. I learned later that the motion is caused because the moving particles are small enough that the impacts of molecules striking them on one side are not balanced by impacts on the other, causing a random motion. I had no idea at the time how crucial for atomic theory this phenomenon was. It turns out that the motion had been characterized by careful observation and that Einstein showed in his paper how molecules striking the small particles could account for the motion.
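The key quantitative result of that paper, in modern notation (my summary, not the blog’s original): the mean squared displacement of a suspended particle grows linearly with time,

```latex
\langle x^{2} \rangle = 2 D t, \qquad
D = \frac{k_{B} T}{6 \pi \eta a},
```

where T is the temperature, η the viscosity of the fluid, and a the radius of the particle. Measuring ⟨x²⟩ under a microscope therefore yields Boltzmann’s constant k_B, and with it Avogadro’s number, which is how Jean Perrin’s experiments a few years later convinced the remaining skeptics that atoms are real.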
Also, by this time studies of radioactivity had shown emitted alpha and beta particles were clearly sub-atomic, beta particles being identical with the newly discovered electrons and the charged alpha particles turning into electrically neutral helium as they slowed and captured stray electrons. Einstein’s other 1905 papers were two on special relativity and one on the photoelectric effect. As strange as special relativity seems, with its contraction of moving measuring sticks, slowing of moving clocks, and simultaneity dependent upon the observer, to say nothing of E = mc², this theory ended up fitting comfortably with classical Newtonian physics. Not so with the photoelectric effect.

In December, 1900, Max Planck started the quantum revolution by finding a physical basis for a formula he had guessed earlier relating the radiated energy of a glowing “black body” to its temperature and the frequencies of its radiation. A “black body” is made of an ideal substance that is totally efficient in radiating electromagnetic waves. Such a body could be simulated experimentally with high accuracy by measuring what came out of a small hole in the side of an enclosed oven. To find the “physics” behind his formula Planck had turned to statistical mechanics, which involves counting numbers of discrete states to find the probability distribution of the states. In order to do the counting Planck had artificially (he thought) broken up the continuous energy of electromagnetic waves into chunks of energy, hν, ν being the frequency of the wave, denoted historically by the Greek letter nu. (Remember: the frequency is associated with light’s color, and thus the color of the glow when a heated body gives off radiation.) Planck’s plan was to let the “artificial” fudge-factor h go to zero in the final formula so that the waves would regain their continuity. Planck found his formula, but when he set h = 0, he got the classical Rayleigh-Jeans formula for the radiation with its “ultra-violet catastrophe”. The latter term refers to the Rayleigh-Jeans formula’s prediction of infinite energy radiated as the frequency goes higher. Another formula, guessed by Wien, gave the correct experimental results at high frequencies but was off at lower frequencies, where the Rayleigh-Jeans formula worked just fine. To his dismay what Planck found was that if he set h equal to a very small finite value, his formula worked perfectly for both low and high frequencies.

This was a triumph but at the same time, a disaster. Neither Planck nor anyone else believed that these hν bundles could “really” be real. Maybe the packets came off in bundles which quickly merged to form the electromagnetic wave. True, Newton had thought light consisted of a stream of tiny particles, but over the years since his time numerous experiments showed that light really was a wave phenomenon, with all kinds of wave interference effects. Also, in the 19th century physicists, notably Fraunhofer, invented the diffraction grating and with it the ability to measure the actual wavelength of the waves. The Quantum Moment (TQM) has a wonderfully complete, detailed story of Planck’s momentous breakthrough in its chapter “Interlude: Max Planck Introduces the Quantum”. TQM is structured with clear general expositions followed by more detailed “Interludes” which can be skipped without interrupting the story.
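For readers who like to see the formula, here is Planck’s law in modern notation, together with the two limits just described (a textbook summary, not Planck’s 1900 notation):

```latex
B_{\nu}(T) \;=\; \frac{2 h \nu^{3}}{c^{2}}\,
                 \frac{1}{e^{h\nu / k_{B} T} - 1}
\;\longrightarrow\;
\begin{cases}
\dfrac{2 \nu^{2} k_{B} T}{c^{2}}
  & h\nu \ll k_{B}T \quad \text{(Rayleigh-Jeans: } h \text{ drops out)} \\[2ex]
\dfrac{2 h \nu^{3}}{c^{2}}\, e^{-h\nu / k_{B} T}
  & h\nu \gg k_{B}T \quad \text{(Wien)}
\end{cases}
```

In the low-frequency limit the exponential can be expanded, e^{hν/kT} − 1 ≈ hν/kT, and h cancels, leaving the Rayleigh-Jeans result that grows without bound as ν increases: the ultraviolet catastrophe. At high frequencies the −1 is negligible and Wien’s form takes over, dying off as observed.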
Einstein’s 1905 photoelectric effect paper assumed that the hν quanta were real and light actually acted like little bullets, slamming into a metal surface, penetrating, colliding with an atomic electron and bouncing it out of the metal where it could be detected. It takes a certain energy to bounce an electron out of its atom and then past the surface of the metal. What was experimentally found (after some tribulations) was that the energy of the emerging electrons depended only on the frequency of the light hitting the surface. If the light frequency was too low, no matter how intense the light, nothing much happened. At higher frequencies, increasing the intensity of the light resulted in more electrons coming out but did not increase their energy. As the light frequency increased, the emitted electrons became more energetic. It was primarily for this paper that Einstein received his Nobel Prize in 1921.

A huge breakthrough in atomic theory was Ernest Rutherford’s discovery of the atomic nucleus in the early years of the 20th century. Rather than a diffuse cloud of electrically positive matter with the negatively charged electrons distributed in it like raisins (the “plum pudding” model of the atom), Rutherford found, by scattering alpha particles off gold foil, that the positive charge of the atom was in a tiny nucleus with the electrons circling at a great distance (the “fly in the cathedral” model). There was a little problem, however. The “plum pudding” model might possibly be stable under Newtonian classical physics, while the “fly in the cathedral” model was utterly unstable. (Note: Rutherford’s experiment, though designed by him, was actually carried out between 1908 and 1913 by Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab.)

Ignoring the impossibility of the Rutherford atom, physics plowed ahead. In 1913 the young Dane Niels Bohr made a huge breakthrough by assuming quantum packets were real and could be applied to understanding the hydrogen atom, the simplest of all atoms with its single electron circling its nucleus. Bohr’s model, with its discrete electron orbits and energy levels, explained the spectral lines of glowing hydrogen which had earlier been discovered and measured with a Fraunhofer diffraction grating. At Rutherford’s lab it was quickly realized that energy levels were a feature of all atoms, and the young genius physicist Henry Moseley, using a self-built X-ray tube to excite different atoms, refined the idea of the atomic number, removing several anomalies in the periodic table of the time and predicting 4 new chemical elements in the process. At this point World War I intervened and Moseley volunteered for the Royal Engineers. One among the innumerable tragedies of the Great War was the death of Moseley on August 10, 1915, aged 27, at Gallipoli, killed by a sniper.

Brief Interlude: It is enlightening to understand the milieu in which the quantum revolution and the Great War occurred. A good read is The Fall of the Dynasties – The Collapse of the Old Order: 1905 – 1922 by Edmond Taylor. Originally published in 1963, the book was reissued in 2015. The book begins with the story of the immediate cause of the war, an assassination in Sarajevo, Bosnia, part of the dual-monarchy Austria-Hungary empire; then fills in the history of the various dynasties, countries and empires involved. One imagines what it would be like to live in those times and becomes appalled by the nationalistic passions of the day.
While the book explicates the seemingly mainstream experience of living in the late 19th and early 20th centuries, and the incredible political changes entailed by the fall of the monarchies and the Great War, the aspects of the times which we think of these days as equally revolutionary are barely mentioned. These were modern art, with its demonstration that aesthetic depth lay in realms beyond pure representation; the modern novel and poetry; the philosophy of Wittgenstein, which I’ve discussed above; and, perhaps most revolutionary of all, the fall of classical physics and the rise of the new “reality” of modern physics which we are talking about in this post. (With his deep command of the relevant historical detail for his story the author does, however, get one thing wrong when he briefly mentions science. He chooses Einstein’s relativity of 1905 but calls it “General Relativity”, putting in an adjective which makes it sound possibly more exciting than plain “relativity”. The correct phrase is “Special Relativity”, which indeed was quite exciting enough. General Relativity didn’t happen until 1915.)

Unlike the Second World War, the First was not a total war, and research in fundamental physics went on. The mathematician turned physicist Arnold Sommerfeld in Munich generalized Bohr’s quantum rules by imagining the discrete electron orbits as elliptical rather than circular and taking their tilt into account, giving rise to new labels (called quantum numbers) for these orbits. The light spectra given off by atoms verified these new numbers, with a few discrepancies which were later removed by QM.

During this time and after the war ended, physicists became concerned about the contradiction between the wave and particle theories of light. This subject is well covered in TQM. (See the chapter “Sharks and Tigers: Schizophrenia”.) It is easy to see the problem. If one has surfed or even just looked at the ocean, one feels or sees that a wave carries energy along a wide front, this energy being released as the wave breaks. This kind of energy distribution is characteristic of all waves, not just ocean waves. On the other hand, a bullet or billiard ball carries its energy and momentum in a compact volume. Waves can interfere with each other, reinforcing or canceling out their amplitudes. So, what is one to make of light, which makes interference patterns when shone through a single or double slit but acts like a particle in the photoelectric effect or, even more clearly, like a billiard ball when a light quantum, called a photon, collides with an electron, an effect discovered by Arthur Compton in 1923?

To muddy the waters still further, in 1923 the French physicist Louis de Broglie reasoned that if light can act like either a particle or a wave depending on circumstances, by analogy, an electron, regarded hitherto as strictly a particle, could perhaps under the right conditions act like a wave. Although there was no direct evidence for electron waves at the time, there was suggestive evidence. For example, with the Bohr model of the hydrogen atom, if one assumed the lowest, “ground state” orbit held a single electron wavelength, one could deduce the entire Bohr theory in a new, simple way. By 1924 it was clear to physicists that the “old” quantum mechanics just wouldn’t do. This theory kept classical mechanics and classical wave theory and restricted their generality by imposing “quantum” rules. With both light and electrons being both wave and particle, physics contained an apparent logical contradiction.
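That remark about the ground-state orbit is worth making explicit (a standard reconstruction in modern notation, not de Broglie’s own). Fit a whole number n of wavelengths around a circular orbit of radius r, and use de Broglie’s relation λ = h/mv:

```latex
n\lambda = 2\pi r, \qquad \lambda = \frac{h}{m v}
\quad\Longrightarrow\quad
m v r \;=\; n\,\frac{h}{2\pi} \;=\; n\hbar,
```

which is precisely Bohr’s 1913 quantization rule for the angular momentum of the orbiting electron.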
Furthermore, though the “old” theory had successes with its concept of energy levels in atoms and molecules, it couldn’t theoretically deal at all with such seemingly simple entities as the hydrogen molecule or the helium atom, which experimentally had well-defined energy levels. The theory was a total mess. It was in 1925 that the beginnings of a completely new, fundamental theory made their appearance, leading shortly to much more weirdness than had already appeared in the “old quantum” theory. In the next post I’ll delve into some of the story of the new QM.

Reality is what we all know about as long as we don’t think. It’s not meant to be thought about but reacted to; as threats, awareness of danger; bred into our bones by countless years of evolution. But now, after those countless years, we have a brain and a different kind of awareness that can wonder about such things. Is such wonder worthless? Who knows. Worthless or not, I’m stuck with it because I enjoy ruminations and trying to understand what we take for granted, finding, as I think harder, nothing but mystery. In this post I will begin to talk about “reality” and try to clarify the idea somewhat, bringing in Zen, which may or may not be relevant.

In thinking about “reality” I will take it as a primitive, attempting no definition. One may try to get at reality by considering “fiction”, perhaps a polar opposite. In this consideration one notes that Aristotelian logic doesn’t apply. There is a middle one can’t exclude, because, in this case, the middle is larger and more important than the ends of the spectrum. One can begin to work into this middle by considering the use of the word “fiction” in Yuval Harari’s Sapiens: A Brief History of Humankind, where “fiction” is applied to societal conventions and laws. Sapiens is a fascinating book, but Harari’s use of the word “fiction” for “convention” rubbed me the wrong way. Although laws and conventions are, strictly speaking, fictions, they have one property popularly attributed to “reality”. A common saying is: “One doesn’t have to believe in reality. It will up and bite you whether you believe in it or not.” The same applies to laws and conventions. If one is about to be executed for “treason”, it doesn’t matter that the law is really a “fiction”, compared perhaps with physical reality. In fact, most “realities”, whether physical or societal, possess a large social component. This area of social agreement comes up when one judges whether another human is sane or crazy. The sine qua non of insanity is its defiance of reality as it is conceived by us “sane ones.” Unfortunately, it is all too easy to forget that conventions are a product of society and take them as absolutes. Teenagers are notorious for wanting to be “in” with their crowd even when the fashions of the crowd are highly dubious. But many so-called grown-ups are equally taken in by the conventions of society. Most of the time it is easy and harmless to go along with the conventions, but one should always realize that they are, in fact, made up and vary from society to society. Presumably that is what Harari was trying to emphasize.

Then there are questions of the depth of realities. In many cultures there is a claim for “levels of reality” beyond everyday physical realities like streets, tile floors, buildings, weather, and the world around us. Hindu mystics consider the “real” world Maya, an illusion. Modern physics grants the reality of the everyday world, but has found a world of possibly deeper reality behind it.
There are atoms, molecules, elementary particles, all governed by the “reality” of quantum mechanics which lies behind what one might be tempted to call the “fiction” of classical mechanics. No physicist “really” considers classical mechanics a fiction, though perhaps many would claim there is a wider and possibly deeper reality behind it. Most physicists would leave such questions to philosophers and would consider serious thought about them a waste of time. Physics first imagined the reality of molecules in the nineteenth century, explaining concepts and measurements of heat-related phenomena. For example, the temperature we measure with a thermometer is related to the mean kinetic energy of molecular motion through Boltzmann’s constant. In the early 20th century there were very reputable scientists skeptical of the existence of atoms and molecules. Most of them were convinced of the atom’s reality by Einstein’s theory of Brownian motion (1905). As the 20th century wore on, the entire basis of chemistry was established in great detail by quantum theories of electron states in atoms and molecules. In the twenties and thirties cosmology came into being. Besides explaining the genesis of atomic elements, cosmology, using astronomical observations and theory, finds a universe consisting of tens of billions of galaxies, each containing on average tens of billions of stars, all of which originated in a “big bang” some 13.8 billion years ago. In a later post I’ll consider the current situation physics finds itself in, with dark matter, dark energy, string theory, and ideas of a multi-verse. If one considers these as realities, one should not hold such a belief too firmly. History teaches us that physics is subject to revolutions which alter the very “facts” of physical reality. Besides the lurking revolutions of the future, one notes that the “realities” of physics and chemistry lie in their theories, which have proved essential for the “reality” of our modern technologies. One might claim, however, that these are theories of reality, rather than a more immediate impingement of reality in our lives. I hope to say more about “physical reality” in the next post.

Leaving the physical world, one asks, “What about myth, an admitted fiction?” If a myth has a deep meaning and lesson for our lives, doesn’t that entail a certain kind of reality of more importance than a trivial sort of physical reality? Consider “myth” vs. “history”. Reality for history depends on “primary sources”, written records. The “written” record might be that of an oral interview when recent history is concerned; but the idea is that there is a concrete record of some kind that relates directly to the happenings that history is reporting. Consider the stories about Pythagoras I wrote about in the last post. These stories were based on “secondary sources”, accounts written hundreds of years after Pythagoras’s death, relying on hearsay or vanished primary sources with no way of telling which was which. They form the basis for the shallow kind of myth that gives “myth” its common pejorative connotation. We dismiss the myths about Pythagoras’s golden thigh, his flying from place to place, his appearing in two places simultaneously, not simply because these claims conflict with our present scientific world view, but because they have no relevance to facts about Pythagoras which matter to us in considering his contributions to the history of mathematics.
The myths about Pythagoras can be considered “trivial” myths which discredit the very idea of myth. But what about deeper myths? Most religions tell stories about their founders and contributors which have a high mythic content. I ask in this context, “Does distinguishing between myth and historical reality in matters of religious history really matter, or matter at all?” Buddhists are notorious for being unfazed when various historical stories are proven fictional by historians. I would baldly state their attitude as: “The religious importance of the story is what matters; not the factual truth of every so-called fact in the canon.” Getting closer to home, I might ask, “Suppose the facts about Jesus’s physical existence were convincingly proved to be completely fictional. Would it matter to Christianity?” I would guess that it WOULD be devastating to believers, but that, in fact, it SHOULDN’T be. What matters in Christianity is the insight that feelings of love are deeply embedded in the universe and that Jesus, whether a fictional person or not, is responsible for bringing this “fact” to life, to showing that in the deep mystery one might call “God”, there is a forgiveness of the animal brutishness of humans. If through an active nurture of love in ourselves we experience this deep truth and express it in the way we act towards others, we redeem ourselves, and potentially, all of humanity. The stories, “myths” if you will, help us towards this experiential realization, a realization that is utterly unrelated to “belief”, a realization which could be called “Christian Satori”. The uniqueness of Christianity, as far as I can tell, is this emphasis on “love”. Unfortunately, the methodology of Christianity, with its historical emphasis on grasping ever harder at “belief”, is deeply flawed, leading backwards to the brutishness, rather than forward to love. Certain Christian thinkers, Thomas Merton for example, seem to have realized that Zen practice can be helpful in reaching a deeper understanding of their religion. One aspect of a Western Zen would be its applicability to a Western religious practice of a more deeply realized Christianity. Actually, whether or not “love” is embedded in the universe, we, as humans, are susceptible to it, and can choose to base our lives on realizing its full depths in our beings.

Getting back to “reality”, I’ll consider possible insights from traditional Eastern Zen. So far in talking about Zen I’ve emphasized the Soto school of Japanese Zen and have tried to show how various Western ideas are susceptible to a deeper understanding by means of what might be called Western Zen. Actually, I claim that the insights of Zen lie below any cultural trappings; and that for a complete understanding, particularly as such might relate to “reality”, one should consider Zen in all its manifestations. The Rinzai Japanese school is the one we typically find written about in the US. It is the school which perhaps (I’m pretty ignorant about such matters) has deeper roots in China, where Zen originated and the discipline of concentrating on Koans came into being. An excellent introduction to this school is the book Zen Comments on the Mumonkan, by Zenkei Shibayama, Harper and Row, 1974. The Chinese master Wu-men, 1183-1260, collected together 48 existing Koans and published them in the book Wu-men kuan. In Japan Wu-men is called “Mumon” and his book is called the Mumonkan.
During the late 1960’s and early 1970’s I attended an annual conference of what was then called the Society for Religion in Higher Education. Barbara, my wife at the time, as a former Fulbright scholar, was an automatic member of this Society. As her husband I could also attend the conference. The meetings of the Society were always very interesting, with deeply insightful discussions going on day and night. These discussions never much concerned belief in anything, but concentrated on questions of meaning and values. In fact, the name of the Society was later changed to the Society for Values in Higher Education. During one of the last meetings I attended, possibly in 1972, there was much discussion about a new Zen book that Kenneth Morgan, a member of the Society, was instrumental in bringing into being. Professor Morgan had arranged for the Japanese Master Zenkei Shibayama to give Zen presentations of the Mumonkan at Colgate University. The entire Mumonkan had been translated into English by Sumiko Kudo, a long-time acolyte at Master Shibayama’s monastery, and was soon to be published. Having committed to understanding Zen, I was very interested in all of this and looked forward to seeing the book. After moving to Oregon in 1974 I kept my eyes open for it and immediately bought it when it first appeared at the University of Oregon bookstore. Later, I developed a daily routine of doing some Yoga after breakfast and then reading one of the Koans.

The insights that the Koans are to help one realize are totally beyond language. The Koans may be considered a kind of verbal Jiujitsu which, when followed rationally, will throw one momentarily out of language-thinking into an intuitive realization of some sort. I had encountered various Koans before working through the Mumonkan and had found little insight, but, as a student of physics and mathematics, thought of them as fascinating problems to be enjoyed and solved. I realized that in working on a difficult problem in math or physics, the crucial breakthrough often comes via intuition. One has a sudden insight, and even before trying to apply it to the problem, one realizes that one has found a solution. In a technical area one’s insight can be attached to mathematical or scientific language and the solution is a concrete expression which solves a concrete problem. I realized that with Zen, one might have a similar kind of intuitive insight even if it could not be expressed in ordinary language, but, perhaps, could be stated as an answering Koan to the one posed.

Another metaphor, besides the Jiujitsu one, is the focusing of an optical instrument, such as a microscope, telescope or binoculars. Especially when trying to focus a microscope one can be too enthusiastic in turning the focusing wheel and turn right past the focus, seeing that for an instant one had it, but that it was now gone. With a microscope one can recover the focus. With a Zen Koan the momentary insight is usually lost and efforts at recovery hopeless. A somewhat better example of this focusing metaphor occurred when I was a professor at Auburn University. One quarter I taught a lab for an undergraduate course in electricity and magnetism. This was slightly intimidating as I was a theoretical physicist with little background in dealing with experimental apparatus. One afternoon the experiment consisted of working with an ac (alternating current) bridge similar to a Wheatstone bridge for direct current, but with a complication arising from the ac.
Electrical bridges were developed in the nineteenth century to measure certain electrical quantities which are these days more easily measured by other means. Nowadays the bridges mainly have pedagogical value. With a Wheatstone bridge one achieves a balance in the bridge by adjusting a variable resistor until the current across the bridge, measured by a delicate ammeter, vanishes. One can then deduce the value of an unknown resistor in the circuit. With ac there is not only resistance but also a quantity called reactance, which arises because a magnetic coil or a capacitor impedes an alternating current without dissipating any energy. To adjust an ac bridge, one twiddles not only a variable resistance but also a variable magnetic coil (inductor), which changes the reactance.

In the lab there were about 5 or 6 bridges to be set up, each tended by a pair of students. The students put their bridges together with no difficulties; but then, after about 10 minutes, it became clear that none of the student teams had been able to balance their bridge. The idea was to adjust one of the two adjustable pieces until there was a dip in the current through the ammeter, then adjust the other until the dip increased, continuing in this back-and-forth manner until the current vanished or became very small. It turned out that no matter what the students did, the current through the ammeter never dipped at all. Of course, the students turned to their instructor for help in solving their problem and I was on the spot. The experience the students had is quite similar to dealing with a Koan. No matter what one does, how much one concentrates, or how long one works at it, the Koan never comes clear. With the ac bridge the students could actually have balanced it by a systematic process, but this would have taken a while. I should have suggested this, but didn’t think of it. Instead I had a pretty good idea of some of the quantities involved in the circuit, whipped out my slide rule (no calculators in those days), and suggested a setting for the inductor. This setting was close enough that there was a current dip when the resistor was adjusted and all was well.

The reason that balancing an ac bridge is so difficult is that the two quantities concerned, the resistance R and the reactance X, are, in a sense, at right angles to each other, even though they are both measured in an electrical resistance unit, ohms, which is not spatial at all. Nevertheless, even though non-spatial, they satisfy a Pythagorean kind of equation, R² + X² = Z², where Z is called the impedance of the ac circuit. The quantities R and X can be plotted at right angles to each other and a triangle made with Z as the hypotenuse. If one adjusts either R or X separately, one is reducing the contribution of one leg of the triangle towards the impedance, which does not greatly affect the impedance, at least not enough to noticeably change the current through the ammeter of an ac bridge. (A one-line calculation below makes this precise.) Incidentally, what I’ve just explained is a trivial example of a tremendously important idea in theoretical physics and mathematics called isomorphism, in which quantities in wildly different contexts share the same mathematical structure. I hope that the analogies of verbal Jiujitsu and getting things into focus make somewhat clearer the problem of dealing with Koans. One might well ask if such dealing is worth the trouble and, on a personal note, what kind of luck I’ve had with them, especially as they might throw some light on the nature of “reality”.
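Here is that promised one-line calculation (my gloss on the triangle picture, with R and X read as the residual imbalance in each quantity):

```latex
Z = \sqrt{R^{2} + X^{2}}
\quad\Longrightarrow\quad
\frac{\partial Z}{\partial R} = \frac{R}{Z}, \qquad
\frac{\partial Z}{\partial X} = \frac{X}{Z}.
```

If the bridge is badly out of balance in reactance, so that X ≫ R, then ∂Z/∂R = R/Z is nearly zero: twiddling the resistor alone barely changes Z, and hence barely changes the ammeter current. Each leg of the triangle has to be brought down before adjusting the other produces a visible dip, which is why the suggested inductor setting unstuck the students.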
First, I must say that I have found that engaging the Koans of the Mumonkan is very worthwhile even though most of them remain completely mysterious to me. Moreover, even though I have had epiphanies when reading some of the Koans or the comments about them, there is no way for me to tell whether or not I have really understood what, if anything, they are driving at. Nevertheless, after spending some years with them, off and on, in a very desultory, undisciplined manner, I feel that they have helped indirectly to make my thinking clearer. My approach when I first spent a year going through Zen Comments was to do a few minutes of Yoga exercises, with Yoga breathing and meditation, attempting to clear my mind. Then I would carefully read the Koan and the comments, not trying to understand at all, while continuing meditation. Typically, at that point, I would have a peaceful feeling from the meditation but no epiphany or understanding. I would then put the book aside and go about the business of the day until I repeated this exercise with the next Koan the next day. Sometimes I would skip a day and sometimes I would go back and look at an earlier Koan. This reading was very pleasant as an exercise. I tried to develop an attitude of indifference towards whether I understood anything or not and avoided getting wrought up in trying to break through. My feeling about this kind of exercise is that it does lead to some kind of spiritual growth whether or not the Koans make any sense. As for “enlightenment”, I think it is a loaded word and best ignored. A Western substitute might be “clarity of thought”. Whether or not meditation, studying Koans or just thinking has anything to do with it, I have, on occasion, been unexpectedly thrown into a state of unusual clarity, in which puzzles which once seemed baffling seemed to come clear.

As for the Zen Comments, I might make a few suggestions, especially as they relate to “reality”. Consider, for example, Koan 19, “Ordinary Mind is Tao”, towards which the metaphor above, of finding a focus, might be relevant. If you haven’t heard about the concept of Tao, pick up and read the Tao Te Ching, Lao Tzu’s fundamental Chinese classic. Tao may be loosely translated as “Deep Truth Path”. Koan 19, as translated by Ms. Kudo, reads as follows: “Joshu once asked Nansen, ‘What is Tao?’ Nansen answered, ‘Ordinary mind is Tao.’ ‘Then should we direct ourselves towards it or not?’ asked Joshu. ‘If you try to direct yourself toward it, you go away from it,’ answered Nansen. Joshu continued, ‘If we do not try, how can we know that it is Tao?’ Nansen replied, ‘Tao does not belong to knowing or not knowing. Knowing is illusion; not knowing is blankness. If you really attain to Tao of no-doubt, it is like the great void, so vast and boundless. How then can there be right or wrong in the Tao?’ At these words Joshu was suddenly enlightened.”

Mumon commented, and his comment is very relevant: “Questioned by Joshu, Nansen immediately shows that the tile is disintegrating, the ice is dissolving, and no communication whatsoever is possible. Even though Joshu may be enlightened, he can truly get it only after studying for thirty more years.” I picked this particular Koan because it is one of the few that I feel I actually understand (although I may need another thirty years to really get it). Of course, I can in no way prove this. You must NOT be naïve and think that I understand anything. Furthermore, there is no real explanation of the Koan I can give.
I can make a few remarks which should be considered as random twiddles of dials that may chance to zero the impedance in your mind. First, the whole thing is a logical mess. On the one hand there is nothing special or esoteric about "deep truth path". It is just the ordinary world (reality) that we sense. On the other hand, when we get "it", the ordinary world dissolves and we feel an overwhelming sense of the infinite ignorance and non-being which surrounds the small island of knowledge we have attained in our human history so far. In fact, both the ordinary and the transcendent are simultaneously present to our awareness, and one cannot be considered more significant than the other.

Note that this Koan is superstition free. There are no claims of esoteric knowledge. There are no contradictions of any scientific or historical claims to knowledge. There are no contradictions of anything we might consider superstitions. There is no contradiction of the doctrines of any religion. One might say that the Koan is empty of content. Of verbal content, that is. There is an implicit criticism of Aristotelian logic with its excluded middle. As I've already pointed out more than once in this blog, logic has a limited applicability. Part of the "game" of science is to accept only statements to which logic DOES apply. I may later go into stories from the history of physics about the difficulty of playing this exciting game, keeping logic intact, when experimental evidence seems to deny it. However, the "game" of physics or any other science is not all of life; and, in fact, Aristotelian logic has been, as I've called it in earlier blogs, "the curse of Western Philosophy" and an impediment to a deeper understanding of realities outside of science. There is more to say about the Mumonkan, but I will leave such to a later blog post.

As to differences between Soto and Rinzai Zen, I wonder how serious these really are. Koan 19 seems to embody the Rinzai idea of instantaneous enlightenment until one sees Mumon's comment about another thirty years being required for Joshu to really get it. The Soto doctrine is of gradual enlightenment and a questioning of the very "reality" of the enlightenment concept. A metaphor for either view is the experience of trying to get above a foggy day in a place like Eugene, Oregon, where, when the winter rain finally stops, the clear weather is obscured by a pea-soup fog. One climbs to a height such as Mt. Pisgah or Spencer's Butte and often finds that though the fog is thinner, with hints of blue sky, it is still present. But then there is perhaps a partial break and one sees through a deep hole towards a clear area beyond the fog. This vision may be likened to an epiphany or even to the "Satori" of Rinzai Zen. If we imagine we could wait on our summit for years until, after many breaks, the fog completely cleared away, that would be full enlightenment.

Leaving any further consideration of Koan 19, I will end this post on a personal note. If indeed I've had a deep enough epiphany to consider it as Satori, this breakthrough has helped reveal that I have a healthy ego, lots of "ego strength", a concept that Dr. Carr, head of the physics department at Auburn, came up with. Experimental physicists, such as Dr. Carr, like to measure things; "having a lot of ego strength" was his amusing term for people who are overly wrapped up in themselves. My possible Zen insights have not diminished my ego at all. Rather, they have helped to reveal it.
I've learned not to be too exuberant about insights which, as a saying goes, "leave one feeling just as before about the ordinary world except for being two inches off the ground." If I get too exuberant, I wake up the next day feeling "worthless", in the grip of depression. This is a reaction to an unconscious childhood ego build-up in the face of very poor self-esteem. Part of spiritual growth is perhaps not losing one's ego, but lessening the grip it has on one. I hope that further practice helps me in this regard. Perhaps some psychological considerations can be the subject of a later post. I will now, however, work on the foundations for such a post by attempting to clarify the "reality" status of scientific theories.

Funny Numbers

During the century between about 600 and 500 BCE, the first school of Greek philosophy flourished in Ionia. This, arguably, is the first historical record of philosophy as a reasoned attempt to explain things without recourse to the gods or out-and-out magic. But where on earth was Ionia? Wherever it was, it's now long gone. Wikipedia, of course, supplies an answer. If one sails east from the body of Greece for around 150 miles, passing many islands in the Aegean Sea, one reaches the mainland of what is now Turkey. Along this coast, at about the same latitude as the north coast of the Peloponnesus (37.7 degrees N), one finds the island of Samos, a mile or so from the mainland; and just to the north is a long peninsula poking west which in ancient times held one of the Ionian city-states. Wikipedia tells us that this city-state, along with many others along the nearby coast, formed the Ionian League, which in those days was an influential part of ancient Greece, allying with Athens and contributing heavily, later on, to the defeat of the Persians when they tried to conquer Greece. One can look at Google Earth and zoom in on these islands, and in particular on Samos, seeing what is now likely a tourist destination with beaches and an interesting, rocky, green interior. On the coast to the east and somewhat south of Samos was the large city of Miletus, home to Thales, Anaximander and the rest of the Ionian philosophers.

At around 570 BCE, on the island of Samos, Pythagoras was born. Nothing Pythagoras possibly might have written has survived, but his life and influence became the stuff of conflicting myths interspersed with more plausible history. His father was supposedly a merchant and sailed around the Mediterranean. Legend has it that Pythagoras traveled to Egypt, was captured in a war with Babylonia and, while imprisoned there, picked up much of the mathematical lore of Babylon, especially in its more mystical aspects. Later freed, he came home to Samos, but after a few years had some kind of falling out with its rulers and left, sailing past Greece to Croton on the foot of Italy, which in those days was part of a greater Greek hegemony. There he founded a cult whose secret mystic knowledge included some genuine mathematics, such as how musical harmony depended on the length of a plucked string and the proof of the Pythagorean theorem, a result apparently known to the Babylonians for a thousand years previously, but possibly never before proved. Pythagoras was said to have magic powers, could be at two places simultaneously, and had a thigh of pure gold.
This latter "fact" is mentioned in passing by Aristotle, who lived 150 years later, and is celebrated in lines from the Yeats poem, Among School Children:

Plato thought nature but a spume that plays
Upon a ghostly paradigm of things;
Solider Aristotle played the taws
Upon the bottom of a king of kings;
World-famous golden-thighed Pythagoras
Fingered upon a fiddle-stick or strings
What a star sang and careless Muses heard:

Yeats finishes the stanza with one more line summing up the significance of these great thinkers: "Old clothes upon old sticks to scare a bird." Although one may doubt the golden thigh, quite possibly Pythagoras did have a birthmark on his leg.

I became interested in Ionia, and then curious about its history and significance, because I recently wondered what kind of notation the Greeks had for numbers. Was their notation like Roman numerals or something else? I found an internet link which explained that the "Ionian" system displaced an earlier "Attic" notation throughout Greece, and then went on to explain the Ionian system. In the old days, when a classic education was part of every educated person's knowledge, this would be completely clear as an explanation. Although I am old enough to have had inflicted upon me three years of Latin in high school, since then I had been exposed to no systematic knowledge of the classical world, so I was entirely ignorant of Ionia, or at least of its location. I had heard of the Ionian philosophers and had dismissed their philosophy as being of no importance, as indeed is the case, EXCEPT for their invention of the whole idea of philosophy itself. And, of course, without the rationalism of philosophy, it is indeed arguable that there would never have been the scientific revolution of the seventeenth century in the West. (Perhaps that revolution was premature without similar advances in human governance and will yet lead to disaster beyond imagining in our remaining lifetimes. Yet we are now stuck with it and might as well celebrate.)

The Ionian numbering system uses Greek letters for the numerals from 1 to 9, then further letters for 10, 20, 30 through 90, and more letters yet for 100, 200, 300, etc. The total number of symbols is 27, quite a brainful. The important point about this notation, along with the Egyptian, Attic, Roman and other ancient Western systems, is that position within a string of numerals carries no absolute meaning; Roman numerals use only relative position. This relative positioning helps by reducing the number of symbols needed in a numeric notation, but is a dead end compared to an absolute meaning for position, which we will go into below. The lack of meaning for position in a string of digits is similar to written words, where the pattern of letters within a word has significance but not the place of a letter within the word, except for things like capitalizing the first letter or putting a punctuation mark after the last. As an example of the Ionian system, consider the number 304, which would be τδ, τ being the symbol for 300 and δ being 4. There is no need for zero, and, in fact, these could be written in reverse order, δτ, and carry the same meaning.

In thinking about this fact and the significance of rational numbers in the Greek system, I came to understand some of the long history, with its sparks of genius, that led in India to OUR numbers. In comparison with the old systems ours is incredibly powerful, but with some complexity to it.
I can see how, with unenlightened methods of teaching, trying to learn it by rote can lead to early math revulsion and anxiety rather than to an appreciation of its remarkable beauty, economy and power. In the ancient Western systems there is no decimal point and nothing corresponding to the way we write decimal fractions to the right of the decimal point. What we call rational numbers (fractions) were to Pythagoras and the Greeks all there was. They were "numbers", period, and "obviously" any quantity whatever could be expressed using them.

Pythagoras died around 495 BCE, but his cult lived on. Sometime during the next hundred years, one of his followers disproved the "obvious", showing that no "number" could express the square root of 2. This quantity, √2, by the Pythagorean theorem, is the hypotenuse of a right triangle whose legs are of length 1, so it certainly has a definite length, and is thus a quantity, but to the Greeks it was not a "number". Apparently, this shocking fact about root 2 was kept secret by the Pythagoreans, but was supposedly betrayed by Hippasus, one of them. Or perhaps it was Hippasus who discovered the irrationality. Myth has it that he was drowned (either by accident or deliberately) for his impiety towards the gods.

The proof of the irrationality of root 2 is quite simple, nowadays, using easy algebra and Aristotelian logic. If a and b are integers, assume a/b = √2. We may further assume that a and b have no common factor, because we may remove any common factors first. Squaring and rearranging, we get a²/2 = b². Since b² is an integer, a²/2 must also be an integer, so a² is even; and since the square of an odd number is odd, "a" itself must be divisible by 2. Substituting 2c for a in the last equation and then rearranging, we find that b is also divisible by 2. This contradicts our assumption that a and b shared no common factor. Now we apply Aristotelian logic, whose key property is the "law of the excluded middle": if a proposition is false, its contrary is necessarily true; there is no "weaseling" out. In this case, where √2 either is a fraction or isn't, Aristotelian logic applies, which proves that a/b can't be √2.

The kind of proof we have used here is called "proof by contradiction": assume something and prove the assumption false; then, by the law of the excluded middle, the contrary of what we assumed must be true. In the early twentieth century, a small coterie of mathematicians, called "intuitionists", arose who distrusted proof by contradiction. Mathematics had become so complex during the nineteenth century that these folks suspected that there might, after all, be a way of "weaseling" out of the excluded middle. In that case only direct proofs could be trusted. The intuitionist idea did not sit well with most mathematicians, who were quite happy with one of their favorite weapons.

Getting back to the Greeks and the fifth century BCE, one realizes that after discovering the puzzling character of √2, the Pythagoreans were relatively helpless, in part because of inadequacies in their number notation. I haven't tried to research when and how progress was made in resolving their conundrum during the 25 centuries since Hippasus lived and died; but WE are not helpless, and with the help of our marvelous number system and a spreadsheet such as Excel, we can show how the Greeks could possibly have found some relief from their dilemma. The answer comes by way of what are called Pythagorean triplets: three integers like 3, 4, 5 which satisfy the Pythagorean law. With 3, 4, 5 one has 3² + 4² = 5². Other triplets are 8, 15, 17 and 5, 12, 13.
There is a simple way of finding these triplets. Consider two integers p and q where q is larger than p, where if p is even, q is odd (or vice versa), and where p and q have no common factor. Then let f = q² + p², d = q² − p², and e = 2pq. One finds that d² + e² = f². Some examples: p = 1, q = 2 leads to 3, 4, 5; p = 2, q = 3 leads to 5, 12, 13. These triplets have a geometrical meaning in that there exist right triangles whose sides have lengths in the ratios of Pythagorean triplets.

Now consider p = 2, q = 5, which leads to the triplet 20, 21, 29. If we consider a right triangle with these lengths, we notice that the sides 20 and 21 are pretty close to each other in length, so that the shape of the triangle is almost the same as one with sides 1, 1 and hypotenuse √2. We can infer that 29/21 should be less than √2 and 29/20 should be greater than √2. Furthermore, if we double the triangle to 40, 42, 58, and note that 41 lies halfway between 40 and 42, the ratio 58/41 should be pretty darn close to √2. We can check our suspicion about 58/41 by using a spreadsheet, and find that 58/41 is 1.41463 to 5 places, while √2 to 5 places is 1.41421. The difference is 0.00042, so the approximation 58/41 is off by about 42 parts in 100,000, roughly 0.03%. The ancient Greeks had no way of doing what we have just done; but they could have squared 58 and 41 to see if the square of 58 was about twice the square of 41. What they would have found is that 58² is 3364 while 2 × 41² is 3362, so the fraction 58/41 is indeed a darn good approximation.

Would the Greeks have been satisfied? Almost certainly not. In those days Idealism reigned, as it still does in modern mathematics. What is demanded is an exact answer, not an approximation. While there is no exact fraction equal to √2, we can find fractions that get closer, closer and forever closer. Start by noticing that a 3, 4, 5 triangle has legs 3, 4 which, though not as close in length as 20, 21, are only 1 apart. Double the 3, 4, 5 triangle to 6, 8, 10 and consider an "average" leg of 7 relative to the hypotenuse of 10. The fraction 10/7 = 1.429 to 3 places while √2 = 1.414, so 10/7 is off by only about 1%, remarkably close. Furthermore, squaring 10 and 7, one obtains 100 and 49, and 100/49 is very nearly 2 = 100/50. The Pythagoreans could easily have found this approximation and might have been impressed, though certainly not satisfied.

I discovered these results about a month or so ago when I began to play with an Excel spreadsheet. Playing with numbers for me is relaxing and fun; it is a pure game whether or not I find anything of interest. I suspect that this kind of "playing" is how "real" mathematicians do find genuinely interesting results, and, if lucky, may come up with something worthy of a Fields Medal, equivalent in mathematics to a Nobel prize in other fields. While my playing is pretty much innocent of any significance, it is still fun, throws some light on the ancient Greek dilemma, and, for those of you still reading, shows how a sophisticated idea from modern mathematics is simple enough to be easily understood. With spreadsheet in hand, what I wondered was this: p,q = 1,2 and p,q = 2,5 lead to approximations of √2 via Pythagorean triplets. Are there other p,q's that lead to even better approximations? To find such I adopted the most powerful method in all of mathematics: trial and error. With a spreadsheet it is easy to try many p,q's, and I found that p = 5, q = 12 led to another, even better, approximation, off by about 1 part in 100,000.
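For readers who would rather play along in code than in Excel, here is a minimal Python sketch of the recipe just described. The p,q pairs are the ones from the text; the code is an illustration, not a reconstruction of the actual spreadsheet.

```python
import math

def triplet(p, q):
    """Pythagorean triplet from coprime integers p < q of opposite parity."""
    return q*q - p*p, 2*p*q, q*q + p*p

for p, q in [(1, 2), (2, 3), (2, 5), (5, 12)]:
    d, e, f = triplet(p, q)
    assert d*d + e*e == f*f                 # the Pythagorean law holds
    approx = 2*f / (d + e)                  # doubled triangle, averaged legs
    error = abs(approx - math.sqrt(2))
    print(f"p={p}, q={q}: {d},{e},{f} -> {approx:.6f} (off by {error:.1e})")
```

The quantity 2f/(d + e) is just the doubling-and-averaging trick from the text; running this, the near-isosceles pairs (2, 5) and (5, 12) stand out immediately, while (2, 3) gives a triangle too lopsided to approximate √2 at all.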
With three p,q's in hand I could refine my guesswork and soon came up with p = 12, q = 29. I noticed that in the sequence 1, 2, 5, 12, 29, … successive pairs gave increasingly better p,q's. This was an "aha" moment and led to a question: could I find a rule and extend this sequence indefinitely?

In my life there is a long history of trying to find rules for sequences of numbers. In elementary school at Hanahauoli, a private school in the Makiki area of Honolulu, I learned elementary arithmetic fairly easily, but found it profoundly uninteresting, if not downright boring. Seventh grade at Punahou was not much better, but was interrupted partway through the year by the Pearl Harbor attack of December 7, 1941. The Punahou campus was taken over by the Army Corps of Engineers and our class relocated to an open pavilion on the University of Hawaii campus in lower Manoa Valley. I mostly remember enjoying games of everyone trying to tackle whoever could grab and run with a football, even though I was one of the smaller children in the class. Desks were brought in and we had classes in groups while the rain poured down outside the pavilion. Probably it was during this year that we began to learn how fractions could be expressed as decimals. In the eighth grade we moved into an actual building on the main part of the University campus and had Miss Hall as our math teacher. The math was still pretty boring, but Miss Hall was an inspiring teacher, one of those legendary types with a fierce aspect but a heart of gold. We learned how to extract square roots, a process I could actually enjoy, and Miss Hall told us about the fascinating things we would learn as we progressed in math. There would be two years of algebra, geometry, trigonometry and, if we progressed through all of these, the magic of "calculus". It was the first time I had heard the word and, of course, I had no idea what it might be about, but I began to find math interesting.

In the ninth grade we moved back to the Punahou campus and our algebra teacher was Mr. Slade, the school principal, who had decided to get back to teaching for a year. At first we were all put off a bit by having the fearsome principal as a teacher, but we all learned quickly that Mr. Slade was actually a gentle person and a gifted teacher. As we learned the manipulations of algebra and how to solve "word problems", Mr. Slade would, fairly often, write a list of numbers on the board and ask us to find a formula for the sequence. I thoroughly enjoyed this exercise and learned to take differences, or even second differences, of pairs in a sequence. If the second differences were all the same, the expression would be a quadratic and could easily be found by trial and error. Mr. Slade also tried to make us appreciate the power of algebra by explaining what was meant by the word "abstraction". I recall that I didn't have the slightest understanding of what he was driving at, but my intuition could easily deal with an actual abstraction without understanding the general idea: that in place of concrete numbers we were using symbols which could stand for any number. Later, when I did move on to calculus, which involves another step up in abstraction, I at first had difficulty with the notation f(x), called a "function" of x, an abstract notation for any formula, or indeed a representation of a mapping that could occur without a formula.
I soon got this idea straight and had little trouble later with the next step of abstraction, to the idea used in quantum mechanics of an abstract "operator" that changes one function into another.

Getting back to the sequence 1, 2, 5, 12, 29, … I quickly found that taking differences didn't work; the differences kept growing, because the sequence turns out to have an exponential character. I soon discovered, however, using the spreadsheet, that quotients worked: take 2/1, 5/2, 12/5, 29/12, all of which become more and more similar. Then, multiplying 29 by the last quotient, I got 70.08. Since 29 was odd, I needed an even number for the next q, so 70 looked good, and indeed I confirmed that the triplet resulting from 29, 70 was 4059, 4060, 5741, with an estimate for √2 that was off by only 1 part in 100 million. After 70 I found the next few members of the sequence: 169, 408, 985. The multiplier to try for the next member seemed to be closing in on 2.4142, or 1 + √2. At this point I stopped short of trying for a proof of that possibility, both because I am lazy and because the possible result seemed uninteresting. What is interesting is that the sequence of p,q's goes on forever and that the approximations for √2 from the resulting triplets converge on √2 as a limit. The idea of a sequence converging to a limit was only rigorously defined in the 19th century. Possibly it might have provided satisfaction to the ancient Greeks. Instead, the idea of irrational numbers that were beyond fractions became clear only with the invention by the Hindus in India of our place-based numerical notation and the number 0.

Place-based number notation was developed separately in several places: in ancient Babylon, in the Maya civilization of Central America, in China and in India. A place-based system with a base of 10 is the one we now use. Somewhere in one's education one has learned about the 1's column just to the left of the decimal point, then the 10's column, the 100's column and so forth. When the ancient Hindus and the other civilizations began to develop the idea of a place-based system, there was no concept of zero. Presumably the thought was that symbols should stand for something. Why would one possibly need a symbol that stood for nothing? So one would begin with symbols 1 through 9 and designate 10 by "1·". The dot "·" is called a "place holder". It has no meaning as a numeral, serving instead as a kind of punctuation mark which shows that one has "10", not 1. Using the place holder in the example above of Ionian numbers, the τδ would be 3·4, the dot holding the tens place open. The story with place holders is that the Babylonians and Mayans never went beyond them, but the Hindus gradually realized the dot could have a numerical meaning in its own right, and "0" was discovered (invented?).

Recently, on September 13th or 14th, 2017, there was a flurry of reports that carbon dating of an ancient Indian document, the Bakhshali manuscript, revealed that some of its birch bark pages were 500 years older than previously estimated, dating to a time between 224 and 383 AD. The place holder symbol occurring ubiquitously in the manuscript was called shunya-bindu in the ancient Sanskrit, translated in the Wikipedia article about the manuscript as "the dot of the empty place".
(Note that in Buddhism shunyata refers to the "great emptiness", a mystic concept which we might take as the profound absence of being logically prior to the "big bang".) A readable reference to the recent discovery is available online. According to the Wikipedia article, the Bakhshali manuscript is full of mathematics, including algebraic equations and negative numbers in the form of debts. As a habitual skeptic, I wondered when I first heard about the new dating whether Indian mathematicians, with their brilliant intuition, hadn't immediately realized the numerical meaning of their place holder. Probably they did not.

An easy way to see the necessity of zero as a number is to consider negative numbers as they join to the positives. In thinking and teaching about math, I believe that using concrete examples is the best road to an abstract understanding, and debts are a compelling example here. At first one might consider one's debts as a list of positive numbers, amounts owed. One would also have another list of positive numbers, one's assets, amounts owned. The idea might then occur of putting the two lists together, using "−" signs in front of the debts. As income comes in, one's worth goes up: for example, −3, then −2, then −1. Then what? Before going positive, there is a time when one owes nothing and has nothing. The number 0 signifies this time, before the next increment of income sends one's worth to 1. The combined list would then be …, −3, −2, −1, 0, 1, 2, 3, … . Doing arithmetic, using properly extended arithmetic rules, when one wants to combine various sources of debt and income becomes completely consistent, but only because 0 was used.

If the above seems as if I'm belaboring the obvious, let me then ask you why, when considering dates, the next year after 1 BCE is not 0, but 1 AD. Our dating system was made up during an early time, before we had adopted "0" in the West. Historians have to subtract 1 when calculating intervals in years between BCE and AD, and centuries end in hundreds, not 99's. This example is a good one for showing that if one gets locked into a convention, it becomes difficult if not impossible to change. I was quietly amused at the outcry as Y2K, the year 2000, came along, with many insistent voices pointing out the ignorance of us who considered the 21st century to have begun. The idea of zero is not obvious, and I hope I've shown, in considering the Pythagoreans and their dilemma with square roots, just how crippled one is trying to get along without it.

My last post was on 8/11/17, shortly before we needed to prepare for a big road trip from Bend, Oregon to the Maritime Provinces of Canada, followed by visits to Sue's family in Lake George, New York and my daughter's family in Annapolis, Maryland. Preparations for the trip had to be made early because just before the trip there was the total solar eclipse of 2017 on Monday the 21st, the shadow passing 25 or so miles north of us. In the days before the eclipse our house filled with family. We had made viewing plans and they worked out well. On Monday before dawn we drove to an open field northwest of Prineville, saw the sky darken, leaf shadows sharpen, and felt the temperature fall by 12 degrees or so. We then watched as a black shadow fell on Gray's Butte 10 miles to the west and rushed towards us at 1700 MPH. The last bright spark on the sun's rim flickered out; and there was the corona and Baily's shining diamonds along the rim of the shadowed sun.
The entire experience was as stunning as advertised and brought home to us the reality of cosmic events. There really is a moon out there, a sun and an entire cosmos whose very existence is an impenetrable mystery that we can experience during our brief stay in conscious awareness. After the eclipse we waited a day for the traffic to clear, took my computer to the shop, finding out its motherboard was dead, then headed out across the continent after taking Sue's sister Nancy to the Portland airport. The trip was long and accomplished what travel should. We saw new country and discovered that some Canadians were more concerned with the possible shortcomings of their prime minister than with those of Trump. As a child I'd read about the tides of the Bay of Fundy but had no idea even where it was. Now we saw the 45-foot tide come in (the record is some 50-odd feet), finally got a good look at a tidal bore, and added three provinces to our list. (We've traveled in all 50 US states, so we are now adding Canadian provinces and territories to our travel deeds.) We had been somewhat leisurely going east to the Maritimes, but then, after our family visits, we drove across the US in six days, seeing some new territory on the way and being moved by a visit to the California Trail Interpretive Center on I-80 in Nevada. One reads about the hardships and heartbreaks of the westward migration and understands intellectually, but seeing the exhibits and dioramas makes for a much deeper emotional understanding. Arriving home on September 30th, we settled in for a week or two before going to the Stanford Alpine Club reunion. Now we're really back home with a new computer fired up and it's time to write.

In previous posts I've expressed the theme that Western thought would be more satisfying if informed by the spirituality of the East, especially Zen Buddhism. Now I want to turn a somewhat skeptical eye on the foundations of that idea, but later move away from the skepticism to try to find a clearer and deeper exposition. I begin by considering what seems to be an unbridgeable gap between the Western idea, that in philosophy, science and humanism meaning can only be apprehended in words, and the Eastern idea, in Zen, that the deepest meaning is totally beyond direct expression in language.

Let me first be skeptical about extreme claims for language. I've already talked about Plato and Wittgenstein with their thoughts on the limits of what can be said. Some humanists not only ignore possible limits to what language can express, but claim that only with language can there even be thinking. That idea seemed absurd to me the first time I heard it and has so seemed ever since. Perhaps it makes sense if one replaces "thinking" by "intellectualizing". To me "thinking" is simply conscious mental processing and, at least for me, can occur in an entirely wordless manner. For example, when out hiking one often comes to a stream without a bridge but with rocks that will provide stepping stones if one doesn't slip and take a fall into the water. When I arrive at such a place, I take in the scene, sketching out possible paths and making a wordless judgement about the slipperiness and stability of the rocks along each possible route. If one path seems feasible and best, I concentrate, get balanced and begin to hop. There has clearly been "thought" here, but none of it has been put into words.
Of course, it could have been, and on some occasions the hiking party might well discuss the matter, analyzing verbally the various possibilities before making a decision about the crossing. Another example concerns a bear in Yosemite Valley who presumably lacked language, but through experience and awareness learned about canned goods. In one instance, during the night at Camp 4, a less experienced member of our group had left a rucksack full of canned food out in the open. The next morning we found the rucksack torn apart and cans scattered about. Some of the cans had been ripped open and the contents eaten. Others were untouched except for a single tooth hole in one end. The bear knew that some cans might have less desirable contents and saved energy by a "puncture and sniff" methodology whose existence, to me, implied "thought".

While thought clearly can be nonverbal, it seems to me that Zen goes further. Let me postulate that for Zen the deepest awareness about life, and the emotional reconciliation with our non-existence and loss of awareness in death, is not only wordless but, unlike the experience of stream crossing, necessarily completely nonverbal; further, that attempting to understand this experience through language is not only a distraction but counterproductive, a false path that hinders rather than helps. Having not had the ultimate Zen experience, I am in an excellent position to be skeptical about this postulate. This skepticism can operate on several fronts.

First, though I'm unwilling to doubt the authenticity of the ultimate enlightenment for people who have claimed to have had this experience, I can doubt that it will ever happen to me. The fact of the matter is that other people having the experience is irrelevant to my spiritual understanding. Furthermore, if in the future I claim to have finally achieved satori, that should be irrelevant to you who read this blog.

Second, I do think the Soto Zen insight is true and relevant: one can gradually gain deeper understanding of life and the world. One asks, "What is the alternative?" Just give up? Abandon the struggle to understand? Gradualism has its attractions in that there is at least the experience of "being in the zone", not only athletically but philosophically and artistically. I definitely HAVE experienced being in the zone, so I know that it can contribute to almost any life activity. It may not be satori, but it may well be a way station on the path and, in any case, is well worth experiencing.

Third, if the ultimate experience is totally unreachable through language, why write about it at all? There are countless books about Zen. The standard conclusion is that one must join an ashram of some sort and devote one's entire life to practices that will possibly bring about enlightenment. From the beginning I have been skeptical about joining a spiritual community. There are too many frauds about, and even sincere gurus have no magic touch for bringing about the desired result. As I've said earlier in this blog concerning spiritual matters, "the buck stops here", with you and me. Spiritual support can possibly be of help but quite possibly also contribute to self-delusion. So why do I write this blog? Simply because I have an irresistible urge to try to "get things straight", to understand as much as possible about everything, to share my ideas, and to become a skillful enough writer to be worth reading. Concerning Zen, I feel that there is a paradox involved: being as skeptical as possible advances Zen.
Smash it. Stomp it. Deny it sincerely. Such a denial of the basic postulate could be considered a Western approach to Zen. A fundamental trait of Western culture is the idea of "speaking out", of not holding back. Accompanying this is a certain lack of respect for authority. The Eastern tendency, on the other hand, is to remain quiet and humble in the face of what likely cannot be said or understood. Besides a deep respect for authority, there is the idea that being forward is being egotistical, "showy" to no end but self-aggrandizement. A Western approach to Zen would be a tradition-denying attempt to actually spell out what "cannot be said", weaving a magic potion in words: a potion that not only makes things perfectly clear but also carries to its reader an emotional acceptance of why one should be content and happy in the thought that the uniqueness each of us possesses vanishes with our death, forever, into the emptiness of non-being. To attempt this kind of verbal depth and clarity is not only very Western but, paradoxically, very Zen. "Let's not grasp at the idea that nothing can be said." At root Zen is neither Eastern nor Western. It is about such a complete letting go that one mustn't get hung up even on the idea of letting go.

As I continue in a possibly too-outspoken Western manner, consider that in what I've said above there is an explicit acceptance of the idea that our awareness does indeed vanish with our death. There is no consciousness after death. Perhaps the mind functions for a short while after the heart is stilled, but such functioning soon comes to an end. In rejecting the idea of "eternal life" I'm applying the spiritual postulate that there be no acceptance of a belief simply because it seems comforting. It certainly would be extremely meaningful and exciting to be reunited with all one's family and friends who have passed away. Whether one could be happily conscious for an eternity is another question, but still it seems that any awareness might well be better than none. As a friend of my wife said, talking about accidents and sickness, painful medical treatment, and long boring recoveries while incapacitated, "Any kind of living you can live with; it's the dying you can't stand." And I think that is the way most of us instinctively feel. Certainly, although there is no certainty about what happens after death, the weight of the evidence seems to me to favor oblivion. Whether or not that is the case, if oblivion is what we really fear, that fear is what we need to grapple with spiritually in order to find understanding and peace.

When I use the word "spiritually", it brings to mind traditional Western religion, in particular Christianity and the belief in God. What are my thoughts on this matter? Here I'll deal with them briefly; there may well be room for deeper consideration in future posts. So… Am I an atheist? Well, no. Do I believe in God? Well, no. Am I an agnostic? Well, no. Surely either one believes in God or one is an atheist. Well, no. The problem, as I've said before, is Aristotelian logic, the curse of Western philosophy and, I might add, of Western thought in general. When formalized, logic is tremendously useful in mathematics, theoretical physics, science generally, and many areas of life. When applied elsewhere, its denial of any possibility beyond true and false, black and white, is untrue to reality. In most areas of life there are shades of grey which Aristotelian logic simply can't deal with.
In the distinction between atheism and belief there is, as well, another problem. The entire distinction seems to me to be stuck in spiritual shallows. Getting lost in controversy about a dichotomy which may well be meaningless, instead of attempting to dive more deeply into spiritual awareness, seems to me a waste of time and life. Let us consider belief in "God". When one uses a word to characterize the deepest experience of spirituality, one inevitably comes to think of God as Something, in particular Something apart from the remainder of existence, having all sorts of contradictory properties. He (certainly not "She" or "It") is all-powerful and all-controlling, but tolerates "evil" and the "devil" as a necessary part of existence. And I have mentioned only one muddle. The problem lies in Naming an ultimate which is beyond what we can possibly know.

In Judaism and the Old Testament of Christianity there is a tradition of revulsion at making images of gods, or of even speaking God's name except once a year. The sin involved is called idolatry: a belittling of the ultimate mystery, belief in a false image of God. It seems clear to me, however, that simply in treating the ultimate as a concept and calling it God, one is close to committing idolatry. Whether idolatry is the deep sin claimed by the Old Testament is possibly questionable, but one can well imagine that the ancients had a sound and provocative insight. The idolatry of Naming the ultimate is likely the root cause of religious conflict. One begins with the Name. From the Name come the tenets. From the tenets, Belief. From Belief comes fanaticism, and we all know where fanaticism leads. Of course, this sequence is by no means logically necessary, and most thoughtful believers realize that "God" is simply a convenient word for what they apprehend in their deepest religious experience: a word that points to an ultimate mystery whose properties are beyond our understanding. For example, the theologian Paul Tillich is very aware of the danger of assigning false attributes to the deity and uses the phrase "the ground of being" instead of "God". Nevertheless, there have been many "believers", past and present, who HAVE followed the sequence from the concept of God to tenets to a tight grip of belief that can only be labeled fanaticism. Fanaticism demands the death of all apostates and war against other religions, or even against other branches of one's own religion. Every thoughtful person should know about the Thirty Years' War (1618-1648), to say nothing of the horrors occurring in the name of Christianity before that period, and understand the potential for fanaticism which lurks in "belief".

So where does this leave us? It seems to me that modern, mainstream Western thought, especially in the sciences, but also in philosophy and the humanities, in realizing the trap of belief, has accepted the unspoken idea that any spirituality involves false beliefs about the deity and a lack of critical thinking; that it leads to an acceptance of SUPERSTITIONS from astrology to witchcraft to evolution by intelligent design; that it leads, in fact, to a rejection of the fundamental skepticism which drives science and, above all, to a total abandonment of reason. On this view, any acceptance of spirituality threatens a new dark age.
What I'm pointing out in this blog is not only that there is no necessary link between spirituality and mindless superstition, but that the extreme skepticism of the spirituality I'm advocating is completely in line with the skepticism informing science and modern thinking in general. For lack of a better name, and to emphasize its doctrine that ungrasping from all belief leads to depths of meaning and understanding free from all superstition, I have called it Zen. This label emphasizes and pays respect to the long historical development in the East of the realization that belief is unnecessary for spiritual well-being. Unfortunately, Zen carries the connotation of Eastern thought, of the quietism mentioned earlier in this post. A form of what I've called Western Zen would comfortably fit with our Western science, philosophy and humanism. Based on "radical ungrasping", it would take up the idea that our spiritual ignorance can drive a quest for spiritual knowledge and answers, growing out of the deep mysteries that have arisen from our secular science and knowledge. Although we have made remarkable progress in science and in other fields in the past several centuries, our remaining ignorance is not only infinite in extent but concerns the questions most significant to our spiritual well-being.

For the deep questions are not going away. What is the meaning of your life or my life? What is the meaning, if any, of our deaths? What is this universe all about anyway? Can one live in a spiritual vacuum? Is one to suppress the urgency of these questions and lose oneself in the anodynes of work, pleasure, sex, sports and consumerism, resisting, of course, the threat of addiction to these as well as to less healthy activities such as drinking, drugs and gambling? Or is one to seek answers in the superstitions mentioned above, or in shallow forms of Fundamentalism, stilling any doubts by an ever tighter grasping at unreasonable beliefs? It seems to me that Western thought, in ignoring its spiritual vacuum, is helping to bring about the very evils it fears.

A final word. What I'm proposing falls short in that it lacks specificity. That fact must be accepted in all humility. Nevertheless, I do think that I've made a showing that there is a path towards a Western spirituality which does not violate the integrity of our thought, and that such a path would fill an important gap.

In my last post, "Two Cultures", I wrote that "…one hopes for a creative amalgam of West and East." So far this blog has concentrated on Eastern, especially Buddhist, ideas, particularly Zen, wondering if Western thought can be helpful in approaching the Zen experience. If I am indeed dedicated to going in the other direction, demonstrating that Zen intuition can contribute to Western philosophy, I need now to understand Western philosophy at a deeper level. In fact, it may well be the case that Eastern and Western approaches to ultimate understanding are immiscible like oil and water, so that far from being helpful to one another their intersection becomes nothing more than a contradictory mess. My intuition says otherwise; but in order for me to specifically find and point out ways that each can help the other combine into a single broader and deeper approach to what it's all about, I need a more thorough appreciation of Western philosophy. That is, I need to understand Plato.
I say Plato because I remembered and then found (in the book I'm about to consider) a quote: "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato", from Process and Reality by Alfred North Whitehead. Besides the Whitehead quote there is a general understanding that Western philosophy only came into full flower with Plato. Plato's works were the Urquell, the spring from which all flowed. Of course, over the years, I've been casually exposed to Plato. At Stanford, all freshmen at the time I was there were required to take the year-long History of Western Civilization course, which consisted of the reading of works deemed significant for Western thought, with lectures and discussions in class. The class was largely wasted on me because, as a freshman, besides being occupied with my interesting roommates, I was on the swimming team, not much interested in history, and bone lazy. I do remember reading Plato's Phaedo, impressed with the story though far from impressed with Socrates's reasons for not being afraid of death. Then, over the years, I ran many times into allusions to the story of "the cave". Then there are Platonic "ideals". None of this exposure really grabbed me.

What did make a difference was running recently into a piece on the internet which discussed a philosophical issue with impressive clarity. Here was someone who could talk philosophy in a way that made sense. The author was a woman named Rebecca Goldstein. Googling her, I found that she was a rather unusual philosopher in that she wrote novels as well as philosophy. I won't get into the interesting biographical details about her because these can easily be found on the internet. After enjoying her novel, 36 Arguments for the Existence of God: A Work of Fiction, I looked in Amazon to see what else she had written and saw listed Plato at the Googleplex: Why Philosophy Won't Go Away. This was available in our library in eBook form, so I read it on my Kindle and then ordered a hard copy from Amazon. Below, in the interests of brevity, I will sometimes refer to Ms. Goldstein as RNG (for Rebecca Newberger Goldstein).

Understanding Plato via the writing of a gifted philosopher who writes with clarity seemed better than trying to find adequate translations of Plato's work or trying to learn classical Greek so that I could read him in the original. Of course, there would be the difficulty of really understanding Plato no matter what the approach. So I will consider Ms. Goldstein's book not as authoritative, but as a foundation for riffs off of what I conceive her to have said about Plato and Western philosophy. Of course, I agree with her thesis that philosophy is here to stay, and I find her criticism of philosophy-jeerers, such as Lawrence Krauss, amusing and telling, though that is not what interests me in her book. Incidentally, I have read Krauss's A Universe from Nothing: Why There is Something Rather than Nothing and found it fascinating. He is a great physics popularizer and, in my opinion, writes philosophically, so his wholesale condemnation of philosophy is not to be taken seriously. Possibly, a critical review of his "Something" book by a philosopher intensified his antagonism toward philosophy to the point that he had to express his outrage. In that state one finds slings and arrows to hurl at philosophy, rather than relaxing one's ideological grip as suggested in my last post. A wholesale condemnation of philosophy is ridiculous.
However, it seems to me that the situation is not either/or, for part of the life blood of philosophy is criticism of philosophy. For example, if in getting at what really matters in philosophy one should consider "differences that make a difference" (Gregory Bateson's definition of "information"), I find that too often philosophers seem to haggle over differences that to me make no difference whatsoever. Perhaps I lack a critical component of what it takes to be a philosopher. Whether or not that is so, I find Ms. Goldstein's writing mostly clear and fascinating.

Before getting into what Ms. Goldstein has to say about Plato, I will mention one more thought about philosophy. With most disciplines, talking about or discussing the discipline is separate from practicing the discipline. Writing about physics, chemistry, molecular biology, sociology, economics, or engineering, for example, is not doing research in or practicing those disciplines. If one writes about philosophy, however, one is actually doing philosophy, whether or not one is a professional, card-carrying philosopher. If one writes ignorantly, without sufficient thought or insight, one is doing "bad" philosophy, easily dismissed; but, nevertheless, one is doing philosophy. The only other subject I can think of offhand which perhaps possesses this characteristic is literature. A literary critic, writing about a literary work, can actually create a piece of literature. I don't think this claim works for history. A historian can do primary research and write up the story she or he finds (readable history always tells a story), but as soon as she talks in general or makes a judgement, she is doing philosophy of history, not history. Perhaps this last claim is merely a quibble, but certainly one reason philosophy will never go away is that thoughtful people will always continue to practice it, making judgements and seeking insights into whatever is on their mind. Whether university departments of philosophy offering degrees in the subject will wither away in the future is another question. It seems to me intuitively unlikely.

Turning to Plato, whether in classical Greece or in today's Googleplex, it is clear that as a professional philosopher RNG has read everything Plato wrote or might have written, probably in more than one translation, as well as what other philosophers have had to say about Plato, including inquiries into the meaning of words in classical Greek and into the ethos of the society that gave rise to Plato's philosophy. A fascinating observation (Googleplex, p. 4) is that it is difficult or impossible to discover what Plato himself personally thought about any of the far-flung positions expounded in his various dialogues. Positions there are aplenty, but no positions that Plato would unambiguously assent to. RNG remarks on the many disagreements that philosophers have had about Plato's various positions and has compared him to Shakespeare as one whose personal views are unknowable. Further (on p. 40), quoting from Plato's Seventh Letter, RNG concludes that "he never committed his own philosophical views to writing." And further, "Plato didn't think the written word could do justice to what philosophy is supposed to do." This in spite of the fact that he wrote extensively.
RNG considers that the dialogue form of Plato's writings suggests that Plato's view of what philosophy is supposed to do is "Nothing less than to render violence to our sense of ourselves and our world, our sense of ourselves in the world." RNG quotes Plato, talking of philosophy: "… for there is no way of putting it in words like other studies. Acquaintance with it must come rather after a long period of attendance in instruction in the subject itself and of close companionship, when suddenly like a blaze kindled by a leaping spark, it is generated in the soul and at once becomes self-sustaining." (Googleplex, p. 40, quoting the Seventh Letter.)

This last sounds suspiciously like the "enlightenment" that is supposed to come out of Buddhist meditation and training. What is different is the methodology. With Plato's philosophy one attains the transcendent state by intense thinking about the conundrums of philosophy, trying to gain insight through reason and rationality into deep questions, compelling but unanswerable; a pursuit which ultimately withdraws from one the "life support" of one's unquestioned certainties, leaving one "free" in an empty universe. Or am I reading too much into a specious resemblance between Plato and Buddhism? Certainly, besides bringing personal enlightenment, philosophy is attempting to bring about insights which can be expressed in language. It seems, in fact, that over the stretch of time since the days of classical Greece, philosophy has concentrated on trying to bring clarity to its questions using language in a precise way, rather than becoming a means of instilling an awareness beyond language. Western philosophy, it seems, has given up the quest for transcendence, relinquishing such a pursuit to religions based on faith.

It seems to me that Zen has a contribution to make here, in that the enlightenment it postulates is beyond language and therefore is irrefutable via language. It is to be approached, according to what I've said earlier in this blog, via a path which totally rejects superstition, magic, or even belief in anything, as far as that is possible. Philosophy, it seems to me, is an excellent Western path for a "seeker" who is attracted in that direction. And if, as I assume, RNG is correct in what she has said about Plato's philosophy, such seeking would not be new to philosophy, but instead a turn of a spiral back towards Plato's original conception.

So much for this post. Later I hope to return to RNG, Plato at the Googleplex and further ideas about a joining of East and West. For the immediate future, however, I would like to take into account the objection that philosophy as a spiritual path is intellectually elitist, as indeed it might seem if one accepts the idea that "elitism" itself is other than an elitist convention. Be that as it may, now that I've brought up the idea of a "seeker", it would be good to point out that seeking can adopt paths that are physical or artistic in nature, though not necessarily anti-intellectual. So, on to the next post…
436(4): General Solution of the Schroedinger Equation

Interesting discussion. It becomes clear that computational quantum chemistry packages can be modified with t -> m(r)^(1/2) t and r -> r / m(r)^(1/2) in the wavefunction. In general all the ideas of UFT415 ff. can be developed with computational quantum chemistry. In the IBM Clementi environment the LCAP system was used with the IBM 3096 and IBM 3084. These systems were also used at the Cornell Theory Center. I think that these computations can now be carried out on desktops. In the first instance it would be very interesting to compute the Lamb shift for 2S_1/2 to 2P_1/2 in atomic H and adjust m(r) for exact agreement with the data. These splittings are now known with great precision.

The spin connection seems to play a similar role to the so-called exchange-correlation potential in N-electron calculations. Perhaps there is a connection. Also, the N-electron calculations have to be performed iteratively until convergence is obtained, i.e. until the new electron potential from your eq. (46) leads to the original total potential entering the solution of the Schrödinger equation.

On 08.04.2019 at 17:44, Doug Lindstrom wrote:
I've attached a pdf version which opened on my computer okay. The doc file was corrupt. As far as I remember, the potential was periodic up to the first Bohr orbit.

On Apr 8, 2019, at 7:53 AM, Horst Eckardt <mail> wrote:
I cannot open the Word document. Besides this, did you apply a periodic potential of infinite length? Normally the charge density as well as the atomic potential have to go to zero for r -> infinity.

On 08.04.2019 at 16:47, Doug Lindstrom wrote:
This is somewhat similar to the structure of the hydrogen atom computed for the Serbian talk back in 2010 (paper attached), which is reassuring. Figure 1 for the electron potential has a minimum near r = 0.1 and a maximum around r = 0.5 for kappa = 2 Pi (orange).

On Apr 7, 2019, at 9:41 PM, Myron Evans <myronevans123> wrote:
Computation of the Valence Charge Density of the Nickel Atom

This is an exceedingly interesting development: this program applies computational quantum chemistry to the m theory and finds an effect on the valence structure of the nickel atom. It is a small effect, as expected, but nevertheless it is a real effect, similar to the Lamb shift, also a small effect. So the generally relativistic quantum mechanics can be coded up in computational quantum chemistry and applied to a vast number of problems. This program can be used to develop the m theory far in advance of analytical solutions of the Schroedinger equation. The latter is analytical only for the H atom, as is well known. From the helium atom onwards, computational methods have to be used. It would be interesting to apply this program to a proton interacting with the nickel atom, using m theory. That might lead to low energy nuclear reactions. I think that this is a big step forward and this program can be used in many ways.

The effect of m space is that the wave functions psi(r) are shifted to the outer region by psi(r) --> psi( r/sqrt(m(r)) ) because r/sqrt(m(r)) >= r. I succeeded in reactivating an old electronic structure program for atoms which a colleague at the TU Clausthal sent me years ago. I calculated the charge density of a Ni atom. The valence charge density (10 electrons) is graphed in the file, in original form and with shifted radius coordinate as above. One has to make the parameter R of the m function quite large to find a visible effect.
I hope that jpg files go through the WordPress upload better than png files.

Horst

On 07.04.2019 at 09:31, Myron Evans wrote:
436(4): General Solution of the Schroedinger Equation

This is given by Eq. (8), and several examples are given. In the usual vacuum-free quantum mechanics the expectation value of the energy from Eq. (8) is <E> = E, but in quantum mechanics in m space (or the vacuum), i.e. generally covariant quantum mechanics, the energy levels are shifted according to Eq. (31). This is a general law of quantum mechanics, true for any spectral line.
I'm TAing linear algebra next quarter, and it strikes me that I only know one example of an application I can present to my students. I'm looking for applications of elementary linear algebra outside of mathematics that I might talk about in discussion section. In our class, we cover the basics (linear transformations; matrices; subspaces of $\Bbb R^n$; rank-nullity), orthogonal matrices and the dot product (incl. least squares!), diagonalization, quadratic forms, and singular-value decomposition.

Showing my ignorance, the only application of these I know is the one that was presented in the linear algebra class I took: representing dynamical systems as Markov processes, and diagonalizing the matrix involved to get a nice formula for the $n$th state of the system. But surely there are more than these. What are some applications of the linear algebra covered in a first course that can motivate the subject for students?

• 2 — See here. – Andrés E. Caicedo Dec 17 '14 at 20:59
• 4 — Take a look here: math.stackexchange.com/questions/344879/… – Riccardo Dec 17 '14 at 21:18
• 2 — There are a lot of great answers already - if I have time later I'll add some specific ones as an answer, but in general, any system that is described by more than a handful of variables or equations is a candidate. I am actually having a hard time thinking of examples where you can't use linear algebra. Some places to look are: almost anything in engineering, physics, or anything related to optimization, of any kind. – thomij Dec 17 '14 at 21:50
• 2 — Eigenvector centrality (finding the principal eigenvector) of an adjacency matrix of a graph is widely used, see this (scroll down): activatenetworks.net/… – Ryan Dec 17 '14 at 23:36
• 7 — I think I'll link to this answer for my linear algebra course next semester, then, proceed to cover none of them :) – James S. Cook Dec 18 '14 at 0:30

19 Answers

I was a teaching assistant in linear algebra last semester and I collected a few applications to present to my students. This is one of them:

Google's PageRank algorithm

This algorithm is the "heart" of the search engine and sorts documents of the world-wide web by their "importance" in decreasing order. For the sake of simplicity, let us look at a system containing only four different websites. We draw an arrow from $i$ to $j$ if there is a link from $i$ to $j$. The goal is to compute a vector $\underline{x} \in \mathbb{R}^4$, where each entry $x_i$ represents the website's importance. A bigger value means the website is more important. There are three criteria contributing to the $x_i$:

1. The more websites contain a link to $i$, the bigger $x_i$ gets.
2. Links from more important websites have a more relevant weight than those of less important websites.
3. Links from a website which contains many links to other websites (outlinks) have less weight.

Each website has exactly one "vote". This vote is distributed uniformly to each of the website's outlinks. This is known as web democracy. It leads to a system of linear equations for $\underline{x}$. In our case, for $$P = \begin{pmatrix} 0&0&1&1/2\\ 1/3&0&0&0\\ 1/3& 1/2&0&1/2\\ 1/3&1/2&0&0 \end{pmatrix}$$ the system of linear equations reads $\underline{x} = P \underline{x}$. The matrix $P$ is a stochastic matrix (each column sums to one), hence $1$ is an eigenvalue of $P$.
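One can verify this numerically. Here is a minimal NumPy sketch (an illustration, not production PageRank code) that recovers the importance vector by power iteration:

```python
import numpy as np

# Column j distributes website j's single "vote" over its outlinks.
P = np.array([
    [0,   0,   1, 1/2],
    [1/3, 0,   0, 0  ],
    [1/3, 1/2, 0, 1/2],
    [1/3, 1/2, 0, 0  ],
])

# Power iteration: since P is stochastic, the iterates converge to an
# eigenvector for the eigenvalue 1 (the importance vector).
x = np.full(4, 1/4)
for _ in range(200):
    x = P @ x

print(x * 31)   # ~ (12, 4, 9, 6), proportional to the eigenvector given below
```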
One of the corresponding eigenvectors is $$\underline{x} = \begin{pmatrix} 12\\4\\9\\6 \end{pmatrix},$$ hence $x_1 > x_3 > x_4 > x_2$.

Let $$G = \alpha P + (1-\alpha)S,$$ where $S$ is a matrix corresponding to purely randomised browsing without links, i.e. all entries are $\frac{1}{N}$ if there are $N$ websites. The matrix $G$ is called the Google matrix. The inventors of the PageRank algorithm, Sergey Brin and Larry Page, chose $\alpha = 0.85$. Note that $G$ is still a stochastic matrix. An eigenvector for the eigenvalue $1$ of $\underline{x} = G \underline{x}$ in our example would be (rounded) $$\underline{x} = \begin{pmatrix} 18\\7\\14\\10 \end{pmatrix},$$ leading to the same ranking.

• 5 — Technically, PageRank was imagined simply as a Markov process, so it is not really different from OP's existing example. However, it is very cool and probably very motivating for students. – Richard Dec 17 '14 at 21:41
• 2 — Today, PageRank is vastly more complicated, with hundreds of "signals" (and new ones added all the time) and revisions to the algorithms - mostly to address the endless war on SEO: en.wikipedia.org/wiki/Search_engine_optimization. – alancalvitti Dec 18 '14 at 15:01
• Hilarious Man!!!!!!! – Naseer Ahmed Dec 20 '14 at 16:13
• 2 — The idea that a massive ad company is supported by mathematics needs to be addressed honestly, especially when teaching young people. Google has explicitly stated for over a decade that PageRank is no longer central to their ranking criteria (LDA was used for a while, and that uses linear algebra). Yet academics continue to claim that e.g. Perron-Frobenius theory is a path to business success. – isomorphismes Dec 28 '17 at 16:36

Another very useful application of linear algebra is image compression (using the SVD).

Any real matrix $A$ can be written as $$A = U \Sigma V^T = \sum_{i=1}^{\operatorname{rank}(A)} u_i \sigma_i v_i^T,$$ where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. Every greyscale image can be represented as a matrix of the intensity values of its pixels, where each element of the matrix is a number between zero and one. For images of higher resolution, we have to store more numbers in the intensity matrix; e.g. for a 720p greyscale photo (1280 × 720) we have 921,600 elements in its intensity matrix. Instead of using up storage by saving all those elements, a truncated singular value decomposition of this matrix requires much less storage. You can create a rank-$J$ approximation of the original image by using the first $J$ singular values of its intensity matrix, i.e. only looking at $$\sum_{i=1}^J u_i \sigma_i v_i^T .$$ This saves a large amount of disk space, but also causes the image to lose some of its visual clarity. Therefore, you must choose a number $J$ such that the loss of visual quality is minimal but there are significant memory savings.

Example: The image on the RHS is an approximation of the image on the LHS by keeping $\approx 10\%$ of the singular values. It takes up $\approx 18\%$ of the original image's storage. (Source)
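A minimal NumPy sketch of this rank-$J$ truncation (illustrative only; the random `img` array stands in for a real greyscale intensity matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((720, 1280))          # placeholder for a real greyscale image

U, s, Vt = np.linalg.svd(img, full_matrices=False)

J = 72                                 # keep ~10% of the singular values
approx = (U[:, :J] * s[:J]) @ Vt[:J]   # rank-J approximation: sum of u_i s_i v_i^T

# Storage: J*(720 + 1280 + 1) numbers instead of 720*1280.
print((U[:, :J].size + Vt[:J].size + J) / img.size)   # ~0.16 of the original
```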
• 6 — That's neat but actually a fake application. No image codec ever used the SVD of full images. Back in the days when digital images were just a few hundred pixels wide, computers were too slow for it to be practical, and nowadays images are too large to perform an SVD on. And anyway, JPEG(2000) and PNG are just way better. – Dirk Dec 19 '14 at 13:23
• Check out this somewhat huge PDF for more example images compressed with SVD. I especially like how, in the extreme case $J = 1$, you can clearly see that the rank of the compressed image matrix is only $1$ -- any column is visibly a scalar multiple of any other column. – Ingo Blechschmidt Dec 21 '14 at 14:39
• 1 — Dirk: I agree that JPEG and PNG are better. The answer at dsp.stackexchange.com/a/7862 is relevant, comparing DCT to SVD. – Ingo Blechschmidt Dec 21 '14 at 14:44
• Relevant: stats.stackexchange.com/questions/177102/… – kjetil b halvorsen Jan 21 '18 at 15:08

This is a simpler example, but maybe that'll be good for undergraduate students: linear algebra is a central tool in 3-D graphics. If you want to use a computer to represent, say, a spaceship spinning around in space, then you take the initial vertices of the spaceship in $\mathbb{R}^3$ and hit them with some rotation matrix every $.02$ seconds or so. Then you have to render the 3-D image onto a 2-D screen, which also involves linear algebra (and stuff with colors and lighting, probably). There are probably graphics packages that do a lot of that work for you these days (I actually don't know that much programming), but the linear algebra is still a pretty good first-order approximation for what the computer is doing.

• 4 — This was the first thing I thought of; in particular, 3D graphics libraries make heavy use of 4x4 matrices to allow for affine transformation of vectors in $\Bbb{R}^3$ – Dan Bryant Dec 18 '14 at 0:03
• @Dan Capturing the affine aspect like that is a cool trick. You learn something new every day. – Sam Dittmer Dec 18 '14 at 5:04
• Matrix algebra is a standard tool used in textbooks about computer graphics, along with quaternions. You only get to use very simple techniques though. I think my knowledge of basic computer graphics actually made it harder for me to understand how general the techniques are. I would expect this to be true for many computer science students (like I was). Math students may not have the same challenge. – Jørgen Fogh Dec 18 '14 at 11:06
• This also works the other way around: when calibrating several cameras of which you want to combine the images into a 3D model, you need to figure out what the transformation matrices between them are. – RemcoGerlich Dec 18 '14 at 15:18
• It's not just important for 3D graphics. It is also important for 2D graphics. It could probably be applied to any xD graphics, but 2D and 3D are the ones most likely encountered. – ThomasW Dec 19 '14 at 4:53

We can also use linear algebra to solve ordinary differential equations.

A linear first-order system of ODEs has the form $$\underline{u}'(t) = A \underline{u}(t) + \underline{b}(t)$$ with $A \in \mathbb{C}^{n \times n}$ and $\underline{b}(t) \in \mathbb{C}^{n \times 1}$. If we have an initial condition $$\underline{u}(t_0) = \underline{u_0}$$ this is an initial value problem. Assuming the entries of $\underline{b}(t)$ are continuous on $[t_0,T]$ for some $T > t_0$, Picard-Lindelöf provides a unique solution on that interval. If $A$ is diagonalisable, the solution of the homogeneous initial value problem is easy to compute: $$P^{-1} A P = \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n),$$ where the columns of $P = \begin{pmatrix} x_1 & \dots & x_n \end{pmatrix}$ are eigenvectors of $A$.
Defining $\tilde{\underline{u}} := P^{-1} \underline{u}(t)$ and $\tilde{\underline{u_0}} = P^{-1} \underline{u_0}$, the IVP reads $$\tilde{\underline{u}}'(t) = \Lambda \tilde{\underline{u}}(t), \; \tilde{\underline{u}}(t_0) = \tilde{\underline{u_0}} =: \begin{pmatrix} c_1 & \dots & c_n \end{pmatrix}^T.$$ These are simply $n$ ordinary, linear differential equations $$\tilde{u_j}'(t) = \lambda_j \tilde{u_j}(t), \; \tilde{u_j}(t_0) = c_j$$ for $j = 1, \dots, n$ with solutions $\tilde{u_j}(t) = c_j e^{\lambda_j(t-t_0)}$. We eventually retrieve $\underline{u}(t) = P \tilde{\underline{u}}(t)$.

Example: We can write $$x''(t) = -\omega^2 x(t), \; x(0) = x_0, \; x'(0) = v_0$$ as $\underline{u}'(t) = A \underline{u}(t), \; \underline{u}(0) = \underline{u_0}$, where $\underline{u}(t) = \begin{pmatrix} x(t)&x'(t) \end{pmatrix}^T$ and $$A = \begin{pmatrix} 0&1\\ -\omega^2&0 \end{pmatrix} \text{ and } \underline{u_0} = \begin{pmatrix} x_0\\ v_0 \end{pmatrix}.$$ Computing eigenvalues and eigenvectors, we find $$\underline{u}(t) = c_1 e^{i \omega t} \begin{pmatrix} 1\\ i \omega \end{pmatrix} + c_2 e^{-i \omega t} \begin{pmatrix} 1 \\ -i \omega \end{pmatrix}.$$ Using the initial condition, we find $x(t) = x_0 \cos(\omega t) + \frac{v_0}{\omega} \sin(\omega t)$.

Matrix exponential: I don't know if your students are already familiar with the matrix exponential, but using it we find that a solution of the homogeneous initial value problem is given by $$\underline{u}(t) = e^{(t-t_0)A} \underline{u_0}.$$ To solve the inhomogeneous differential equation, we can use variation of constants. Since every solution of the homogeneous system $\underline{u}'(t) = A \underline{u}(t)$ is of the form $\underline{u}(t) = e^{tA} \underline{c}$ for some constant vector $\underline{c}$, we set $\underline{u_p}(t) = e^{tA} \underline{c}(t)$ and find by plugging in $$\underline{c}'(t) = e^{-tA} \underline{b}(t).$$ Integrating this and adding the homogeneous solution yields $$\underline{u}(t) = e^{(t-t_0)A} \underline{u_0} + \int_{t_0}^t e^{(t-s)A} \underline{b}(s) \, \mathrm ds.$$

The restricted isometry property (RIP) of matrices is something not too hard for undergraduates to understand: it means that a (rectangular) matrix $A$ satisfies $$ (1-\delta) \|x\| \le \|Ax\|\le (1+\delta)\|x\| \tag{1}$$ for all vectors $x$ with at most $s$ nonzero components. The constant $\delta$ should be small, and of course independent of $x$. The number $s$ is strictly less than the dimension of the domain of $A$ (its number of columns). This means that $A$ encodes sparse vectors with little distortion to their norm. The fact that "fat" RIP matrices exist (with the number of their rows less than the number of columns) is not obvious, and there is no easy deterministic algorithm to construct them. But suitably random matrices are known to satisfy RIP with high probability. The use of such matrices is essential in the work of Candes and Tao from about 10 years ago, which formed the mathematical foundations of compressed sensing, a novel signal processing technique now applied in MRI and other areas where making a large number of measurements is expensive.

Powers of a graph's adjacency matrix can be used to calculate the number of walks of length $n$ from one vertex to another. In particular:

Proposition. For any graph formed of vertices connected by edges, the number of possible walks of length $n$ from vertex $V_i$ to vertex $V_j$ is given by the $i,j^\text{th}$ entry of $A^n$, where $A$ is the graph's adjacency matrix.

This proposition is proved by induction.
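Before looking at examples, here is a quick NumPy check of the proposition on the triangle graph (an illustration only):

```python
import numpy as np

# Adjacency matrix of the triangle: vertices 0-1-2, all pairs connected.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

A3 = np.linalg.matrix_power(A, 3)
print(A3)
# A3[0, 0] == 2: the two closed walks of length 3 from vertex 0
# are 0 -> 1 -> 2 -> 0 and 0 -> 2 -> 1 -> 0.
```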
The nicest examples you could inspect are the simple triangle and square... and unit cube. For nice problems on the basics, you might check Exercises 1.2.15-17 in Hubbard$^2$. You can also make a matrix that allows for directed edges; for some examples via exercise, see Exercises 1.2.18-19 in Hubbard$^2$. This has applications not only to graph theory, but also to computers and multiprocessing.

• 1 — In addition, eigenvector centrality is used with adjacency matrices. There is a lot of linear algebra associated with graphs! – Ryan Dec 17 '14 at 23:38
• 1 — Something else which I like is that the dimension of the nullspace of the graph Laplacian $D - A$ is equal to the number of connected components of $G$ ($D$ is the diagonal matrix with the degrees of the vertices on the diagonal, and $A$ is the adjacency matrix). Approximate versions of this have a lot to do with sparse cuts and from there with many applications (e.g. finding communities in social networks) – Sasho Nikolov Dec 19 '14 at 23:52

Stoichiometry (not that our students would ever stoop to linear algebra for it) is a very elementary place where it shows up. Quantum mechanics is an advanced place. Linear programming is ubiquitous in business applications, as is game theory in economics and political science, and a lot of game theory is based on linear algebra and Markov processes. Least squares and $A^\top A$ show up all over statistics and econometrics. The stoichiometry issue raises a kind of question that also shows up in graph theory (which presumably is of interest in some applications): given an integer matrix, what is the integer vector with "smallest" entries in its kernel? When is there a vector with just $\pm 1$? When is there a vector with all nonnegative entries? Etc.

I worked as a software engineer for 27 years for a large defense corporation. They used finite element software tools to model spacecraft designs for stress tests, the amount of construction material required, simulated launch testing, etc. Finite element theory is based on a matrix of vectors that describes the connections and forces on the elements of a structure. It also applies to bridges and other civil engineering structures. It may also apply to thin shell models, but I never worked with those.

Another matrix application was to perform text searches in document troves. This involves a lot of natural language approximation. We created a taxonomy (word list) of interesting words for a particular application. Then we used a normalization process to define a word basis as the core of a word, so that "report", "reports", "reporting", and maybe "reporters" would all count for the word "report". Then create a vector for each document, based on the count of each normalized taxonomy word in the document. Then create a vector from your requirement description text, or even a subset of interesting taxonomy words. Take the dot product of the requirement text vector with the text vector of each stored document. The documents with the highest dot-product values are closely related to your text search criteria.
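A toy version of this dot-product ranking (the taxonomy, counts, and query here are entirely made up for illustration):

```python
import numpy as np

taxonomy = ["report", "engine", "stress", "launch"]

# Hypothetical normalized word counts: one row per stored document.
docs = np.array([
    [4, 0, 1, 0],   # doc 0: mostly "report"
    [0, 3, 2, 1],   # doc 1: engineering-heavy
    [1, 1, 0, 5],   # doc 2: launch-related
])

query = np.array([0, 2, 1, 0])   # counts taken from the requirement text

scores = docs @ query            # one dot product per stored document
print(np.argsort(scores)[::-1])  # documents ranked by relevance: [1 2 0]
```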
It is a "real" application although outdated but the students do love playing with it. Using some sort of a encoding, convert a message to a bunch of numbers, and then a key matrix is chosen. The message can be organized into an array and multiplied by the key for encryption. And the encrypted message can be multiplied by the inverse of the key matrix to be decrypted. It is fun to give messages to students and them encrypt/decrypt them. It is also very simple to ask them to break a scheme and figure out the key by giving them a couple of plaintext/ciphertext pairs. You can also demonstrate some man-in-the-middle attacks and how a malicious eavesdropper can change the contents of the message without having to know the key at all. If you want to be a bit more advanced, you can also do modular arithmetic so that you can take the English language and add any three symbols like space, comma, and a period to have a total of 29 symbols and then work modulo 29. 29 is a prime number so it avoids a whole bunch of invertibility issues. Matrix arithmetic in a finite field like Z/29Z illustrates some concepts very clearly. A smaller modulus will make it easier to do this by hand if you want to try that. Anything to do with scheduling and maximising linear systems: an airline scheduling planes and pilots to minimise the time airplanes are stationary and pilots are just sitting around waiting is an example. Linear optimisation saves millions if not billions of dollars each year by allowing companies to allocate resources optimally, and it's basically an application of linear algebra. • 2 $\begingroup$ Wouldn't such scheduling problems be examples of linear programming, rather than linear algebra? $\endgroup$ – Jørgen Fogh Dec 18 '14 at 11:53 • 3 $\begingroup$ @JørgenFogh I would say that linear programming is an application of linear algebra. $\endgroup$ – Johanna Dec 18 '14 at 15:45 A standard application in undergraduate physics is to find the principal axes of inertia of a solid. For any solid object, we can construct an inertia matrix $I_{ij}$, which is a 3x3 symmetric matrix describing how the object's mass is distributed in space, with respect to a specific Cartesian reference frame $\{x_1,x_2,x_3\}$. Its diagonal elements $I_{ii}$, or moments of inertia are defined as $$I_{ii} = \int (r^2- x_i^2) \rho(\vec{r})dV$$ and its off-diagonal elements, or products of inertia are $$I_{ij} = -\int x_ix_j \rho(\vec{r}) dV$$ (here $\rho(\vec{r})$ is the density of the object at point $\vec{r}$ in space). The principal axes of inertia are the eigenvectors of this matrix (which form an orthogonal set, since $I$ is symmetric). Without trying to explain the physics itself here (check out here for a simple description, or here for more details), the special thing about the principal axes is that they are the only ones about which the body will freely rotate in the absence of any external forces or torques. (Finite dimensional) quantum mechanics, if this is considered "outside of maths": The basic postulates of quantum mechanics read: • The quantum system is described by a Hilbert space (make this a complex vector space in finite dimensions). • "observables" are Hermitian operators (read: symmetric/self-adjoint matrices in finite dimensions) and measurement outcomes are given by eigenvalues, hence you need to know that every Hermitian matrix is diagonalizable with real eigenvalues (spectral theorem). 
• Composite systems are given by the tensor product (this might not be "elementary", but still, it's pure linear algebra).
• Time evolution is given by the Schrödinger equation.

The famous "bra-ket" notation of Dirac used in virtually all textbooks on quantum mechanics is all about inner products and the duality between a vector space and its dual. Of course, most of quantum mechanics needs infinite-dimensional systems and thus functional analysis, but quantum computing and quantum information routinely consider finite-dimensional systems (the Hilbert space for qubits is just $\mathbb{C}^2$, and the most important matrices are the Pauli matrices). In short: you can't do quantum mechanics without knowing the spectral theorem, and in quantum information other norms, such as the Schatten $p$-norms deriving from the singular values, are frequently used. In order to be able to understand their merit, you need to know what the SVD is.

Using Vandermonde matrices, one can show that for any $k$ and any $n$, there exist $k$ points in general position in $\Bbb R^n$. Indeed, given $k$ (assuming $k>n$), pick $k$ distinct real numbers and consider ${\bf v}_i=(r_i,r_i^2,\ldots,r_i^{n})$ for $i=1,\ldots,k$, and use Vandermonde's determinant to prove the claim.

• 6 — Outside of math? – user147263 Dec 17 '14 at 20:51
• @Behaviour BLERGH; I missed that. – Pedro Tamaroff Dec 17 '14 at 20:51

I like the Google PageRank and adjacency matrix points. Linear algebra is a deep subject that is readily connected to computer science, graph theory, and combinatorics in unexpected ways. The traditional connection is with numerical analysis. However, linear algebra is also closely related to graph theory. There is a field known as algebraic graph theory which utilizes vector spaces, spectral theory, and group theory to study graphs. The idea of linear independence is closely related to the property of a graph being acyclic, and so finding a basis is like finding a spanning tree in a graph. This idea is formalized quite nicely with matroid theory. A matroid $\mathcal{M}(G, I)$ is a construct with a ground set $G$ and a family of independent sets $I \subset 2^{G}$, where $H \in I$ iff $H \subset G$ is independent. If $G$ is a set of vectors, then $I$ contains all the linearly independent subsets. Similarly, we can let $G$ be the edge set of a graph, and $I$ contains all subsets of edges that don't form a cycle. If you weight the elements of the ground set and sort them, you can construct a greedy basis. Observe that Kruskal's algorithm is an instance of this greedy-basis approach, applied to graphs.

Matroids also come into play relating linear independence to collinearity on the Fano plane. That is, we don't want three vertices on the same line on the Fano plane. If the vertices of the Fano plane are weighted, we can label them with vectors from $\mathbb{F}_{2}^{3}$ such that any three vectors are linearly dependent iff their vertices are on the same line of the Fano plane.

Vector spaces over graphs are nice to explore as well. Cycle space and cut space are the common ones; they are over the field $\mathbb{F}_{2}$ with addition being the symmetric difference operation. MacLane's planarity criterion is based on the cycle space.

Spectral graph theory is another neat topic. Once you have the proof that powers of the adjacency matrix count walks on the graph, the spectral theorem lets you evaluate them through eigenvalues: by diagonalization, $\operatorname{tr}(A^k) = \sum_i \lambda_i^k$ counts the closed walks of length $k$.
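For example, a quick NumPy check on the complete graph $K_4$, using $\operatorname{tr}(A^2)/2$ for edges (each edge gives a closed 2-walk from each endpoint) and $\operatorname{tr}(A^3)/6$ for triangles (each triangle yields six closed 3-walks):

```python
import numpy as np

# Complete graph K4: 6 edges and 4 triangles.
A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)

lam = np.linalg.eigvalsh(A)      # eigenvalues 3, -1, -1, -1
print(np.sum(lam**2) / 2)        # ~ 6.0, the number of edges
print(np.sum(lam**3) / 6)        # ~ 4.0, the number of triangles
```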
You can easily count triangles and edges using eigenvalues in exactly this way. The diameter of a connected graph is also at most one less than the number of distinct eigenvalues of its adjacency matrix. There are other neat spectral properties of graphs.

Optimization was mentioned above. Both linear and integer programming techniques rely heavily on linear algebra. You can do a lot of economics here. You can also formulate the network-flow problem as an LP, whose dual is the min-cut problem.

Computer science has a lot of applications!

1. Manipulating images.
2. Machine learning (everywhere). For example: multivariate linear regression, $(X^TX)^{-1}X^{T}Y$, where $X$ is an $n \times m$ matrix and $Y$ is an $n \times 1$ vector.

Linear algebra is widely used in the optimization of particle accelerators. These machines (either rings or linear machines) are composed of a number of electromagnetic elements which are far from perfect and far from perfectly aligned. As a result the beam is not steered exactly as intended. To correct for this you have a set of beam position monitors (BPMs), which tell you how much the beam is displaced, and a set of corrector magnets, which can adjust the beam trajectory. Depending on the size of the machine their number can reach a few hundred, and often the number of correctors is lower than the number of BPMs. They are placed all around the machine, but they are useless if you are not able to find a proper configuration of the correctors! A popular and effective technique consists in measuring the response matrix $R$, which tells you how each BPM responds to each corrector through a simple product: $$b = R\;c$$ where $b$ and $c$ are vectors containing the values of the BPMs and correctors. Now all that we need to do is determine the configuration of correctors that minimizes the excitation of the BPMs (this means that the beam is passing closer to their centres). Very often the procedure goes through an SVD, which greatly simplifies the computation of the minimum. Once properly implemented in the control system, this and slightly more advanced techniques are extremely fast and far more effective than manual/empirical optimization.

Affine geometry works well for smoothly fitting curves and surfaces to a set of desired data points. Affine geometry builds on top of linear algebra and is essential to any engineering that combines absolute and relative quantities for both direction and displacement. Engineering the moving parts of a vehicle combines absolute and relative motion with both rotational and non-rotational movement.

I use matrix operations when working with neural networks. It helps to represent the weights and inputs of a neural network as matrices, which also gives me a way to parallelize the computations across different processors or processor cores. Before using matrix operations I didn't know how to spread neural-network calculations across parallel hardware.

• Welcome to the site, Yura! Consider expanding your answer about how linear algebra was applied in your case. – Mark Fantini Dec 18 '14 at 18:15

In game development, basic linear algebra pops up all the time. This is some code I wrote this morning:

```csharp
// Mouse position expressed in this object's local coordinate frame (Unity C#).
var toMouse = transform.InverseTransformPoint(Screen.mousePosition);
// Angle between toMouse and a stored edge vector, via the dot-product
// identity cos(alpha) = a.b / (|a| |b|).
var cosOfAlpha = Vector3.Dot(toMouse, _toSameEdgeBind) / (toMouse.magnitude * _sameEdgeMagnitude);
var alpha = Mathf.Acos(cosOfAlpha);
```
Panati, Gianluca; Spohn, Herbert; Teufel, Stefan (2002). "Space-adiabatic perturbation theory in quantum dynamics". American Physical Society. http://hdl.handle.net/1963/5985

Abstract: A systematic perturbation scheme is developed for approximate solutions to the time-dependent Schrödinger equation with a space-adiabatic Hamiltonian. For a particular isolated energy band, the basic approach is to separate kinematics from dynamics. The kinematics is defined through a subspace of the full Hilbert space for which transitions to other band subspaces are suppressed to all orders, and the dynamics operates in that subspace in terms of an effective intraband Hamiltonian. As novel applications, we discuss the Born-Oppenheimer theory to second order and derive for the first time the nonperturbative definition of the g factor of the electron within nonrelativistic quantum electrodynamics.
PowerPedia: Quantum Ring Theory at Temple University

Temple University (Philadelphia, USA) holds the Center for Frontier Sciences, responsible for the publication of Frontier Perspectives, a semiannual journal.

Quo Vadis Quantum Mechanics?

Extracted from Frontier Perspectives¹:

In September 2002, the Center for Frontier Sciences held one of its most successful international workshops, titled “Quo Vadis Quantum Mechanics? Possible New Developments in Quantum Theory in the 21st Century”. Sixteen eminent physicists and philosophers presented findings, ideas and speculations concerning the future revolution in physics, in the light of quantum mechanics’ intriguing revelations.

The prestigious publishing house Springer Verlag has undertaken the publication of a book based on the lectures and panel discussions held during that momentous workshop. Quo Vadis Quantum Mechanics? (Elitzur, A. C., Dolev, S. and Kolenda, N., editors) is now available and is a prominent addition to Springer-Verlag’s new collection, “The Frontier Series”. Contributing authors include Nobel Laureates, and the book opens with a foreword.

Quantum Ring Theory and Quo Vadis Quantum Mechanics are two rival books. Both present findings and ideas concerning the future revolution in physics. The main difference between the two books lies in the following:

a) Quo Vadis Quantum Mechanics has been written keeping the fundamental foundations of Quantum Mechanics.
b) Quantum Ring Theory (QRT) argues that some fundamental principles are missing from Quantum Mechanics, and that others must be replaced.

According to QRT, among the fundamental concepts missing in Quantum Mechanics is a new model of the neutron, required by any cold fusion theory candidate to explain cold fusion experiments. Therefore, the fundamental difference between the two rival books lies in the fact that in Quo Vadis Quantum Mechanics the existence of cold fusion is neglected, while in Quantum Ring Theory the occurrence of cold fusion is taken into consideration.

The neglect of cold fusion research by the theorists who collaborated on Quo Vadis Quantum Mechanics can be summed up in this sentence of Dr. ’t Hooft from 2001 concerning Borghi’s experiment: “There is much more wrong with n=p+e, but most of all the fact that the ‘experimental evidence’ is phony”².

From the principles of Quantum Mechanics, cold fusion is impossible, as stated by the Nobel Laureate Murray Gell-Mann at a public forum (lecture at Portland State University in 1998): “It’s a bunch of baloney. Cold fusion is theoretically impossible, and there are no experimental findings that indicate it exists”³. Many other theoretical objections to the viability of cold fusion can be seen in the Wikipedia article Cold fusion.

So, concerning possible new developments in quantum theory in the 21st century, the book Quo Vadis Quantum Mechanics does not take into consideration the theoretical implications that the occurrence of cold fusion would require; the book Quantum Ring Theory does.

Quantum Ring Theory quoted in Frontier Perspectives

Quantum Ring Theory, the rival book of Quo Vadis Quantum Mechanics, is now quoted in Frontier Perspectives⁴:

Guglinski, Wladimir.
(2006) Quantum Ring Theory: Foundations for Cold Fusion. Boulder, Co.: The Bäuu Institute Press.

In Quantum Ring Theory Guglinski presents a new theory concerning the fundamental nature of physics. Here, the author argues that the current understanding of physics does not provide an accurate model of the world. Instead, he argues that we must consider the “aether”, a notion originally developed by Greek philosophers; by considering the nature of the “aether” and its role in physical processes, Guglinski is able to create a theory that reconciles quantum physics with the theory of relativity. As part of his new theory, Guglinski showcases a new model of the neutron, and this model has been confirmed by contemporary physical experiments.

The problem of the spin of the electron in the nucleus

A magazine reviewer wrote the following about the solution proposed in QRT concerning the electron’s spin within the nucleus:

“The basic question here is: can a classical model (which postulates a trajectory for the electron) cast any light on the inner workings of the nucleus? Most physicists would respond with a resounding NO. However, it generally happens that classical models have quantum analogs and thus can prove suggestive in at least a qualitative way. For instance, without the classical Hamiltonian energy expression there would be no clue to how to write the Schrödinger equation. And the classical energy expression would not exist without trajectory pictorialization. Therefore one cannot reject Guglinski’s ‘helical trajectory’ model (or similar models due to Bergman and others) out of hand as useless to physics. We don’t know what the final physics will be, if any. Moreover, Guglinski’s model may solve the problem of the spin of the electron in the nucleus.”

Experiments that confirm the new neutron model of QRT

The new model of the neutron n=p+e proposed in Quantum Ring Theory is confirmed by the following experiments:

- Don Borghi’s experiment⁵
- The Conte-Pieralice experiment⁶
- Taleyarkhan’s experiment⁷

Additional experiment: The fundamental background of Guglinski’s new model of the neutron n=p+e is the solution proposed in Quantum Ring Theory for a question considered insurmountable by most quantum theorists: how to reconcile a model of the neutron formed by a proton and an electron with Fermi-Dirac statistics. In QRT the Fermi-Dirac statistics is reconciled with the neutron model n=p+e through the spin-fusion hypothesis, which received experimental corroboration in 2006 from the ARPES experiment⁸ (angle-resolved photoemission spectroscopy) performed by the staff of Dr. Changyoung Kim of Yonsei University, who succeeded in separating the charge and the spin of an electron.

References:
1. N. Kolenda, From the editor’s desk, Frontier Perspectives, V. 14, No. 1, 2005
2. W. Guglinski, Quantum Ring Theory, pg. 3, Bäuu Press, 2006
3. E. Mallove, CSICOP: “Science Cops” at War with Cold Fusion, Infinite Energy, V. 4, No. 23, 1999
4. N. Kolenda, New books received, Frontier Perspectives, V. 16, No. 1, 2007
8. First direct observations of 'spinons' and 'holons' seen after 40-year hunt

See also: PowerPedia:Quantum Ring Theory; PowerPedia:Quantum Ring Theory burnt in a Brazilian university
On the Road to a Quantum Computer

By Ben Deen, February 26, 2009

A computer in which the carriers of information behaved according to quantum mechanics could yield tremendous gains in processing speed. While today’s computers must feed through input strings one by one, quantum mechanics could allow algorithms to be executed on all possible input strings at once. Progress toward constructing a quantum computer, however, remains in its infancy. Computations have been performed on systems of eight “quantum binary digits” (“qubits”), but these have been nowhere near modern computers’ performance. A collaboration led by Yale professors Rob Schoelkopf, Michel Devoret, and Steven Girvin is developing a method of communication between qubits, which could be extended to systems involving more qubits working at greater distances. Known as a “quantum bus,” this system sends information back and forth from one stationary circuit-based qubit to another via the exchange of a single photon resonating in a cavity, like a light wave bouncing between mirrors. Alternatively, information is sent from a stationary qubit to a single traveling photon, with information encoded in the probabilities of the existence and nonexistence of this moving packet of energy.

Quantum Aims and Obstacles

In quantum mechanics, observable quantities in physical systems don’t necessarily take determinate values, but are instead distributed probabilistically over a finite or infinite number of values. The position of a particle, for instance, is given by a probability distribution over three-dimensional space, not a fixed value. This critical distinction between the quantum and classical worldviews is ultimately what gives quantum computation an edge over current methods of computation. Classical computers store information as bits, registers that hold one of two determinate values, but quantum computers store information in the probabilities that a bit holds the value 0 or 1 (known as “quantum information”). These qubits are far harder to manage physically than classical bits, and there are countless possible physical manifestations of qubits. In Schoelkopf’s words, “Anything that is quantum mechanical—and we believe that everything is—can be a quantum computer.” Any quantum system whose energy can take one of two discrete values is a potential qubit: for instance, the existence or nonexistence of a photon, or the alignment or antialignment of the magnetic moment of a nucleus with an external magnetic field. Yale researchers use circuit-based qubits: imagine a sea of electron pairs surrounding a thin barrier (about 10 atoms thick). In a superconducting circuit, a single electron pair may tunnel through the barrier to give a charge distribution of higher energy.

Why use probabilistic information at all? Superficially, this seems to just complicate matters or jeopardize the precision that we expect of a computer. While the former may be true, computing with qubits opens a range of possibilities unattainable with classical bits. In classical computation, to see how an algorithm acts on a set of input strings, we would have to feed each one through sequentially; in quantum computation, we can take a probabilistic combination of all possible inputs and immediately obtain a combination of all corresponding outputs. How do we extract this rich probabilistic information?
When we measure a qubit’s state, we find either 0 or 1 with certain probabilities, but the qubit subsequently “collapses” to yield a single classical bit of information from each measurement. One difficulty in quantum computation is that we must measure a large number of copies of each qubit to know its full probabilistic original state; in practice, one obtains a solution within some probability of error, which can be reduced by repeating the computation. Luckily, clever algorithms can circumvent these issues. For instance, Shor’s factorization algorithm, the most promising quantum algorithm, factors a number into primes at a much higher speed than any known algorithm running on classical bits. Using classical bits, the time required by the best known algorithms grows nearly exponentially with the number of digits of the number being factored. With qubits, this time can be made proportional to the cube of the number of digits, which grows far more slowly. However, the possibility of cracking modern cryptography methods using quantum computing remains distant: the most intensive task handled so far has been using Shor’s algorithm to factor the number 15 on a seven-qubit system.

What prevents us from building a quantum computer today? One fundamental problem is that qubits’ states are extraordinarily fragile. Ideally, a qubit would remain exactly as we leave it, isolated from all unwanted or unknown influences (known as “decoherence”) which might alter its state and thus its information. But in the world described by modern physics, nothing is truly in isolation; sustaining a specific quantum state in a qubit over time poses a formidable engineering problem. In the context of quantum computation, decoherence is a source of noise, threatening the memory of our device. A second major problem is the difficulty of wiring qubits together into a functional architecture. To function as parts of a computer, qubits must interact in a prescribed way; we must be able to put a qubit in a specific state, manipulate it as we wish, and ultimately retrieve the final state. Progress on building a quantum computer must overcome the tension between these two requirements: we want our qubits isolated from any interaction to preserve their information, but we want them to interact very strongly in specific ways with other parts of the computer. As the size of our computer increases, there are increasingly many outlets by which information can be leaked to the environment in uncontrolled ways. This quandary of assembling qubits that can work together is called the problem of scalability.

For qubits based on manmade devices, the major obstacle is maintaining coherence—sustaining the probabilistic superposition of a qubit’s state. It is easier to craft a means to connect qubits if you’ve built them yourself. For qubits found in nature (e.g. a particle’s spin), the opposite is true. Schoelkopf explains that these qubits “are great because they have very long lifetimes and you can do very good quantum control. The problem is, it’s very hard to wire them up, to string them together in a complicated arrangement.” Current research trends lean toward solid-state devices such as superconducting circuits. Experts agree that computation with “god-given” qubits probably won’t be able to breach the scale of the 7-8 qubit systems that have been built thus far. Devoret notes, however, that they could perhaps be used even in a large-scale quantum computer as a form of permanent memory. The recent work at Yale is a step toward constructing an integrated, many-qubit system.
Prior methods of exchanging quantum information have relied on close proximity between qubits, but the photon exchange method described below could potentially couple distant qubits, crucial for a many-qubit architecture.

Quantum bits and gates basics

First, it will be useful to detail the general mechanisms by which qubits are used to store and manipulate information. The fundamental distinction between classical and quantum bits manifests itself in the space of possible states for each. In classical bits, this space has two elements, 0 and 1. In quantum mechanics, for each possible state there is some complex number whose magnitude squared is the probability of that state. This can be a continuous complex “wavefunction” (e.g. for position), or a discrete set of numbers (e.g. two numbers for a qubit). A general complex number is one real number plus another real number times i, so one complex number can be represented as a point in the plane. It is convenient to specify complex numbers by their distance from the origin (magnitude) and their angle in the plane (phase). The space of two complex numbers (two planes) has four real dimensions, but for qubits the squared magnitudes must add to one to give a probability distribution (which removes one dimension). Rotating both numbers in the complex plane gives an identical single-qubit state, where only the relative magnitude and relative phase matter, leaving two continuous real dimensions. This space, mathematically defined as the set of two complex numbers up to a complex multiple, is isomorphic to the space of points on the surface of a figure known to physicists as the Bloch sphere (Fig. 1).

Fig. 1: A qubit’s state is specified by a point on a sphere. The relative phase of the qubit’s weights is φ. The magnitude of the “1” state is cos θ, the magnitude of “0” is sin θ, and the corresponding probabilities are cos²θ and sin²θ. At the north pole the qubit holds 1, at the south pole it holds 0, and on the equator it’s 50/50.

The relative magnitude of the two probabilities corresponds to latitude on the Bloch sphere; the relative phase is given by the longitudinal angle. For spin-based qubits, whose two states have the particle’s magnetic moment aligned and anti-aligned with an external field in the z direction, position on the Bloch sphere corresponds to the mean measured direction of the particle’s actual angular momentum.

In both classical and quantum computation, bits are manipulated by sending them through “gates.” For instance, classical NOT and OR gates take one and two bits, respectively, and output one. Quantum gates, however, always input and output the same number of bits and are always reversible: one can deduce the input from the output. This is because, according to quantum mechanics, present wavefunctions are mapped to future wavefunctions by a class of mappings called “unitary transformations,” and these are always invertible. A one-bit quantum gate corresponds simply to some rotation of the qubit’s position on the Bloch sphere; rotating one or both of the qubit’s weights in the complex plane corresponds to linear, normalization-preserving transformations, and these correspond to changes in Bloch sphere longitude (or latitude). Mathematically, this means varying their magnitudes while keeping the sum of squared values equal to one, like the sine and cosine functions.
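As a small numerical illustration of these one-bit gates (a sketch using the cos θ / sin θ weight convention of Fig. 1, not code from the Yale experiments):

```python
import numpy as np

theta, phi = np.pi / 3, np.pi / 4
# Qubit state [weight of "1", weight of "0"] with relative phase phi.
psi = np.array([np.cos(theta), np.sin(theta) * np.exp(1j * phi)])
print(np.sum(np.abs(psi)**2))     # 1.0: the probabilities sum to one

# A one-bit gate is a unitary map; e.g. a rotation that changes latitude.
alpha = np.pi / 8
U = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
psi2 = U @ psi
print(np.sum(np.abs(psi2)**2))    # still 1.0: unitaries preserve normalization
```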
Changes in longitude (relative phase of the weights), which leave the qubit with the same probability of holding 0 or 1, are easy to obtain. We both predict and observe that the state of any two-level quantum system constantly rotates longitudinally on the Bloch sphere as it evolves in time. For a charged particle with spin in a magnetic field, the Bloch sphere axes are real spatial axes and this rotation is the phenomenon of spin precession, crucial in nuclear magnetic resonance; but remarkably, the result follows directly from the Schrödinger equation and applies to all two-level quantum systems, which come in a staggering range of physical varieties. But to manipulate a qubit in a computationally useful manner, we must also alter its probabilities. We can accomplish this by perturbing the system with a force that oscillates at the system’s resonant frequency, proportional to the energy difference between the two levels. Driving at the resonant frequency causes the latitude of the qubit to oscillate sinusoidally as it absorbs and emits energy. Its position on the Bloch sphere spirals about, spinning longitudinally while bouncing up and down—both a theoretical and an experimental result. By controlling the duration or strength of the driving, we can control the rotation angle to manipulate the probabilities that the qubit will be in either state.

In a quantum computer, ideally, many qubits would act in concert. The first step toward this coordination is the two-bit quantum gate, an operation performed on two qubits such that the final state of one depends on the state of the other. There are countless conceivable two-bit gates, but the one which has received the most attention among physicists trying to build a quantum computer is the CNOT gate. In this gate, the first bit is flipped if the second bit is 1, but left the same if the second bit is 0, or vice versa. Another novel feature of quantum computation is the universality of the CNOT gate: CNOTs can be combined to produce any other possible gate, and thus in principle execute any terminating computation. To construct a quantum CNOT, two qubits must be coupled so that the frequency of one depends on the state of the other. After this, we can sinusoidally drive the first qubit at a frequency that is only just right if the second bit is in a certain state. Since a qubit’s natural frequency is proportional to the energy gap between its two states, we must arrange for this energy to be influenced by the state of the second bit. The mechanism by which this is accomplished depends on the type of qubit. For spins, coupling of magnetic moments can be used; for charge-based superconducting circuit qubits in proximity, the charge distribution across one will alter the energy required to move a charge across the other.

The quantum bus

Coupling mechanisms developed so far have relied on the physical proximity of the qubits. The primary purpose of the “quantum bus” is to enable long-distance coupling of quantum states by sending information-carrying photons between qubits. In two 2007 Nature articles, the Yale researchers described their progress toward such a device. The first study demonstrated a mechanism to transfer information from a charge-based qubit to a moving system whose two states are the existence or absence of a photon. This was the first step toward a mechanism for long-distance quantum information transfer—taking the information in a qubit with fixed position and putting it in a traveling photon (or “flying qubit”).
However, a long-distance two-bit gate also requires the reverse task: putting this information into another charge-based qubit. This was demonstrated in the second paper: quantum information was sent back and forth between two charge-based qubits with a resonating photon (a standing electromagnetic wave) bouncing between them.

The quantum bus is the union of several carefully engineered components, each developed independently yet working together. The backbone and transport mechanism of the device is an electromagnetic chamber known as a “cavity,” which houses light waves with amplitude reaching to the walls of the cavity (Fig. 2). It is manifested as a stretch of winding wire about 0.01 mm wide and 12 mm long.

Fig. 2: A photo of the very small experimental setup. To the right is a penny.

When current passes through normal wires, the ripple in the electromagnetic field extends outside of the wire. In a coaxial cable (like those that bring information to televisions), this electromagnetic signal is confined within a tube of metal surrounding a central wire. The tiny cavity in the quantum bus is a 2-D version of the coaxial cable called a coplanar waveguide: electromagnetic waves are trapped within a long, skinny, winding box of metal housing a central wire. The central wire is clipped on either end to give two breaks in the wire; these act as mirrors for the electromagnetic waves, so they bounce back and forth as standing waves. Note that in the first experiment, the properties of the two gaps are offset so that the photon exits traveling out one side instead of resonating.

A photon resonating or traveling in the cavity corresponds to electrons’ motion in the wires. According to Girvin, “You can either think of that as charges sloshing back and forth in the wires, or you can think of it as photons traveling down through the empty space between the wires.” Normally, when electrons move in a wire, they bump into atoms in the surrounding metal, transferring energy which is ultimately radiated as heat and lost to the system. This corresponds to the resistance of the wire, and is why a circuit must be continually fed with energy to maintain a current. However, electrons in the quantum bus cavity are “superconducting”; that is, they travel in pairs and do not lose energy by crashing into atoms. Circumventing this energy loss is crucial for maintaining the coherence of quantum information held in the circuit. All uncontrolled routes of energy loss are sources of decoherence, so much of the effort in building a quantum computer goes into finding and suppressing these forms of “relaxation.” Generally speaking, irreversible computations must dissipate energy as heat, adding entropy to the universe. However, the converse is also true: if quantum computation operations are reversible, they need not dissipate energy. Also, superconductivity is a fundamentally quantum phenomenon, so small superconducting circuits are well-suited to house a system of qubits. Because metals typically superconduct only at very low temperatures, the quantum bus is cooled to 20 millikelvin. Low temperature is important for sustaining quantum information: at high temperatures a qubit will be more likely to randomly receive energy from its surroundings and end up in the excited state.

How is a superconducting circuit used as a system of qubits? In small enough circuits, degrees of freedom such as the charge across a capacitor must be treated as quantum (probabilistic) variables.
One superconducting circuit element that is intrinsically quantum mechanical is the Josephson junction, a thin (~10-atom) barrier that makes a break in a superconducting wire like a capacitor, but thin enough that electrons can tunnel through and jump to the other side, allowing current to flow. Yale researchers have developed a Josephson-junction-based qubit called the transmon that is designed to be easily integrated into a larger circuit like the aforementioned cavity. In this design, two Josephson junctions connect either side of a small loop of metal onto a much longer wire. The small loop, called an “island,” can receive any number of electrons from the enormous amount in the wire via tunneling; each electron on the island beyond the equilibrium charge distribution gives a quantized higher energy state. We can treat the lowest two states of this system as a qubit if two conditions are met: (1) the temperature is sufficiently low that random excitations are unlikely to occur; (2) the energy gap between the lowest two levels differs from the gaps between other pairs of levels, so that driving at the proper frequency will cause transitions only between the lowest two states.

In the quantum bus experiments, the “longer wire” for these qubits was the 2-D coaxial-esque wire described above. One or two qubits sit inside the cavity, coupled via electromagnetic interaction so that the existence or absence of a resonant photon in the cavity depends on their energy state. In the first study, a single transmon qubit was placed in the cavity. An input pulse—an electromagnetic wave shaped like a normal distribution—is sent in at one end of the waveguide to excite the qubit, which then releases the energy as a photon exiting at the other end. The bell-curve shape is given by a sum of many photons of different frequencies, including the right frequency to excite the transmon. Energy is absorbed at only this resonant frequency, and the rest of the wave passes through. The oscillation rotates the qubit’s state, changing its latitude, and the angle of this rotation increases linearly with the strength of the input pulse. Maximum latitude corresponds to the absorption of a single full packet of electromagnetic energy (a photon), and lower latitudes correspond to some smaller probability of absorption. Shortly after absorbing the photon (but after the rest of the input pulse has left), the qubit emits a probabilistic signal, a superposition of the existence and absence of an emitted photon, with precisely the same quantum information as the stationary qubit had before. This measurement was made for a wide range of input pulse strengths, and the latitude of the output signals varied like the sine of the pulse strength.

Besides demonstrating quantum information transfer from a stationary circuit element to a moving photon, it is remarkable that this system is able to generate a single, discrete photon as output. Electromagnetic devices typically emit many photons at once; e.g. a cell phone emits about 10²³ microwave photons per second. Single-photon signals are necessary to encode individual quantum states; the low temperature and energy of the system contribute crucially to the capability of generating single microwave photons, which have extremely low energy compared to other parts of typical circuits.

In the second study, the cavity houses two qubits with about 10 mm of waveguide wire in between. An electromagnetic input pulse excites one of the qubits, putting it in the pure excited “1” state.
In the second study, the cavity houses two qubits with about 10 mm of waveguide wire in between. An electromagnetic input pulse excites one of the qubits, putting it in the pure excited "1" state. The "mirror" gaps are adjusted so that the photon emitted by this first qubit doesn't exit through the other side; it remains in the cavity as a resonant standing wave. Therefore, the photon bounces between the two transmon qubits. First, one transmon is excited, the other is relaxed, and no photon is present. Next, a photon exists in the cavity, and the two transmons are in an "entangled" (probabilistically dependent) state known as an EPR (Einstein-Podolsky-Rosen) pair, with the energy of one dependent on the energy of the other. Finally, the second transmon takes up the probabilistic packet of energy and is excited, while the resonant photon and first transmon relax, and the process then repeats (probably about twenty times, according to Girvin, although the exact number is not known). This two-bit quantum gate transformation is called an "iSWAP." Like the CNOT, it is universal (together with single-qubit rotations). The photon is "virtual": it carries more energy than would be permitted by energy conservation, but vanishes after a several-nanosecond lifetime, so that energy is conserved in the long run. Girvin describes the role of this virtual photon in transmitting the energy swap between qubits: "The first qubit makes the photon, then realizes, 'Oh, I can't conserve energy, I have to stop,' and the energy either has to go back or jump onto the other qubit. The photon winks into existence and then disappears before you could tell that it violated energy conservation."

[Figure: Schematic of a quantum cavity bus chip.]

The road to a quantum computer

The photon transfer mechanism implemented in the Yale researchers' quantum bus is an early but important piece of progress toward the vision of a large-scale quantum computer. Schoelkopf described the studies as the first step in making the principles of quantum computing useful. Photons can travel on microwave lines for up to ten kilometers, and more qubits could conceivably be coupled to the bus's cavity, opening the potential for long-distance information transfer between many bits. The professors stress that their inventions are works in progress. For now, the bus can only swap energy between qubits a few times, and the number of swaps isn't precisely known or controllable. Given our current progress, the ultimate goal of a quantum replacement for a desktop seems quite distant. Considering the tremendous difficulty in controlling one- or two-qubit systems and the formidable intricacy of the theoretical analysis required, the hope of performing computations over thousands or millions of integrated qubits might seem vain. In any case, a large body of research toward building a quantum computer has emerged since theoretical work in the '80s demonstrated its remarkable computational potential. At the very least, many physicists hope and believe that quantum computation will prove useful. The creation of a full-fledged quantum computer, a delicately controlled bulk system, would represent a tremendous technological feat. The development of a means to transfer quantum information on photons has brought us one step closer to this vision.

About the Author: Ben Deen is a senior in Trumbull College double-majoring in physics and cognitive science.

Further Reading
• Houck et al. (2007). Generating single microwave photons in a circuit. Nature 449, pp. 328-331.
• Majer et al. (2007). Coupling superconducting qubits via a cavity bus. Nature 449, pp. 443-447.

By Ben Deen, February 26, 2009
The Development of atomic theory – presentation transcript:

1 The Development of atomic theory. Chemistry Rules!

2 The Philosophical Era (circa 500–300 BCE): A time when logic ruled the land… This is a good era to do before Chapter 4 officially begins.

3 Philosophical Era (Ancient Greece): Two ancient Greeks stand out in the advancement of chemistry. Their ideas were purely based on logic, without experimental support (as was common in that time).

4 Philosophical Era: Democritus (c. 460–370 BCE). The most well-known proponent of the idea that matter was made of small, indivisible particles. Called the small particles "atomos," meaning "that which cannot be divided." Believed properties of matter came from the properties of the "atomos."

5 Philosophical Era: Aristotle (384–322 BCE). Famous philosopher of the ancient Greeks. Believed matter was comprised of four elements: Earth, Air, Fire, Water. These elements had a total of four properties: Dry, Moist, Hot, Cold. People liked him – so this idea stayed.

6 Alchemical Era (300 BCE ~ 1400 CE): The "Dark Ages" of chemistry, when early chemists had to work in secret and encode their findings for fear of persecution. This is another good era to do before Chapter 4 officially begins.

7 Alchemical Era: Alchemy. The closest thing to the study of chemistry for nearly two thousand years, based on the Aristotelian idea of the four elements of matter. If you could change the properties, then you could change the elements themselves – lead to gold, and immortality. A very mystical study and experimentation with the elements and what was perceived as magic. The study was persecuted, and findings were hidden in code.

8 Alchemical Era: Procedures of Alchemy. Alchemy brought about many lab procedures. We use some of the same methods, and the names, developed in these dark ages of chemistry.

9 Alchemical Era: Elements in Alchemy. Alchemists studied many different materials, and their properties, in order to find a way to turn lead into gold and achieve immortality.

10 Alchemical Era: Alchemical symbols for various materials. Alchemy had to be discussed in secret so that its students could avoid persecution.

11 Alchemical Era: Alchemists' Persecution. Alchemy was tied to witchcraft and druids; it was perceived as heresy by the Catholic church. Practitioners had to hide their trade or hobby. Information was passed in code: coded messages were sent between friends, and symbols were used to avoid readable words. The growth of chemistry was stunted by the oppression endured during this era. (No such problems in the Far East – hence gunpowder.) Ending the alchemy era with the flame test lab is a good experience, and a preview of the spectroscopy to come.

12 The Classical Era (1400 CE – 1887 CE): The printing press heralds the widespread transfer and acquisition of knowledge. This is a good section to do with Chapter 4, sections 1 & 2 (students read those simultaneously). The printing press was invented in Germany, and this led to the widespread transfer of knowledge in Europe; other regions were more geographically restricted from this technological advancement.
13 Classical Era: Foundations. Robert Boyle departs from Aristotle (1661): suggested in The Sceptical Chymist that a substance was not an element if it was made of more than one component. Antoine Lavoisier (1743–1794): accepted Boyle's idea of elements; developed the concept of compounds; determined the Law of Conservation of Mass (law: there is no change in mass due to chemical reactions); discovered oxygen; recognized hydrogen as an element.

14 Classical Era: Foundations (continued). Joseph Proust (1790s) determined the Law of Definite Proportions: elements combine in definite mass ratios to form compounds. [Portraits: Robert Boyle, Irish; Antoine Lavoisier (and wife), French; Joseph Proust, French.] This slide is a good opportunity to comment on the ethnicity of scientists and how even Lavoisier's wife was highly involved with chemistry. Note: these are western Europeans – the printing press was invented in western Europe.

15 Classical Era: John Dalton [really famous] (1766–1844). Dalton returns to Democritus' ideas in 1803 with four postulates: (1) All matter is made up of tiny particles called atoms. (2) All atoms of a given element are identical to one another and different from atoms of other elements. (3) Atoms of two or more different elements combine to form compounds; a particular compound is always made up of the same kinds of atoms and the same number of each kind of atom. (4) A chemical reaction involves the rearrangement, separation, or combination of atoms; atoms are never created or destroyed during a chemical reaction. John Dalton, English (originally poor and self-educated).

16 Classical Era: Defense of Atoms (after Dalton). Joseph Gay-Lussac (1778–1850): 2 L hydrogen (g) + 1 L oxygen (g) → 2 L water vapor (g); experimental findings disagreed with some of Dalton's beliefs. Amedeo Avogadro (1776–1856): suggested hydrogen and oxygen are diatomic molecules, which solved the riddle of Gay-Lussac's experimental results. Gay-Lussac had the only experiment that seemed to be contrary to Dalton's ideas. This was unsettling for Dalton, and many people began to seek a way to resolve the issue. Avogadro was the one to suggest a functional response, but living beyond the Swiss Alps, he was at a disadvantage in defending his ideas in the majorly English/French chemistry forum.
[Portraits: Joseph Gay-Lussac, French; Amedeo Avogadro, Italian lawyer.]

17 Classical Era: Dalton's Disbelief. Dalton refused Avogadro's diatomic molecules: he wrongly believed that similar types of atoms would repel, like the poles of a magnet – hence no diatomic molecules. Due to Dalton's reputation in chemistry, his ideas were believed over Avogadro's. Sustaining Dalton's (wrong) theory that mass corresponded to the amount of atoms led to confusion. Avogadro's ideas lived on in Italy (south of the Alps).

18 Classical Era: Avogadro's Number. In 1860 a council of chemists met to solve the problems they had standardizing atomic masses. This was only a problem because they had kept Dalton's idea instead of Avogadro's. An Italian chemistry teacher, Cannizzaro, presented: his teaching pamphlet used simple math based on a corollary of Avogadro's theory – Avogadro's number. Avogadro's number grouped atoms into moles: 6.022×10²³ parts = 1 mole (6.022×10²³ parts/mole).

19 Classical Era: Mendeleev's Table (1869). Once a standard for atomic masses was made, people started to see trends. These trends showed that properties gradually changed with atomic mass, but seemed to cycle periodically. Dmitri Mendeleev was a Russian teacher. He arranged the elements in a table so that his students could learn more easily: he listed atoms by atomic mass, started new columns whenever the properties cycled, and left empty spots – he predicted undiscovered elements. Dmitri Mendeleev, Russian teacher.

20 Classical Era: Mendeleev's table quickly became famous. [Figures: the B/W version on the left is one of Mendeleev's original Russian manuscripts; the image on the right is the same information translated into an English textbook – only a few years later.]

21 Classical Era: **Don't Forget Newton!!! (1643–1727). Isaac Newton was very important to science. He is most remembered for his contributions to physics, including gravity and much work in optics (light). He was the first person to divide white light into its parts, and splitting light into parts led to many interesting discoveries. Use spectroscopes of some kind to re-evaluate the flame test labs for their emission spectra; it will likely be a good idea to link this activity to the flame test, but instead use the spectral emission tubes.

22 The Subatomic Era (1897 CE – 1932 CE): The relatively quick discovery of things smaller than the once "indivisible" atom. This is a good era to do with Chapter 4, section 3.

23 Subatomic Era: It's Electric! Electricity was studied throughout the classical era: Ben Franklin's kite in a thunderstorm (1752) showed electricity could flow through gasses (the atmosphere).

24 Subatomic Era: Cathode Ray Tubes. Glass chambers used to study electricity in gasses. Crookes observed glowing rays emitted from the cathode. Glowing rays were observed in all gasses, and even in gasless set-ups.
25 Subatomic Era: J.J. Thomson, English (1897). Subjected cathode rays to magnetic fields. Using three different arrangements of CRTs he was able to determine that the cathode rays were streams of negatively charged particles, and that those particles had very low mass-to-charge ratios: the observed mass-to-charge ratio was over one thousand times smaller than that of hydrogen ions. The CRT particles had to be much lighter than hydrogen and/or very highly charged. Charge-to-mass ratio of the electron: 1.759×10¹¹ C/kg; charge-to-mass ratio of the proton (H⁺): 9.578×10⁷ C/kg. The schematic depiction of the CRT given here is one of only three types of CRTs that Thomson experimented with; he needed all three types to collect the data behind the information he presented. The particular schematic shown here is also a rudimentary schematic for any CRT television – an interesting talking point for students, who may have some experience with the latter.

26 Subatomic Era: Robert Millikan, American (1909). Thomson needed to know either the mass or the charge of his negative particles to describe them. Millikan's oil drop experiment let him find that the charge on objects is always some multiple of 1.60×10⁻¹⁹ C; he proposed this as the basic increment of charge. Applying this charge to Thomson's particles, he found the mass to be much less than that of any atom. This is a good time to read the excerpt from the Caltech commencement speech about refining Millikan's results: it greatly highlights the idea of scientific bias, how this affects "real" scientists, and what students need to be leery of in their own classroom experiments (and other life scenarios). Find an atomizer – and build this set-up. Learn how to either do it for real as a demonstration, or make it with an illusion good enough that the students can't tell it's fake.

27 Subatomic Era: Plum Pudding Model (1904). With the combined work of Thomson and Millikan the first subatomic particle was established! Electrons – one part of an atom, with one negative fundamental increment of electrical charge. Since whole atoms were known to be electrically neutral, Thomson developed the plum pudding model of the atom: a positively (+) charged majority with negatively (−) charged electrons.

28 Subatomic Era: Ernest Rutherford, New Zealander (1910). Rutherford worked with radiation and had heard of Thomson's plum pudding model. He wanted to use radiation to prove Thomson's model. He set up an alpha-particle gun (with help from Marie Curie) to shoot at an ultra-thin piece of gold foil, with a Geiger counter on the other side. This is another good break to comment on the diversity of people in science. Marie Curie was a BIG DEAL: she had two Nobel prizes to be proud of.
[Portraits: Ernest Rutherford, New Zealand; Marie Curie, Polish/French.]

29 Subatomic Era: Rutherford's Results. Rutherford's results were not what he expected. He expected all alpha particles to go straight through all of the atoms, but saw that occasionally an alpha particle would ricochet. He determined that the positive charge of an atom must be held in a massive, centrally located "nucleus."

30 Subatomic Era: The Second Subatomic. After more realizations and experiments the second subatomic particle was formally named (1911). Through more nuclear physics Rutherford determined all atomic nuclei were made up of hydrogen nuclei; hydrogen nuclei were deemed protons. Antonius van den Broek suggested elements on the periodic table are in order of their increasing number of protons, not Mendeleev's atomic masses. Proton: the massive subatomic particle, within the nucleus of an atom, with a single positive charge.

31 Subatomic Era: The Planetary Model (1911). Ernest Rutherford took his idea of a nucleus, and the known electrons, to construct a new atomic model. There is a compact nucleus; the nucleus, made of nucleons, is the location of positive charge in the atom; the charge of the nucleus might be proportional to its mass; and the orbit of the electrons keeps them from falling directly into the nucleus, just like planetary motion. The Rutherford Model, or the Planetary Model. The image shows a distinction between two types of particles in the nucleus; Rutherford's model technically would not have had this – he possibly did not even know about neutrons. In fact, Rutherford's model was only vaguely described even in the article he used to propose it – he was very leery of committing to more than what he absolutely knew to be true about the atom. (He never even said "electron orbits"; that idea was just pieced together from commentary on Rutherford's model and what came after it.) You can raise questions as to why that may have been a good move…

32 Subatomic Era: The Third Subatomic (1932). Electrons and protons were identified as particles, but these alone could not fully describe atoms: the charge-to-mass ratio of atoms was off without another addition. James Chadwick studied an unnamed form of radiation; he found it to be electrically neutral and about the mass of a proton. Including these particles in the nucleus of the atom solved all discrepancies that were previously observed. James Chadwick, English.

33 Subatomic Era: Subatomic Review. Electrons: orbit the nucleus; very small mass, 9.109×10⁻³¹ kg; negatively charged, −1.602×10⁻¹⁹ C. Nucleons: all particles that make up the nucleus. Protons: reside in the nucleus; relatively large mass, 1.673×10⁻²⁷ kg; positively charged, +1.602×10⁻¹⁹ C. Neutrons: reside in the nucleus; relatively large mass, 1.675×10⁻²⁷ kg; no electric charge. (A quick numerical cross-check of these constants follows after slide 35 below.)

34 Subatomic Era: Atomic Variance. An atom's element is defined by the number of… protons. Any atom with a non-neutral charge is called an… ion. Ions exist because the atom has either more or fewer electrons than protons. There are several different forms of elements, called isotopes, that vary in their number of neutrons.

35 The Modern Era (1900 CE – Present): The Quark Era starts in 1964, but that advance can be regarded as outside the realm of chemistry – instead a part of nuclear physics. Comment on the scope of the course, and how chemistry is distinct from other "nearby" physical sciences.
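As a quick cross-check of the constants in the Subatomic Review (an illustrative sketch added here, not part of the original deck), dividing the elementary charge by the electron and proton masses reproduces the charge-to-mass ratios quoted on the Thomson slide:

```python
# Consistency check of the constants on the Subatomic Review slide:
# dividing the elementary charge by the particle masses should reproduce
# the charge-to-mass ratios quoted for Thomson's experiments.

e = 1.602e-19    # elementary charge, C
m_e = 9.109e-31  # electron mass, kg
m_p = 1.673e-27  # proton mass, kg

q_over_m_electron = e / m_e  # ~1.76e11 C/kg
q_over_m_proton = e / m_p    # ~9.58e7 C/kg

print(f"electron q/m: {q_over_m_electron:.3e} C/kg")
print(f"proton   q/m: {q_over_m_proton:.3e} C/kg")
print(f"ratio: {q_over_m_electron / q_over_m_proton:.0f}")  # ~1836, as on slide 25
```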
****Warning: Before this era there needs to be a presentation on the nature of light and EM radiation. Chapter 5 in your book! Read pages

36 Modern Era: It all begins… (1900). Scientists believed that we had answered all major questions, leaving only a few items to finish. Max Planck was commissioned to build a better light bulb; he wanted to answer questions about "black body radiation." He reluctantly used statistics to solve these questions (he was very conservative). December 14, 1900. Statistics was a "dirty word" in science at the time: it couldn't make concrete predictions or descriptive and absolute rules about the world, like calculus could. Max Planck, German, physicist.

37 Modern Era: Statistics in Science. Most science uses regular math (e.g. F = ma); this era starts to deviate from tradition… The second law of thermodynamics (Boltzmann): all systems move toward a less organized state. Planck knew about Boltzmann's ideas – but disapproved of deviation from tradition. Planck reluctantly adopted statistics to best explain experimental findings, although he didn't want to be progressive. Einstein interpreted Planck's use of statistics to start quantum theory. Highlight the supremacy of the second law of thermodynamics in chemistry; inform the students that it is one thing they will have to understand in chemistry. Take the time to comment on the interaction between scientists in this era – the social aspect of science is important.

38 Modern Era: Quantum Theory. Energy can only be transferred in small packets. Planck saw that the emission of light could not be explained by the classical physics of the day. Energy is transferred in whole-number multiples of hν: ΔE = nhν, where ΔE = energy transferred, n = integer multiple, ν = frequency of light, and h = Planck constant (4.136×10⁻¹⁵ eV·s). Contrast this type of math to statistics, and ensure the students know they will be held accountable for basic algebra skills in this class.

39 Modern Era: Photon – light packets. Light partially behaves like particles, which Einstein called photons. De Broglie said all matter can be described by similar wave packets; this blurred the line between particles and waves: λ = h/p. Highlight that this is the second time students have seen this slide.

40 Modern Era: λ = h/p …or (λ = h/mv). Wavelength = Planck's constant / momentum. Wavelength – a wave property. Planck's constant – a fundamental constant, 6.626×10⁻³⁴ m²·kg/s. Momentum – a mechanical property: momentum = mass × velocity (p = mv). Find the wavelength of lots of things! (See the short calculation sketch after slide 42 below.) Highlight that this is the second time students have seen this slide.

41 Modern Era: Explaining Data. The quantum theory suddenly meant energy could only be transferred in discrete amounts. We had observed emission spectra and knew the Rutherford model, but neither was fully explained. [Figures: emission spectrum of iron (Fe); emission spectrum of hydrogen (H).] Define discrete.

42 Modern Era: Bohr's Planetary Model of the Atom. Bohr integrated all known information into a new, mathematically based model of the atom. He kept electrons in orbits around the nucleus, but only allowed certain specific electron orbits for each atom. Electron transitions between energy levels (orbits) could only be jumps – nothing could be in between these energy levels (like steps on stairs). His model only worked well for hydrogen atoms. Niels Bohr, Danish physicist. Make the connection between orbits and energy levels. Be sure that students know that he drew in the lines that Rutherford was not willing to draw.
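For the "find the wavelength of lots of things" prompt on slide 40, here is a minimal sketch (added for illustration; the masses and speeds are example values, not from the deck):

```python
# De Broglie wavelength, lambda = h / (m * v), for a few objects.
# The sample masses and speeds below are illustrative values.

h = 6.626e-34  # Planck's constant, m^2 kg / s

objects = {
    "electron at 1% of light speed": (9.109e-31, 3.0e6),  # (mass kg, speed m/s)
    "baseball pitch":                (0.145, 40.0),
    "walking person":                (70.0, 1.4),
}

for name, (m, v) in objects.items():
    wavelength = h / (m * v)
    print(f"{name}: lambda = {wavelength:.3e} m")
```

The point of the exercise comes out immediately: the electron's wavelength is comparable to atomic dimensions, while everyday objects have wavelengths absurdly too small to ever observe.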
43 Modern Era: Discrete Electron Energy Levels. De Broglie said that electrons always act like waves. This supported the idea of discrete energy levels: only certain wavelengths will "fit" around the atom. Shake a jump rope with someone, slowly increasing speed; comment on how not all speeds will create a standing wave, and how this relates to discrete orbits or energy levels.

44 Modern Era: Bohr Energy Levels. Electrons can only travel in specific energy levels: E = −13.6 eV × Z²/n², where E is the actual energy of the given energy level, Z is the nuclear charge (number of protons), and n = 1, 2, 3, … labels the level. This linked the properties of atoms with the observations of emission spectra.

45 Modern Era: Bohr Energy Levels. Atoms are typically found in the "ground state": electrons want to exist in the lowest energy levels available. Atoms can be raised to an "excited state": electrons can be put into higher energy levels than usual, but energy has to be added to do so. Atoms sit in the lowest energy levels due to the 2nd law of thermodynamics.

46 Modern Era: Energy Level Transitions. Electron jump: quantum leap! Electrons can jump from any lower energy level to a higher energy level, and vice versa; the total energy of the atom changes. Light is absorbed to get to higher energy states; light is emitted when electrons jump to lower energy states.

47 Modern Era: Electron Transitions. Only specific wavelengths of light are absorbed and emitted by atoms – you have seen these before. Light emitted by atoms is the emission spectrum. ΔE = E_final − E_initial; E = hν, where h = Planck's constant (4.136×10⁻¹⁵ eV·s, or 6.626×10⁻³⁴ m²·kg/s).

48 Modern Era: Some Practice! Colors of light are identified by their frequency and/or wavelength. Find the frequency of light for transitions 1–3; find the wavelength of light for transition 3; what does 4 mean? [Figure: energy-level diagram with transitions labeled 1–4.] (A worked sketch of this kind of calculation follows after slide 51 below.)

49 Modern Era: The Fall of Bohr… Bohr had easily come up with the best model for the atom so far, and his impact is still felt today, but… Werner Heisenberg, a student of Bohr's, stated: it is impossible to know the absolutely exact position and momentum of anything at the same time: Δx·Δp ≥ ħ/2. Werner Heisenberg, Germany.

50 Modern Era: The New Quantum Model. In 1926 Erwin Schrödinger developed an equation that took care of all inconsistencies of Bohr's model. It completely treated electrons as waves (Ψ) and accounted for the uncertainty principle. This took the electron from existing in defined orbits to living in a "probability cloud": concentric probability clouds expand out from the nucleus. Probability cloud – the area where an electron is likely to be found. The equation on the slide is the one-dimensional Schrödinger equation for the behavior of quantum particles. Roughly: E = energy, Ψ = wavefunction, V = potential energy, ∇ (del) = the multivariable derivative (since it is squared, it is a second derivative), m = mass.

51 Modern Era: The Modern (current) Atom. We don't know any electron's exact location or momentum (Heisenberg uncertainty principle). We know electrons act like waves. Electrons are likely to exist in some areas around a nucleus, and not in other areas; we can find the probabilities of where electrons can be found. Erwin Schrödinger, Austria.
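For the "Some Practice!" slide, here is a short worked sketch of the requested kind of calculation, using the Bohr formula from slide 44 (the transitions chosen are illustrative, since the slide's diagram is not reproduced here):

```python
# Bohr-model practice: energy, frequency and wavelength of hydrogen transitions.
# E_n = -13.6 eV * Z^2 / n^2; photon energy E = h * nu, and lambda = c / nu.
# The transitions below are illustrative; the slide's own diagram is not shown.

h_eV = 4.136e-15  # Planck's constant, eV*s
c = 2.998e8       # speed of light, m/s
Z = 1             # hydrogen

def level_energy(n):
    """Bohr energy of level n, in eV."""
    return -13.6 * Z**2 / n**2

for n_initial, n_final in [(2, 1), (3, 1), (3, 2)]:
    dE = level_energy(n_initial) - level_energy(n_final)  # energy released, eV
    nu = abs(dE) / h_eV  # photon frequency, Hz
    lam = c / nu         # photon wavelength, m
    print(f"{n_initial} -> {n_final}: dE = {dE:+.2f} eV, "
          f"nu = {nu:.3e} Hz, lambda = {lam * 1e9:.1f} nm")
```

The 2 → 1 jump comes out at about 122 nm (ultraviolet) and 3 → 2 at about 656 nm (the red Balmer line), which matches the hydrogen emission spectrum shown on slide 41.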
52 Modern Era: What does it look like? Likely electron locations are now represented by probability clouds – a way to graph probability in three dimensions. Electron clouds; electron bubbles. The bubbles represent the same thing as the clouds; however, it is much easier to draw a bubble. When graphing this 3-D data, the bubble is constructed by choosing an arbitrary probability level (usually two standard deviations) and drawing the surface of the bubble at that level of equal probability.

53 Modern Era: Electron Orbitals. Bubbles are much easier to draw…
Syllabus for CHE 3920: Electronic Structure: Basic Theory, Modeling and Simulations
Spring 2010, Tuesday & Thursday 2:00–3:15 p.m., Room: BEH G28
Wissam A. Saidi
Email: alsaidi@pitt.edu
Office hours: MW 1:00 to 3:00 PM and by appointment

Course Objectives and Description: The field of electronic structure includes the study of the ground and excited states of electrons, which determine the properties of atoms, molecules, solids, and other condensed matter systems. In ab initio (first-principles) electronic structure methods, many properties can be calculated directly, with a high degree of accuracy, starting from the microscopic description of the system as defined by the Schrödinger equation, and with no experimental input. The range of applications of this field is vast and spans problems in different majors such as physics, chemistry, materials science, earth sciences, and biology. The purpose of this course is to give the students insight into the intellectual challenges in the theory of electrons in condensed matter and the wide scope of applications of electronic structure theory. The intention is to cover not only the theoretical principles of electronic structure theory but also to provide hands-on experience in modeling and simulations using an electronic structure package.

In the first part of the course, the emphasis will be on the theory, specifically on density functional theory (DFT), which in principle can provide the exact ground state of many-electron systems. DFT, with its favorable computational requirements, is the predominant method for systematic studies of various properties of condensed matter such as binding energy, crystal structure, magnetic properties, vibrations of the nuclei, ferroelectricity, optical properties, and many others. The shortcomings of this theory, and also theoretical methods that go beyond DFT such as the GW approximation and quantum Monte Carlo methods, will also be covered. In the second part of the course, the goal is to learn how to apply an ab initio DFT code in real applications – in practice, the modeling and simulation section will be "meshed" with the theory part. We will use ABINIT, which is one of the most successful GNU GPL free codes. The course is appropriate for beginning and advanced graduate students with interests in modeling and simulations in engineering, physics and chemistry. It is most useful for those who are specializing in solid state physics, materials science, or quantum chemistry, in both the theoretical and experimental tracks.

Background Expected: The background expected of students in this class is:
1. Familiarity with quantum mechanics and solving the Schrödinger equation for simple problems such as a harmonic oscillator and a particle in a box.
2. Familiarity with the description of periodic crystalline lattices, the reciprocal lattice, Brillouin zones, the Bloch theorem and other general properties of bands in crystals, as described, for example, in Kittel (chapters 1, 2 and 7) or Ashcroft & Mermin (chapters 4, 5 and 8).
In the second part of the course, where the focus is on running and learning a computer simulation program, it would help if the students are familiar with Linux or with running a computer code in other environments.

Text and References: The first part of the course will be drawn mostly from Richard Martin's book "Electronic Structure: Basic Theory and Practical Methods." Other useful books include:
1. Ab initio Molecular Dynamics by Marx and Hutter
2. Solid State Physics by Ashcroft and Mermin
3. Solid State Physics by Kittel
4. Condensed Matter Physics by Marder
5. Atomic and Electronic Structure of Solids by Kaxiras
6. Electronic Structure Calculations for Solids and Molecules: Theory and Computational Methods by Kohanoff
7. A Chemist's Guide to Density Functional Theory, 2nd Edition by Koch
8. Density Functional Theory: A Practical Introduction by D. S. Sholl and J. A. Steckel
9. A Guide to Monte Carlo Simulations in Statistical Physics

Counseling Services: The University Counseling Center's staff is dedicated to assisting students in their pursuit of personal and academic growth, to helping students gain a better understanding and appreciation of themselves, and to supporting students as they make important decisions about their lives. If you are in need of counseling services, please contact the University Counseling Center at 334 William Pitt Union, (412) 648-7930. Refer to www.counseling.pitt.edu for details.

Academic Integrity: See http://www.pitt.edu/~provost/ai1.html for the University Guidelines on Academic Integrity. You are encouraged to discuss the homework problems with your classmates; however, the final work you turn in must be your own. Copying someone else's work, in any way, is unacceptable. You should not borrow notes, homework, homework solutions, exams, or other materials from students who took this course in previous years. There is a distinction between discussing work and merely copying someone else's work. The idea here is that you should help each other to understand the problems and the concepts involved; you will learn more if you work on the assignments in groups and explain the methods to each other. On the other hand, if you simply copy what someone else has done then you are not increasing your understanding, and you are not being honest. You must put in your own effort on solving the problems. Exams, whether in class or take-home, must be strictly each student's individual effort. You must not discuss the exams with anyone but the instructor. Any violation of the Academic Integrity code will be prosecuted to the fullest extent of the code.

Exams and Homework: There will be a cumulative final exam at the end of the semester. There will also be homework sets, which will be oriented toward basic knowledge of the key points in the course and are important for all students to master. Some of the problem sets are computational and are based on running ABINIT.

Term Paper or Project: There will be a term paper or project that each student is expected to submit at the end of the semester. The topic can be chosen by mutual consent or from a provided list of topics. The term paper must fully describe some problem related to electronic structure, with appropriate references to the literature. Students can also choose to do a computational project using ABINIT or any other density-functional code. All students are encouraged to choose a project which is closely related to their current research. However, the term paper or project does not have to be original research, but it must be original work on the part of the student.

Topics to be covered:

Background Review (3 weeks)
1. Quantum Mechanics
• Schrödinger equation for a particle in a box (see the short sketch at the end of this syllabus)
• Schrödinger equation of the harmonic oscillator and the hydrogen atom
2. Solid State Physics
• Bravais lattice and basis
• Brillouin zone
• Bloch theorem
• Electron bands in solids
• Nearly free electron approximation
3. Bonding in solids
• Metallic, covalent, ionic bonding
4. Many-electron problem (electron gas)
• Hartree-Fock approximation
• Post-Hartree-Fock (MCSCF, CI, coupled cluster)

Density Functional Theory (+ simulations, 7–8 weeks)
1. Thomas-Fermi approximation
2. Hohenberg-Kohn theorems and density functional theory
3. Kohn-Sham formalism
4. Density functional approximations (LDA, GGA, mixed)
5. Solving the Kohn-Sham equations
6. DFT with planewaves
7. Pseudopotentials

Computer simulations:
1. Density functional codes (ABINIT, PWSCF, VASP, DACAPO, SIESTA, …)
2. Applications using the ABINIT code:
• Study of molecules (dissociation energy, bond length, angular frequency)
• Study of an insulator (lattice constant, band structure)
• Study of a metal (lattice constant)
• Phonon calculations
• Polarization and Berry phase

Advanced Topics (1–2 weeks):
1. Time-dependent density functional theory
2. GW approximation
3. Ab initio quantum Monte Carlo
4. Modern theory of polarization (Berry phase)
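To give a flavor of the first item in the Background Review (the particle in a box), here is a minimal illustrative sketch – not part of the official course materials; the box width is an arbitrary choice:

```python
# Energy levels of a particle in a 1-D infinite square well,
# E_n = n^2 * h^2 / (8 * m * L^2). Values below are illustrative.

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
L = 1.0e-9       # box width: 1 nm (arbitrary choice)
eV = 1.602e-19   # joules per electronvolt

for n in range(1, 5):
    E = n**2 * h**2 / (8 * m_e * L**2)  # energy of level n, joules
    print(f"n={n}: E = {E:.3e} J = {E / eV:.3f} eV")
```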
A differential equation which involves differentiating a function of more than one parameter. For example:

(∂²/∂t² − v² ∂²/∂x²) y(x,t) = 0

where t is time, x is position, v is the wave speed and y is displacement, is a partial differential equation (abbreviated PDE), which can be used to describe a vibrating guitar string.

A differential equation is a relation between a mathematical function and its derivatives. If the function depends on more than one variable, the differential equation is said to be partial. These are a lot harder to solve, analytically and numerically, than ordinary differential equations. The theory of PDEs has been developed mainly for the field of theoretical physics, where they play a very important part, especially in quantum mechanics. Below I list the most important PDEs in physics and how you would solve them. (An apology goes out to all Netscape users who may not be able to see the equations below.)

One of the most important PDEs in physics is Poisson's partial differential equation, which is what mechanical problems usually reduce to:

Δu = ∑_{j=1}^{n} ∂²u/∂x_j² = f    (1)

where u is the sought function, f is a known function, and the x_j are coordinates in space. In physics, n usually equals 3, for our 3-dimensional world. When f = 0 the equation is called Laplace's equation. The Newton potential gives solutions to this as:

u(x) = (1/4π) ∫∫∫ f(y)/|x−y| dy₁dy₂dy₃

In physical applications of this, you usually search for solutions within a domain Ω where you have a Dirichlet boundary condition u = φ on the boundary ∂Ω. There are many methods to find this solution analytically, and usually they are focused on finding upper and lower bounds for the function. This PDE can also be solved for other boundary conditions, such as the Neumann boundary condition. Equation (1) is an example of an elliptic differential equation. The term "elliptic" refers to the characteristic polynomial of the equation, which is a way to classify PDEs.

Another common PDE is the heat equation, or diffusion equation:

∂u/∂t − Δu = f    (2)

where u is a function of both space (x₁, x₂, x₃, …, x_n) and time (t). For n = 3 this equation describes the distribution of heat in a homogeneous material, or some other process of diffusion in the physical world. If we have the initial value of u, u(t=0) = u₀, the solutions for t > 0 can be derived from

u(t,x) = (4πt)^(−n/2) ∫ e^(−|x−y|²/(4t)) u₀(y) dy

For solutions within a domain Ω you can also apply boundary conditions, which makes the PDE solvable. Equation (2) is a hypoelliptic differential equation.

In quantum mechanics, the Schrödinger equation is an important PDE, and it is fairly similar to the heat equation. The difference is that it has an imaginary factor:

i ∂u/∂t − Δu + Vu = 0    (3)

This equation is not easily solved, but great progress has been made during the last decades. In physics, where V is the same potential function as in the N-body problem, and u = u(x,y,z), this is the equation for the quantum mechanical, non-relativistic N-body problem. For each φ with ∫|φ(x)|² dx < ∞ there is a solution u to (3) which is equal to φ for t = 0, and ∫|u(t,x)|² dx is independent of t.

The wave equation, which describes the movement of light and electromagnetic waves, is also a PDE:

∂²u/∂t² − Δu = f    (4)

Maxwell's equations reduce to the above. For u = u(x,y,z) a solution is given by the retarded potential

u(t,x) = (1/4π) ∫∫∫ f(t−|x−y|, y)/|x−y| dy₁dy₂dy₃

The wave equation (4) is an example of a hyperbolic differential equation.
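To see the hyperbolic equation (4) in action numerically, here is a minimal finite-difference sketch for the vibrating-string form from the top of this writeup (the grid size, time step, and initial plucked shape are arbitrary choices):

```python
import numpy as np

# Leapfrog finite-difference scheme for the 1-D wave equation
# d^2y/dt^2 = v^2 d^2y/dx^2 (the guitar-string PDE above), with the
# string pinned at both ends. Grid and initial pluck are arbitrary choices.

N = 101                    # grid points
v = 1.0                    # wave speed
dx = 1.0 / (N - 1)
dt = 0.5 * dx / v          # satisfies the CFL stability condition
C2 = (v * dt / dx) ** 2

x = np.linspace(0.0, 1.0, N)
y = np.sin(np.pi * x)      # initial "pluck" shape
y_prev = y.copy()          # zero initial velocity (first-order start)

for step in range(200):
    y_next = np.zeros_like(y)  # endpoints stay 0: string pinned at both ends
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + C2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next

print(f"max displacement after 200 steps: {np.abs(y).max():.3f}")
```

With this initial shape the exact solution is y(x,t) = sin(πx)·cos(πvt), so the string should return to (minus) its original shape at t = 1, which the printed amplitude confirms to within discretization error.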
Cauchy's problem, to find a solution with u = φ and ∂u/∂t = Ψ at t = 0, can easily be solved from this last equation. Other problems lead to more complex calculations. For instance, the Dirichlet problem leads to a complicated study of singularities, which is related to the study of refraction and diffraction in optics.

All of the above are examples of linear partial differential equations. However, many of the fundamental equations in physics are non-linear, such as the Navier-Stokes equations in fluid dynamics. These are a lot harder to solve analytically and are usually the subject of numerical methods. Usually one has to study the linear solution to the non-linear problem, and then, based upon this, adjust the solution in the neighbourhood you are interested in.

Reference: ne.se
Monday, 22 February 2016

Macro and Credit - The Monkey and banana problem

"There are three growth industries in Japan: funerals, insolvency and securitization." - Dominic Jones, Asset Finance International, Nov. 1998

Looking at the evolution of markets and the convolutions into which central bankers have fallen, we were reminded, for our chosen title analogy, of the "Monkey and banana problem", a famous toy problem in artificial intelligence, particularly in logic programming and planning. The problem goes as follows: a monkey is in a room where a bunch of bananas hangs from the ceiling, out of reach; a box sits elsewhere in the room, and the monkey must work out that it can move the box under the bananas and climb on it to reach them. While there are many applications of this problem, one is as a toy problem for computer science; the other, we think, is a "credit impulse" problem for central bankers. Given that financial conditions are tightening globally, as shown in the US by the latest publications of the Fed Senior Loan Officer Survey and by the bloodbath in European bank shares, the issue at hand is how on earth our central bankers, or "monkeys" (yes, it's after all the Chinese year of the fire monkey...), are going to avoid the contraction in loan growth. As bluntly put by our friend Cyril Castelli from Rcube, the European credit channel is at risk, as banks' share of credit transmission is much higher in the EU than in the US, which of course is bound to create a negative feedback loop and could therefore stall much-needed growth:

- source Rcube - @CyrilRcube

Another possible tongue-in-cheek purpose of our analogy and problem is to raise the question: are central bankers intelligent? Of course they are, and most of them have to deal with a complete lack of political support (or leadership). It seems we have reached the limits of what monetary policies can do in many instances. Although both humans and monkeys have the ability to use mental maps to remember things like where to go to find shelter, or how to avoid danger, it seems to us that in recent years central bankers have lost the ability to avoid danger. While monkeys can also remember where to go to gather food and water, as well as how to communicate with each other, it seems to us, as of late, that central bankers are losing their ability to communicate, not only with each other but with markets as well, hence our chosen title. Could it be that monkeys have abilities superior to those of central bankers, given their ability not only to remember how to hunt and gather but to learn new things, as is the case with the monkey and the bananas? Despite the fact that the monkey may never have been in an identical situation, with the same artifacts at hand (printing press), a monkey is capable of concluding that it needs to make a ladder, position it below the bananas, and climb up to reach for them.

It seems to us that despite the glaring evidence that the "wealth effect" is not translating into strong positive effects in the "real economy", central bankers have decided to all embrace the Negative Interest Rate Policy, aka NIRP, as the new "banana". One would argue that, to some extent, central bankers have gone "bananas", but we ramble again...

In this week's conversation we will voice our concern relating to the heightened probability of a credit crunch in Europe thanks to banking woes and the unresolved Italian nonperforming loans (NPLs) issue. We will also look at the credit markets from a historical bear-market perspective and muse on the relief rally experienced so far.
• Macro and Credit - The risk of another credit crunch in Europe is real
• Why NIRP matters on the asset side of a bank balance sheet
• Credit spreads and FX movements - Why we are watching the Japanese yen
• Final chart: US corporate sector leverage approaching crisis peak

• Credit - The risk of another credit crunch in Europe is real

The fast deterioration in European bank stocks, in conjunction with the rising and justified concerns relating to Italian NPLs, constitutes a direct threat to the "credit impulse" needed to sustain growth in Europe, we think. As we have pointed out on numerous occasions, the ECB and the FED have taken different approaches in tackling their banking woes following the Great Financial Crisis (GFC). In various conversations we have been highlighting the growth differential between the US and Europe ("Shipping is a leading deflationary indicator"):

"We have long argued that the difference between the FED and the ECB would indeed lead to different growth outcomes between the US and Europe (US economy will grow 2.2% this year versus a 0.4% contraction in the euro area, according to the median economist estimates compiled by Bloomberg)."

Exactly. The issue with Italian NPLs is that the tepid Italian growth of the last few years is in no way alleviating the bloated balance sheets of Italian banks, which would help sustain credit growth for consumption purposes in Italy, as evidently illustrated in a recent Société Générale chart we used in our conversation the "Vasa ship". And no earnings thanks to NIRP means, now, no reduction in Italian NPLs, which according to Euromoney's article entitled "Italy's bad bad bank" from February 2016 have now been bundled up into a new variety of CDOs. - source Macronomics, February 2016

- source Société Générale.

So, all in all, the ECB is going to have to find a way to shift these impaired assets onto its balance sheet if it wants to swiftly and clearly deal with the worsening Italian situation. While some pundits would point out that the new "bail-in" resolutions in place since the 1st of January are sufficient to deal with such an issue, we do not share their optimism. This is a potential "political" problem of the first order, should the ECB decide to deal with this sizable problem à la Cyprus. Caveat emptor. You could expect once more politicians and the ECB to somewhat twist the rule book in order to facilitate this "securitization" process and an ECB take-up of part of the capital structure (senior tranches probably) of these new NPL CDOs. A new LTRO at this point might once again alleviate funding issues for some, but would in no way alter the debilitating course of the credit profile of the Italian banks. On a side note, we joked in our last conversation about these new NPL CDOs being the new "Big Short".

But when it comes to the "credit impulse" and its potential "impairment" in Europe, thanks to bloated bank balance sheets and bleeding equities, we read with interest Deutsche Bank's take in their Focus Europe note from the 19th of February entitled "Moving down a gear":

"The balance between the growth drivers and detractors is being tipped towards the negative as the questioning of confidence in European banks threatens to result in a less beneficial credit impulse. In last week's Focus Europe we presented a scenario analysis to demonstrate the sensitivity of euro area GDP growth to the provision of bank credit. We did this via the credit impulse relationship.
Our earlier assumption of 1.6% real GDP growth in 2016 was consistent with 2% credit growth. If, on the other hand, banks issue no net new credit this year, domestic demand would fall, confidence deteriorate and financial markets tighten. With no reaction from the ECB, 2016 GDP growth would fall to about 0.5%.

With the recent fall in bank equities and rise in bank debt costs, combined with increasing economic risks, the balance of probabilities suggests that lending standards will tighten relative to what we expected previously. Therefore, to some degree the provision of bank credit, and hence economic growth, will suffer. The revision we are announcing is an attempt to capture this effect. There are considerable uncertainties as to the scale of the problem, but we feel a modestly weaker lending impulse is now a more appropriate baseline.

Credit (-0.2pp). Our previous baseline forecast of 1.6% GDP growth was consistent with an acceleration in bank credit growth from broadly zero in 2015 to about 2% this year. The improvement in credit conditions in the last Bank Lending Survey implied a modest upside risk relative to forecasts. The last Bank Lending Survey was conducted in December and published in January. There were no indications at that point of concern about capital, liquidity or risk. However, as we said above, the balance of probabilities implies that lending standards will now tighten. We are conservatively allowing for a scenario in which the contribution to GDP from bank credit is now 0.2pp weaker than our previous baseline.

The ECB can help minimize the damage… The onus is on the ECB to achieve two things at the next meeting on 10 March. First, to set an appropriately accommodative policy stance given the worsening outlook for both growth and inflation. Note, since December, our headline and core HICP inflation forecasts for 2016 have fallen from 0.9% and 1.3% to 0.2% and 1.1% respectively (see page 2 for updated country inflation forecasts). Second, to set a policy stance that does not compound the pressures on a banking system that may be perceived as being more vulnerable. We presented a detailed discussion of the ECB's options in last week's Focus Europe (pages 8-10). Suffice to say, the choice of policies will be affected by conditions in the banking system. Our expectation prior to this episode of banking stress in recent weeks was a 10bp deposit rate cut and a temporary acceleration in the pace of purchases. The bank stress implies that a further deposit rate cut may be unwise without a system of exemptions. A refi cut, for example targeted at TLTROs, would be more effective. The bank stress also implies the ECB should offer some kind of supplementary liquidity tender. Excess liquidity is running at about EUR700bn, but if there was any sense of fragmentation re-emerging between strong and weak banks, it would be in the ECB's interest to remove all doubt about bank access to liquidity. Finally, the justification for more QE has increased given the widening of credit and sovereign spreads. More QE would reduce the risk of a negative feedback loop between banks and sovereigns. The ECB is conducting a technical review of the asset purchase programme (APP). This might result in some changes. In terms of broadening the eligible asset base, we suspect the ECB will remain in the sovereign/quasi-sovereign space for now. Corporate bonds are possible but not very impactful.
Purchasing unsecured bank debt might not be inconsistent with the Treaty but would be complex and politically controversial. Stresses would have to increase markedly to bring this option onto the table.

There is no relaxation of regulation, however. Both Mario Draghi, President of the ECB, and Danièle Nouy, head of the ECB Single Supervisory Mechanism (SSM), were consistent in their messages this week that (a) the new regulatory regime is resulting in a more stable and sustainable banking system - there is no sense that regulation is a net cost - and (b) "all else unchanged" there will be no significant additional capital requirements imposed on banks. Benoît Cœuré, ECB Executive Board Member, said that if bank profits are under pressure the onus would be on governments to implement structural reforms and growth-friendly fiscal policies. In short, no change in the ECB message.

…but negative feedback loops cannot be ruled out either. Last week we showed how sensitive the economic cycle can be to the bank credit cycle. This negative dynamic can become self-reinforcing. One direction is via the private sector and another is through the public sector. A tightening of lending standards weakens demand, undermining growth and asset quality, triggering a second-order tightening in credit. At the same time, weaker demand undermines sovereign sustainability, which can tighten bank funding costs and additionally contribute to second-order tightening.

Fiscal dynamics have deteriorated. The primary balance gap is the difference between the debt-stabilising primary balance and the primary balance. It captures the underlying dynamic of the public debt-to-GDP ratio. A negative gap means the public debt ratio is falling. Our previous forecast was for a primary balance gap of -0.6% of GDP in 2016, the first genuine decline in the public debt ratio since the start of the crisis. Following the growth and inflation revisions, the primary balance gap is expected to be positive again. In other words, the benefits of lower funding rates thanks to ECB QE are not enough to compensate for the loss of economic momentum. Moreover, if the scenario of zero net new bank credit were to materialize, 2016 could see the primary balance gap rise back to levels not seen since 2012. That would imply a euro area public debt-to-GDP ratio of about 98%." - source Deutsche Bank

Whereas indeed we are waiting for Le Chiffre, aka Mario Draghi, to come up with new tricks in March to alleviate the renewed pressure on European banks, where we disagree with Deutsche Bank is that, while a new round of TLTROs would provide much-needed support for funding in a situation where some banking players are seeing their cost of capital rise thanks to a flattening of their credit curve (in particular Deutsche Bank, as per our previous conversation), this intervention would in no way remove the growing stock of troubled, impaired assets from the likes of Italian banks. It is one thing to deal with the flow (funding) and another entirely to deal with the stocks (impaired assets). Whereas securitization of the lot seems to be the latest avenue taken, you need to find a buyer for the various tranches of these new NPL CDOs. Also, more QE will not deal with the stocks of impaired assets unless these assets are purchased directly by the ECB.
When it comes to credit conditions in Europe, not only do we closely monitor the ECB lending surveys, we also monitor on a monthly basis the surveys of the "Association Française des Trésoriers d'Entreprise" (French Corporate Treasurers Association, AFTE). In their latest survey, while it is difficult for now to assess a clear trend of deterioration in financial conditions for French corporate treasurers, it appears to us that NIRP has already been impacting the margin paid on credit facilities, given that a small minority of French corporate treasurers indicate that, since December, the margin paid on the aforementioned credit facilities has been trending up:

"Does the margin paid on your credit facilities tend to rise, fall, or remain stable?" - source AFTE

Going forward we will closely monitor these additional signals coming from French corporate treasurers to measure the impact on overall financial conditions, as well as the impact NIRP has on the margins they are being charged on their credit facilities. For now, conditions for French corporate treasurers do not warrant caution at the microeconomic level.

While we have recently indicated our medium- to long-term discomfort with the current state of affairs, akin to 2007 in terms of credit markets, the recent "relief rally" witnessed so far is for us a manifestation of the "overshoot" we discussed last week. Some welcome stabilization was warranted, yet we do feel that the credit cycle has turned and that you should be selling into strength, moving towards more defensive positions - higher up the rating-quality spectrum, that is - and raising your cash levels. When it comes to enticing banks to "lend" more, as far as our analogy is concerned, we wonder how the "monkey" bankers are going to react if indeed additional NIRP is going to remove more "bananas". This brings us to our second point, namely that the flatter the yield curve, the less effective NIRP is.

Whereas Europe overall has been moving further into the NIRP phenomenon, with over $7 trillion worth of global government bonds now yielding "less" than zero percent, the FED in the US is apparently weighing joining the NIRP club in 2016, NIRP being vaunted as the new "banana" tool in the box to stimulate the "monkeys". The issue at hand, of course, is that NIRP does matter, particularly when it comes to the asset side of a bank balance sheet, as put forward by Deutsche Bank in their note from the 22nd of February entitled "Three things the market taught me this year":

"Negative rates – much more complicated. Negative rates look powerful at face value. Bank profits can be protected by exempting excess liquidity while market rates are pushed down. The turmoil in Japan points to three considerations that mute this view. First, the impact of negative rates on the asset side of banks' balance sheet can matter much more than the charge on excess liquidity. Banks that own large amounts of fixed income assets relative to the size of their total balance sheet (and their excess liquidity) are hit the hardest as returns on these assets drop. Japan and the US stand out as economies where the cost to the banks is biggest, Switzerland and Sweden the least, while Europe is somewhere in between.

Second, super-flat yield curves reduce the impact of negative rates. When bonds don't offer risk premia, a perfect Keynesian liquidity trap exists: fixed income is the same as cash, and negative rates instantaneously transmit to the entire yield curve.
The portfolio rebalancing into riskier assets declines as the marginal holders of zero-yielding bonds are naturally risk-averse. Japan's yield curve, the flattest in the world, failed to steepen when the BoJ cut rates earlier this month – all yields just shifted down. Sweden, the UK and Europe stand out as yield curves where there's still more risk premium to be squeezed, Japan, Canada and Norway the least.

Third, sub-zero rates can send a negative forward-looking signal. Until the technological and institutional framework is designed to pass negative rates to depositors without triggering banknote withdrawal, there will eventually be a (negative) lower bound. As this is approached, the signaling cost of easing being exhausted may be bigger than the benefit of lower rates. At the extreme, cash and bonds turn into "Giffen goods": the substitution effect of lower return is more than offset by expected lower future income. Lower rates then end up raising, rather than lowering, the demand for bonds as the saving rate goes up.

The limitations of additional BoJ easing, in addition to changing Japanese hedging behaviour, are some of the factors that have led us to revise our USD/JPY forecasts for this year. We now think 2015 marked the peak in USD/JPY for this cycle and forecast a move down to as low as 105 this year." - source Deutsche Bank
If anything, the valuation discrepancy between stocks and bonds is likely to get wider, said Simon Wiersma of ING Groep NV. "The gap between bond and dividend yields will continue expanding," said Wiersma, an investment manager in Amsterdam. "Investors fear economic growth figures. We're still looking for some confirmations for the economic growth outlook." Dividend estimates for sectors like energy and utilities may still be too high for 2016, Wiersma says. Electricite de France SA and Centrica Plc lowered their payouts last week, and Germany's RWE AG suspended its for the first time in at least half a century. Traders are betting on cuts at oil producer Repsol SA, which offers Spain's highest dividend yield.

With President Mario Draghi signaling in January that more European Central Bank stimulus may be on its way, traders have been flocking to the debt market. The average yield for securities on the Bloomberg Eurozone Sovereign Bond Index fell to about 0.6 percent, and more than $2.2 trillion -- or one-third of the bonds -- offer negative yields. Shorter-maturity debt for nations including Germany, France, Spain and Belgium has touched record, sub-zero levels this month." - source Bloomberg

In that instance, while the equity "banana" appears more enticing from a "yield" perspective, it seems that the "electric shock" inflicted on our investor "monkey" community has no doubt changed their "psyche". The sell-off this year has set the stage for an operant conditioning chamber (also known as the Skinner box): when the central bank monkey correctly performs the "central bank put" behavior, the chamber mechanism delivers positive investment returns to the community and a buying behavior. In some cases of the Skinner box investment experience, the mechanism delivers a punishment for incorrect or missing responses (central bankers). Due to the lack of appropriate response or an incorrect response (Bank of Japan with NIRP) from central bankers in 2016, the investor monkey community has been delivered a punishment in the form of a violent sell-off, leaving the investor monkey community less inclined to go again for the "equity banana" for fear of another "electric shock", hence the reach for bonds.

When it comes to our outlook and stance relating to the credit cycle, we would like to point again towards chapter 5 of Credit Crisis, authored by Dr Jochen Felsenheimer and Philip Gisdakis, where they highlight Hyman Minsky's work on the equity-debt cycle, particularly in the light of the Energy sector's upcoming bust:

"His cyclical theory of financial crises describes the fragility of financial markets as a function of the business cycle. In the aftermath of a recession, firms finance themselves in a very safe way. As the economy grows and expected profits rise, firms take on more speculative financing, anticipating profits and that loans can be repaid easily. Increased financing translates into rising investment, triggering further growth of the economy, making lenders confident that they will receive a decent return on their investments. In such a boom period, lenders tend to abstain from guarantees of success, i.e., reflected in fewer covenants or in rising investments in low-quality companies. Even if lenders know that the firms are not able to repay their debt, they believe these firms will refinance elsewhere as their expected profits rise. While this is still a positive scenario for equity markets, the economy has definitely taken on too much credit risk.
Consequently, the next stage of the cycle is characterized by rising defaults. This translates into tighter lending standards at banks. Here, the similarities to the subprime turmoil become obvious. Refinancing becomes impossible, especially for lower-rated companies, and more firms default. This is the beginning of a crisis in the real economy, while during the recession, firms start to turn to more conservative financing and the cycle closes again." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

The issue with NIRP and the relationship between credit spreads and safe-haven yields is that, while the traditional pattern of lower government bond yields and a flatter yield curve is accompanied by wider spreads in "risk-off" scenarios, given more and more pundits (such as hedge funds) have been playing the total return game, they have become less and less dependent on the traditional risk-return optimized approach, as they are less dependent on movements on the interest rate side. The consequence is that classical allocation theories become more and more challenged in a NIRP world, because correlation patterns change in a crisis period, particularly when correlations are becoming more and more positive (hence large standard deviation moves).

But if indeed the behavior of credit relative to safe-haven yields is affected by changes in correlations, then you might rightly ask yourself about the relationship between credit spreads and FX movements, given the focus as of late has been on the surge in the US dollar and the fall in oil prices, in conjunction with the rise in the cost of capital since mid-2014. In our next point we think that, from a credit perspective, once more you should focus your attention on the Japanese yen. Whereas everyone has been focusing on the importance of the strength of the US dollar in relation to corporate earnings (and, in similar fashion, the focus in Europe had previously been on the strength of the euro), we think, from a credit perspective, the focus should rather be on the Japanese yen going forward. Once again we take our cue from chapter 5 of Credit Crisis, authored by Dr Jochen Felsenheimer and Philip Gisdakis:

"Many credit hedge funds not only implement leveraged investment strategies but also leveraged funding strategies, primarily using the JPY as a cheap funding source. A weaker JPY accompanied by tighter spreads is the best of all worlds for a yen-funded credit hedge fund. However, these funds should be more linked to the JPY than the USD. One impact is obviously that the favorable growth outlook in Euroland triggers a strong EUR and tighter spreads of European companies (which benefit the most from the improving economic environment). However, the diverging fit between EUR spreads, the USD and the JPY, respectively, underpins the argument that technical factors as well as structural developments dominate fundamental trends, at least in certain periods of the cycle." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

However, NIRP doesn't reduce the cost of capital. NIRP is a currency play. This is clearly the case in Japan and has been well described in Deutsche Bank's note from the 22nd of February entitled "Yen hedging cycle risks rapid reverse":

"A lot of negative things have been said about negative rates, not least in Japan. Negative rates do not work by reducing financing costs materially, providing a 'price of money' stimulus.
If negative rates do support activity, they primarily work through the exchange rate, adding to the portfolio substitution into risky assets. For Japan the biggest problem is that macro policies have been directing capital toward risky assets for the last 4+ years. There are inevitably diminishing returns to this strategy, not least because 'value' matters. Value matters when it comes to the underlying domestic and foreign 'risky' asset, and the value of the exchange rate. Specifically on the latter, the yen, even after the recent appreciation, is still close to 20% cheap in PPP terms. It is also cheap on a FEER and BEER basis, helped by a terms of trade shock that is seen lifting the Current Account surplus to near 5% of GDP in 2016. Figure 2 shows that in the last year, Japan has had the most favorable terms of trade shock of any major economy.

While FDI can recycle up to half of the C/A surplus, the question is whether other BoP components, notably net portfolio flows, will do the rest of the recycling, and at what exchange rate. For much of 2013 – H1 2015, vehicles like the GPIF were used to recycle a (much smaller) C/A surplus, which even briefly went into deficit in 2014. By June 2015, the GPIF's portfolio of riskier assets, inclusive of domestic stocks (22.3%), international bonds (13.1%), and international stocks (22.3%), was well within the desired base/benchmark ranges (see Figure 3). For the latest data available for Q3 2015, GPIF domestic and international equity holdings declined, led by weaker equity prices and a stronger yen – price action that underscores the risky nature of these investments.

As the C/A surplus grows and the above 'structural' pension shift toward capital flows abroad diminishes, there is a danger that we have already entered the realm where yen strength becomes self-fulfilling, as many of the hedging activities that were associated with a weak yen in the first four years of Abenomics go into reverse. Prior to Abenomics, hedging on Japan equity flows was limited. Since 2011, when BOJ policy encouraged yen weakness, foreign inflows into Japan equities typically included much higher currency hedge ratios, while fully hedged instruments became popular. Precise numbers are not available, but it is estimated that as much as a quarter of the stock of foreign holdings of Japan equities of Y183tr has a currency hedge – a hedge that quickly becomes much less attractive with a stronger yen.

In contrast, for Japan investments abroad, FX hedge ratios declined. This particularly relates to USD investments, where expectations of USD gains increased rapidly in the Abenomics years, and FX hedges on USD investments dropped. At the end of Q3 2015, Japan had a total of Y770 trillion non-Central Bank assets abroad, inclusive of Y418tr portfolio assets, of which Y153tr are equity and investment fund shares, and Y266tr are debt securities. Even if much of the investment abroad has only limited hedges, one observation is that a very small adjustment in hedge ratios can have a huge flow impact. A shift in the hedge ratio on foreign fixed income assets by 10% is roughly equivalent to a year's C/A surplus. Secondly, to the extent that hedge ratios are very low, as is the case for, say, the GPIF, there are sizable potential losses for funds recently adding to foreign exposure, and an emerging disincentive to invest in the most risky assets abroad.
Of the large players that actively hedge FX exposure, the life insurance companies' activities can most closely be tracked through quarterly statements, and the time series can provide a useful standard to benchmark recent activity. Life insurance companies as of Q3 2015 had some Y65tr in foreign securities. As per Figure 4, their currency hedge ratios on dollar-based investments are estimated to have dropped to ~46% by the end of Q3 2015, the lowest levels recorded since the Great Financial Crisis in 2008. The hedge ratio is down from a peak of 79% in September 2009. Life insurance company hedge ratios have likely reached a cycle nadir at the end of 2015, as concerns about JPY appreciation start to rise. Among the other largest participants that have foreign portfolios of comparable size to the Lifers, both Toshins (foreign securities of ~Y77tr) and particularly public pension funds (~Y57tr foreign securities) have very low currency hedge ratios and are heavily exposed to currency risk.

Japan investments abroad, so actively encouraged by policymakers, are slowly being shown to have a familiar 'catch' – interest parity! Nominal yields may be more attractive abroad, but the long-term currency risks are enormous, at least when placed in the context of a yen that is still significantly undervalued.

A crucial element of hedging activity is the expected exchange rate. Here three bigger macro forces are at play for the remainder of 2016: i) Japan/BOJ policy; ii) the Fed; and iii) China FX policy. Firstly, on BOJ intervention, the market should not expect any official BOJ intervention barring extreme FX volatility. It would run counter to G20 rules and risk a serious rift with the US. On rates policy, adding significantly to NIRP looks increasingly unpalatable, with our Tokyo Economics team expecting only one more 10bp cut in Q3. The next set of actions will likely need to revolve around 'qualitative QE' and the buying of more risky assets, notably securitized products. On the Fed, we expect USD/JPY to remain sensitive to Fed expectations, but not to the point where more Fed tightening is likely to lead to new USD/JPY highs. The yen has a history of doing well in 5 of the last 7 Fed tightening cycles, although it did weaken in the two big USD upswings." - source Deutsche Bank

As we posited in our conversation "Information cascade" back in March 2015, you should very carefully look at what the GPIF and their friends are doing: "go with the flow". We also added more recently, in our conversation "The Ninth Wave", the following: from a "flow" perspective, and like any trained "monkey" looking to reach out for "bananas" (at least the slippery type), whereas other "monkeys" are focusing on the US dollar and oil-related woes, we'd rather for now focus our attention on the Japanese yen and the allocation implications of a stronger yen. For us, like others, a PBOC devaluation move on the yuan would send a deflationary impulse worldwide but, in terms of risk assets, it would have serious consequences for Japanese asset allocations and would lead to an acceleration in capital repatriation (this would mean liquidation of some existing positions, rest assured), as indicated in Deutsche Bank's note:

"Even modest JPY gains against the USD should translate to a strong yen against all the other G10 currencies and EMG Asia FX, not least because of global macro risks elsewhere.
Nothing is capable of lifting the yen trade-weighted index more than a speed-up in the Rmb's depreciation rate, leading to knock-on devaluations in EM Asia. This risk alone should encourage higher Japan hedge ratios for investment abroad, inclusive of the stock of Japan FDI assets abroad. A risk-off China shock would tend to concentrate JPY gains against currencies of the other G4, but initially would likely include additional yen strength against all currencies. It should also drive the Nikkei sharply lower. The Nikkei and yen have a long and sometimes tortured history of moving in lock-step. A stronger yen has hurt the Nikkei for obvious reasons, but a weaker Nikkei also tends to lead to a repatriation of capital and a stronger yen. Interestingly, the current Nikkei levels are already consistent with a USD/JPY below Y105." - source Deutsche Bank

When it comes to the year of the Fire Monkey, the slippery banana type could no doubt come from Japanese investors hurt by the violent appreciation of the Japanese yen, which has indeed been a significant "sucker punch" when it comes to the large standard deviation move experienced by the Japanese yen versus the US dollar. If Mrs Watanabe goes into "liquidation" mode, things could indeed become interesting, to say the least. When it comes to Minsky and the equity-credit cycle, whereas central banks can affect the amplitude and the duration of the cycle, in no way can they alter the character of the cycle. In our final chart, we once again indicate our 2007 feeling, given the rise in leverage and tightening financial conditions, with the issuance markets closing down on the weaker players, which bodes poorly from a risk-reward perspective.

• Final chart: US corporate sector leverage approaching crisis peak

Like many pundits, we have voiced our concerns on the increasing leverage driven by buybacks financed by debt issuance and the lack of use of proceeds for investment purposes. Our final chart comes from the same Deutsche Bank note from the 22nd of February entitled "Three things the market taught me this year" quoted previously and displays US corporate sector leverage approaching its crisis peak:

"US deleveraging – not that great
US consumer deleveraging stands out as one of the major achievements of the Yellen Fed. Yet the corporate picture looks much less impressive. The total amount of US corporate debt has approached the highs seen in the financial crisis (chart 3). Not only that, but the bulk of the leverage has been directed towards corporate stock buybacks (chart 4), explaining how low investment and high borrowing have existed at the same time. Persistent volatility in the US credit market has highlighted vulnerabilities that weren't a concern last year." - source Deutsche Bank

While a respite is always welcome when it comes to the rally seen recently, as far as the monkey and banana problem is concerned, with everyone hoping for additional tricks from our "Generous gamblers" aka our central bankers, this rally might have some more room ahead. But then again, it doesn't change our belief in the stage of the credit cycle and our focus on what the Japanese yen will be doing.

"Life is full of banana skins. You slip, you carry on." - Daphne Guinness, British artist

Stay tuned!

Sunday, 14 February 2016

Macro and Credit - The disappearance of MS München

"Hope, the best comfort of our imperfect condition."
- Edward Gibbon, English historian

While thinking about correlations in particular and risk in general, we reminded ourselves of one of our pet subjects touched upon in different musings, namely the fascinating destructive effect of "rogue waves". It is a subject we discussed in detail, particularly in our post "Spain surpasses 90's perfect storm":

"We already touched on the subject of "Rogue Waves" in our conversation "the Italian Peregrine soliton", being an analytical solution to the nonlinear Schrödinger equation (which was proposed by Howell Peregrine in 1983), and being as well "an attractive hypothesis" to explain the formation of those waves which have a high amplitude and may appear from nowhere and disappear without a trace. The latest surge in Spanish nonperforming loans to a record 10.51% and the unfortunate Hurricane Sandy have drawn us towards the analogy of the 1991 "Perfect Storm". Generally rogue waves require a longer time to form, as their growth rate follows a power law rather than an exponential one. They also need special conditions to be created, such as powerful hurricanes or, in the case of Spain, tremendous deflationary forces at play when it comes to the very significant surge in nonperforming loans." - source Macronomics, October 2012

You might already be asking yourselves why our title, and where we are going with all this? The MS München was a massive 261.4 m German LASH carrier of the Hapag-Lloyd line that sank with all hands for unknown reasons in a severe storm in December 1978. The most accepted theory is that one or more rogue waves hit the München and damaged her, so that she drifted for 33 hours with a list of 50 degrees without electricity or propulsion. The München departed the port of Bremerhaven on December 7, 1978, bound for Savannah, Georgia. This was her usual route, and she carried a cargo of steel products stored in 83 lighters and a crew of 28. She also carried a replacement nuclear reactor-vessel head for Combustion Engineering, Inc. This was her 62nd voyage, and took her across the North Atlantic, where a fierce storm had been raging since November. The München had been designed to cope with such conditions, and carried on with her voyage. The exceptional flotation capabilities of the LASH carriers meant that she was widely regarded as being practically unsinkable (like the Titanic...). That was of course until she encountered nonlinear phenomena such as solitons.

While a 12-meter wave in the usual "linear" model would have a breaking force of 6 metric tons per square metre (MT/m2), and modern ships are designed to tolerate a breaking wave of 15 MT/m2, a rogue wave can dwarf both of these figures with a breaking force of 100 MT/m2. Of course, for such a "freak" phenomenon to occur, you need no doubt special conditions, such as the conjunction of fast-rising CDS spreads (high winds), global tightening financial conditions and NIRP (pressure falling towards 940 mb), as well as rising nonperforming loans and defaults (swell). So if you think having a 99% interval of confidence in the calibration of your VaR model will protect you against multiple "Rogue Waves", think again...

Of course, the astute readers will have already fathomed between the lines that our reference to the giant ship MS München could be somewhat a veiled analogy to banking giant Deutsche Bank. It could well be...
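To put a number on that "think again", here is a minimal, illustrative Python sketch (our own back-of-the-envelope, not from any of the notes quoted in this post) of how rare large standard deviation moves should be if daily returns really were Gaussian, as the classic parametric VaR framework assumes. The sigma levels are arbitrary round numbers, and we assume 252 trading days per year:

```python
# Minimal, illustrative sketch (our own back-of-the-envelope, not from any
# note quoted here): how rare large standard deviation moves *should* be if
# daily returns really were Gaussian, as parametric VaR assumes. The sigma
# levels are arbitrary round numbers; we assume 252 trading days per year.
from scipy.stats import norm

for k in (3, 4, 5, 6):
    p = norm.sf(k)  # one-sided probability of a move beyond k sigma
    print(f"{k}-sigma daily move: ~1 day in {1/p:,.0f} "
          f"(~{1/p/252:,.0f} trading years)")
```

Under the Gaussian assumption, a 5-sigma day should show up roughly once every fourteen thousand years of trading; markets have a habit of delivering such "rogue waves" rather more often than that.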
But given our recent commentaries on the state of affairs in the credit space, we thought it would be the right time to reach again for a book collecting dust since 2008 entitled Credit Crisis, authored by Dr Jochen Felsenheimer (whom we have quoted on numerous occasions on this very blog, for good reasons) and Philip Gisdakis. Before we go into the nitty-gritty of our usual ramblings, it is important, we think, at this juncture to steer you towards chapter 5, entitled "The Anatomy of a Credit Crisis", and take a little detour worthy of our title analogy to the "rogue waves" which sealed the fate of MS München. What is of particular interest to us, in similar fashion to the demise of the MS München, is page 215, entitled "LTCM: The arbitrage saga", and the issue we have been discussing extensively, which is our great discomfort with rising positive correlations and large standard deviation moves. This amounts, for us, to rising instability and the potential for "rogue waves" to show up in earnest:

"LTCM's trading strategies generally showed no or almost very little correlation. In normal times, or even in crises that are limited to a specific segment, LTCM benefited from this high degree of diversification. Nevertheless, the general flight to liquidity in 1998 caused a jump in global risk premiums, hitting in the same direction. All (in normal times less-correlated) positions moved in the same direction. Finally, it is all about correlation! Rising correlations reduce the benefit from diversification, in the end hitting the fund's equity directly. This is similar to CDO investments (i.e., mezzanine pieces in CDOs), which also suffer from a high (default) correlation between the underlying assets. Consequently, a major lesson of the LTCM crisis was that the underlying covariance matrix used in Value-at-Risk (VaR) analysis is not static but changes over time." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

You might probably understand by now, from our recent sailing analogy (the Vasa ship) and wave analogy (The Ninth Wave), where we are heading: a financial crisis is more than brewing.

Moving back to the LTCM VaR reference, the Variance-Covariance Method assumes that returns are normally distributed. In other words, it requires that we estimate only two factors - an expected (or average) return and a standard deviation. Value-at-Risk (VaR) calculates the maximum loss expected (or worst-case scenario) on an investment, over a given time period and given a specified degree of confidence. So what is VaR really measuring these days? LTCM and the VaR issue remind us of a point we made in May 2015 in our conversation "Cushing's syndrome", which ties up nicely with our world of rising positive correlations: your VaR measure doesn't measure today your maximum loss, but could be only measuring your minimum loss on any given day.
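To make the point concrete, here is a minimal sketch of the Variance-Covariance calculation for a hypothetical two-asset portfolio. The weights, daily volatilities and $100m portfolio size are our own illustrative assumptions; the point is how the same 99% confidence VaR balloons as the correlation between the two assets moves towards 1, which is exactly the diversification benefit LTCM saw evaporate:

```python
# Minimal sketch of the Variance-Covariance (parametric) VaR calculation for
# a hypothetical two-asset portfolio. Weights, volatilities and the $100m
# portfolio size are illustrative assumptions, not calibrated to any fund.
import numpy as np
from scipy.stats import norm

value = 100e6                   # portfolio value (hypothetical)
w = np.array([0.5, 0.5])        # asset weights
vol = np.array([0.01, 0.02])    # daily volatilities of the two assets
z = norm.ppf(0.99)              # 99% one-day confidence multiplier (~2.33)

for rho in (0.0, 0.5, 0.9, 1.0):
    # covariance matrix for the assumed correlation rho
    cov = np.array([[vol[0]**2, rho * vol[0] * vol[1]],
                    [rho * vol[0] * vol[1], vol[1]**2]])
    port_vol = np.sqrt(w @ cov @ w)  # daily portfolio volatility
    print(f"rho={rho:.1f}: 99% 1-day VaR ~ ${z * port_vol * value:,.0f}")
```

With our hypothetical numbers, the 99% one-day VaR rises from roughly $2.6m at zero correlation to about $3.5m at a correlation of 1. And, as the authors note above, the covariance matrix feeding this calculation is itself not static, so yesterday's "maximum loss" can quietly become today's minimum.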
Check the recent large standard deviation moves, dear readers, such as the one on the Japanese yen, and ask yourself if we are any longer in the "normal market" conditions VaR assumes. Therefore, in this week's conversation we will look at what positive correlations entail for risk and diversification, and we will also look at the different causes of financial crises and the additional signs that we are seriously heading into one, like the MS München did back in 1978, like we did in 2008, and like we are most likely heading into in 2016, with plenty of menacing "rogue waves" on the horizon. So fasten your seat belt for this long conversation, this one is to be left for posterity.

• Credit - The different types of credit crises and where do we stand
• A couple of illustrations of on-going nonlinear "Rogue Waves" in the financial world of today
• The overshooting phenomenon
• The Yuan Hedge Fund attack through the lens of the Nash Equilibrium Concept

Rising positive correlations are rendering "balanced funds" unbalanced, and as a consequence models such as VaR are becoming threatened by this sudden rise in non-linearity, as they assume normal markets. The rise in correlations is a direct threat to diversification, particularly as we move towards a NIRP world:

"When it comes to a macro-driven market where "central banks' puts" are losing their "magic", correlations unfortunately are still moving higher, which, we think, is a sign of great instability brewing. The correlation between macro variables such as bund yields, FX and oil and equity market factors (Momentum, Value, Growth, Risk) is now higher than the correlation between macro variables and the market. There lies the crux of central banks' interventions. There are now deeper inter-linkages in the macro economy as well as in financial markets globally post crisis." - source Macronomics, January 2016

When it comes to the classification of credit crises and their potential areas of origin, the two authors of the book "Credit Crisis" shed some light on the subject:

• "Currency crisis: A speculative attack on the exchange rate of a currency which results in a sharp devaluation of the currency; or it forces monetary authorities to intervene in currency markets to defend the currency (e.g. by sharply hiking interest rates).
• Foreign Debt Crisis: a situation where a country is not able to service its foreign debt.
• Banking crisis: Actual or potential bank runs. Banks start to suspend the internal convertibility of their liabilities or the government has to bail out the banks.
• Systemic Financial crisis: Severe disruptions of the financial system, including a malfunctioning of financial markets, with large adverse effects on the real economy. It may involve a currency crisis and also a banking crisis, although this is not necessarily true the other way around.

In many cases, a crisis is characterized by more than one type, meaning we often see a combination of at least two crises. These involve strong declines in asset values, accompanied by defaults, in the non-financials but also in the financials universe. The effectiveness of government support or even bailout measures, combined with the robustness of the economy, are the most important determinants of the economy's vulnerability, and they therefore have a significant impact on the severity of the crisis. In addition, a crucial factor is obviously the amplitude of asset price inflation that preceded the crisis.
Depending on the type of crisis, there are different warning signals, such as significant current account imbalances (foreign debt crisis), inefficient currency pegs (currency crisis), excessive lending behavior (banking crisis), and a combination of excessive risk taking and asset price inflation (systemic financial crisis). A financial crisis is costly, as there are fiscal costs to restructure the financial system. There is also a tremendous loss from asset devaluation, and there can be a misallocation of resources, which in the end depresses growth. A banking crisis is considered to be very costly compared with, for example, a currency crisis. We classify a credit crisis as something between a banking crisis and a systemic financial crisis. A credit crisis affects the banking system or arises in the financial system; the huge importance of credit risk for the functioning of the financial system as a whole bears also a systemic component. The trigger event is often an exogenous shock, while the pre-credit-crisis situation is characterized by excessive lending, excessive leverage, excessive risk taking, and lax lending standards. Such crises emerge in periods of very high expectations on economic development, which in turn boosts loan demand and leverage in the system. When an exogenous shock hits the market, it triggers an immediate repricing of the whole spectrum of credit-risky assets, increasing the funding costs of borrowers while causing an immense drop in the asset value of credit portfolios.

A so-called credit crunch scenario is the ugliest outcome of a credit crisis. It is characterized by a sharp reduction of lending activities by the banking sector. A credit crunch has a severe impact on the real economy, as the basic transmission mechanism of liquidity (from central banks over the banking sector to non-financial corporations) is distorted by the fact that banks engineer a liquidity squeeze, finally resulting in rising default rates. A credit crunch is a full-fledged credit crisis, which includes all major ingredients for a banking and a systemic crisis spilling over onto several parts of the financial market and onto the real economy. A credit crunch is probably the most costly type of financial crisis, also depending on the efficiency of regulatory bodies, the shape of the economy as a whole, and the health of the banking sector itself." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

The exogenous shock started in earnest in mid-2014, which saw a conjunction of factors: a significant rise in the US dollar that triggered the fall in oil prices, and the unabated rise in the cost of capital. If we were to build another schematic of the current market environment, here is what we think it should look like, to name a few of the issues worth looking at: - source Macronomics

So if you think diversification is a "solid defense" in a world of "positive correlations", think again, because here is what the authors of "Credit Crisis" had to say about LTCM and tail events (rogue waves):

"Even if there are arbitrage opportunities in the sense that two positions that trade at different prices right now will definitely converge at a point in the future, there is a risk that the anomaly will become even bigger. However, typically a high leverage is used for positions that have a skewed risk-return profile, or a high likelihood of a small profit but a very low risk of a large loss.
This equals the risk-and-return profile of credit investments, but also the risk of selling far-out-of-the-money puts on equities. In case a tail event occurs, all risk parameters to manage the overall portfolio are probably worthless, as correlation patterns change dramatically during a crisis. That said, arbitrage trades are not under fire because the crisis has an impact on the long-term risk-and-return profile of the position. However, a crisis might cause a short-term distortion of capital markets, leading to immense mark-to-market losses. If the capital adequacy is not strong enough to offset the mark-to-market losses, forced unwinding triggers significant losses in arbitrage portfolios. The same was true for many asset classes during the summer of 2007, when high-quality structures came under pressure, causing significant mark-to-market losses. Many of these structures did not bear default risk but a huge liquidity risk, and therefore many investors were forced to sell." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

You probably understand by now why we have raised the "red flag" so many times on our fear of the rise in "positive correlations". They do scare us, because they entail larger and larger standard deviation moves and can potentially trigger "rogue waves" which can wipe out even the biggest and most reputable "investment ships" à la MS München. The big question is not if we are in a bubble again but whether this "time it's different". It is not. It's worse, because you have all four types of crisis evolving at the same time. Here is what chapter 5 of "Credit Crisis" tells us about the causes of bubbles:

"A mainstream argument is that the cause of the bubbles is excessive monetary liquidity in the financial system. Central banks flood the market with liquidity to support economic growth, also triggering rising demand for risky assets, causing both good assets and bad assets to appreciate excessively beyond their fundamentally fair valuation. In the long run, this level is not sustainable, while the trigger of the burst of the bubble is again policy shifts of central banks. The bubble will burst when central banks enter a more restrictive monetary policy, removing excess liquidity and consequently causing investors to get rid of risky assets given the rise in borrowing costs on the back of higher interest rates. This is the theory, but what about the practice? The resurfacing discussion about rate cuts in the United States and in Euroland in mid-2005 was accompanied by expectations that inflation would remain subdued. Following this discussion, the impact of inflation on credit spreads returned to the spotlight. An additional topic regarding inflation worth mentioning is that if excess liquidity flows into assets rather than into consumer goods, this argues for low consumer price inflation but rising asset price inflation. In late 2000, the Fed and the European Central Bank (ECB) started down a monetary easing path, which was boosted by external shocks (9/11 and the Enron scandal), when central banks flooded the market with additional liquidity to avoid a credit crunch. Financial markets benefited in general from this excess liquidity, as reflected in the positive performance of almost all asset classes in 2004, 2005, and 2006, which argued for overall liquidity inflows but not for allocation shifts.
It is not only excess liquidity held by investors and companies that underpins strong performing assets in general, but also the pro-cyclical nature of banking. In a low default rate environment, lending activities accelerate, which might contribute to an overheating of the economy accompanied by rising inflation. From a purely macroeconomic viewpoint, private households have two alternatives to allocate liquidity: consuming or saving. The former leads to rising price inflation, whereas the latter leads to asset price inflation." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

Where we slightly differ from the authors' take in terms of liquidity allocation is in the definition of "saving". The "savings glut" view of economists such as Ben Bernanke and Paul Krugman needs to be vigorously rebuked. This incorrect view, put forward by the main culprits to attempt to explain the Great Financial Crisis (GFC), was challenged by economists at the Bank for International Settlements (BIS), particularly in one paper by Claudio Borio entitled "The financial cycle and macroeconomics: What have we learnt?":

"The core objection to this view is that it arguably conflates "financing" with "saving" – two notions that coincide only in non-monetary economies. Financing is a gross cash-flow concept, and denotes access to purchasing power in the form of an accepted settlement medium (money), including through borrowing. Saving, as defined in the national accounts, is simply income (output) not consumed. Expenditures require financing, not saving. The expression "wall of saving" is, in fact, misleading: saving is more like a "hole" in aggregate expenditures – the hole that makes room for investment to take place. … In fact, the link between saving and credit is very loose. For instance, we saw earlier that during financial booms the credit-to-GDP gap tends to rise substantially. This means that the net change in the credit stock exceeds income by a considerable margin, and hence saving by an even larger one, as saving is only a small portion of that income." - source BIS paper, December 2012

Their paper argues that it was unrestrained extensions of credit and the related creation of money that caused the problem, which could have been avoided if interest rates had not been set too low for too long, through a "Wicksellian" approach dear to Charles Gave from Gavekal Research. Borio claims that the problem was that bank regulators did nothing to control the credit booms in the financial sector, which they could have done. We know how that ended before. But, guess what: we have the same problem today and, surprise, it's worse. Look at the issuance levels reached in recent years and the amount of cov-lite loans issued (again...). Look at the misallocation of capital in the Energy sector and its CAPEX bubble. Look at the $9 trillion of debt issued by Emerging Markets corporates. We could go on and on. Now the Fed-induced credit bubble is bursting again. One only has to look at what is happening in credit markets (à la 2007). By the way, financial conditions are tightening globally, and the process started in mid-2014. CCC companies are now shut out of primary markets and default rates will spike. Credit always leads equities... The "savings glut" theory of Ben Bernanke and the Fed is hogwash:

"Asset price inflation, in general, is not a phenomenon which is limited to one specific market but rather has a global impact.
However, there are some specific developments in certain segments of the market, as specific segments are more vulnerable to overshooting than others. Therefore, a strong decline in asset prices affects all risky asset classes due to the reduction of liquidity. This is a very important finding, as it explains the mechanism behind a global crisis. Spillover effects are liquidity-driven and liquidity is a global phenomenon. Against the background of the ongoing integration of the financial markets, spillover effects are inescapable, even in the case where there is no fundamental link between specific market segments. How can we explain decoupling between asset classes during financial crises? During the subprime turmoil in 2007, equity markets held up pretty well, although credit markets got hit hard." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

As a reminder, a liquidity crisis always leads to a financial crisis. That simple, unfortunately. This brings us to some illustrations of the rising instability, worrying price action and formation of "rogue waves" we have been witnessing as of late in many segments of the credit markets. Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. The meteoric rise in US High Yield spreads in the Energy sector is, we think, an illustration of the destructive power of a High Yield "rogue wave": - source Thomson Reuters Datastream (H/T Eric Burroughs on Twitter)

When it comes to the "short gamma" investor crowd, and with Contingent Convertibles aka "CoCos" making the headlines, the velocity of the explosion in spreads has been staggering: - graph source Barclays (H/T TraderStef on Twitter)

When it comes to the unfortunate truth about wider spreads, what the flattening of the credit curve of German banking giant Deutsche Bank is telling you is that its cost of capital is going up; this is what a flattening of a credit curve signals.

Also, the percentage of High Yield bonds trading at distressed levels is at the highest level since 2009 according to S&P data:
2015: 20.1%*
2013: 11.2%
2011: 16.8%
2009: 23.2%
- source H/T Lawrence McDonald - Twitter feed

In our book, a flattening of the High Yield curve is a cause for concern, as illustrated by the one-year point move on the US CDS index CDX HY (High Yield) series 25: - source CMA, part of S&P Capital IQ

This is a sign that the cost of capital is steadily going up. Also, the basis, being the difference between the index and the single names, continues to be as wide as it was during the GFC. A basis going deeper into negative territory is a main sign of stress. We have told you recently that we have been tracking the price action in the credit markets and particularly in the CMBS space. What we are seeing is not good news, to say the least, and is a stark reminder of what we saw unfold back in 2007. On that subject we would like to highlight Bank of America Merrill Lynch's CMBS weekly note from the 12th of February entitled "The unfortunate truth about wider spreads":

"Key takeaways
• We anticipate that spread volatility, liquidity stress and credit tightening will persist. Look for wider conduit spreads.
• While CMBX.BBB- tranche prices fell sharply this week we think further downside exists, particularly in series 6&7.
As investors ponder the likelihood that economic growth may slow and that CRE prices may have risen too quickly (Chart 3), recent CMBX price action indicates that a growing number of investors may have begun to short it, since it is a liquid, levered way to voice the opinion that CRE is considered to be a good proxy for the state of the economy. In the past, this type of activity began with investors shorting tranches that were most highly levered to a deteriorating economy and could fall the most if fundamentals eroded. This includes the lower rated tranches of CMBX.6-8, which, as of last night's close, have seen the prices for their respective BBB-minus and BB tranches fall by 13-17 points for CMBX.6 (Chart 4), 14-20 points for CMBX.7 (Chart 5) and 17-19 points for CMBX.8 (Chart 6) since the beginning of the year. We agree that underwriting standards loosened over the past few years, which, all else equal, could imply loans in CMBX.8 have worse credit metrics compared to either the CMBX.6 or CMBX.7 series. Despite this, and although prices have already fallen considerably, for several reasons we think it makes sense to short the BBB-minus tranche from either CMBX.6 or CMBX.7 instead of the CMBX.8. First, the dollar price of the BBB-minus tranche from CMBX.6 and CMBX.7 is materially higher than that of CMBX.8 (Chart 7). Additionally, although the CMBX.8 does have more loans with IO exposure than series 6 or 7 do, we think this becomes more meaningful when considering maturity defaults. By contrast, the earlier series not only have lower subordination attachment points at the BBB-minus tranche, but they also have more exposure to the retail sector, which could realize faster fundamental deterioration if the economy does contract." - source Bank of America Merrill Lynch

Now, having seen the movie "The Big Short", read the book, and also recently read in Bloomberg about hedge fund pundits thinking about shorting subprime auto loans as the next "big kahuna" trade, we would like to make another suggestion. If you want to make it big, here is what we suggest, à la "Big Short". Given that last week we mentioned that Italian NPLs have now been bundled up into a new variety of CDOs, according to Euromoney's article entitled "Italy's bad bad bank" from February 2016, and that the Italian state guarantees the senior debt of such operations and thinks it is unlikely ever to have to honour the guarantee (as equity and subordinated debt tranches will take the first hit from any shortfall to the price the SPV paid for the loans), maybe you want to find someone stupid enough to sell you protection on the senior tranche of these "new CDOs". In essence, like in the "Big Short", if the whole of the capital structure falls apart, your wager might make a bigger return because of the assumed low probability of such a "tail risk" ever materializing, and it will be cheaper to implement in terms of negative carry than placing a bet on the lower part of the capital structure. This is just a thought of course...

Moving back to the disintegration of the CMBS space, Bank of America Merrill Lynch made some additional interesting points on the fate of Sears and CMBS:

"To this point, Sears's management announced this week that revenues for the year ending January 31, 2016, decreased to about $25.1 billion (Chart 8) and that the company would accelerate the pace of store closings, sell assets and cut costs. Why could CMBX.6 be more negatively impacted by the negative Sears news than some of the other CMBX series?
Among the more recently issued CMBX series (6-9), CMBX.6 has the highest percentage of retail exposure. When we focus solely on CMBX.6 and CMBX.7, which have the highest percentage exposure to retail among the post-crisis series, we see that although the headline exposure to retail properties is similar, CMBX.6 has considerably more exposure to B/C quality malls than CMBX.7 does." - source Bank of America Merrill Lynch

In reality, this is all part of what is known as the overshooting phenomenon.

• The overshooting phenomenon

The overshooting phenomenon is closely related to the bubble theory we discussed earlier through the comments of the two authors of the book "Credit Crisis". The overshooting paper mentioned below in the book is of great interest, as it was written by Rudi Dornbusch, a German economist who worked for most of his career in the United States and who also happened to have had Paul Krugman and Kenneth Rogoff as students:

"Closely linked to the bubble theory, Rudiger Dornbusch's famous overshooting paper set a milestone for explaining "irrational" exchange rate swings and shed some light on the mechanism behind currency crises. This paper is one of the most influential papers written in the field of international economics, while it marks the birth of modern international macroeconomics. Can we apply some of the ideas to credit markets? The major input from the Dornbusch model is not only to better understand exchange rate moves; it also provides a framework for policymakers. This allows us to review the policy actions we have seen during the subprime turmoil of 2007. The background of the model is the transition from fixed to flexible exchange rates, while changes in exchange rates did not simply follow the inflation differentials as previous theories suggest. On the contrary, they proved more volatile than most experts expected they would be. Dornbusch explained this behavior of exchange rates with sticky prices and an unstable monetary policy, showing that overshooting of exchange rates is not necessarily linked to irrational behavior of investors ("herding"). Volatility in FX markets is a necessary adjustment path towards a new equilibrium in the market as a response to exogenous shocks, as the pace of adjustment in the domestic markets is too slow.

The basic idea behind the overshooting model rests on two major assumptions. First, "uncovered interest parity" holds. Assuming that domestic and foreign bonds are perfect substitutes, while international capital is fully mobile (and capital markets are fully integrated), two bonds (a domestic and a foreign one) can only pay different interest rates if investors expect a compensating movement in exchange rates. Moreover, the home country is small in world capital markets, which means that the foreign interest rate can be taken as exogenous. The model assumes "perfect foresight", which argues against traditional bubble theory. The second major equation in the model is the domestic demand for money. Higher interest rates trigger rising opportunity costs of holding money, and hence lower demand for money. On the contrary, an increase in output raises demand for money, while demand for money is proportional to the price level.

In order to explain what overshooting means in this context, we have to introduce additional assumptions. First of all, domestic prices do not immediately follow any impulses from the monetary side; they adjust only slowly over time, which is a very realistic assumption.
Moreover, output is assumed to be exogenous, while in the long run, a permanent rise in money supply causes a proportional rise in prices and in exchange rates. The exogenous shock to the system is now defined as an unexpected permanent increase in money supply, while prices are sticky in the short term. And as output is also fixed, interest rates (on domestic bonds) have to fall to equilibrate the system. As interest-rate parity holds, interest rates can only fall if the domestic currency is expected to appreciate. As the assumption of the model is that in the long run rising money supply must be accompanied by a proportional depreciation, the short-term depreciation in the exchange rate must be larger than the long-term depreciation! That said, the exchange rate must overshoot its long-term equilibrium level. The idea of sticky prices is fully accepted in the current macroeconomic discussion, as it is a necessary assumption to explain many real-world data.

This is exactly what we need to explain the link to the credit market. The basic assumption of the majority of buy-and-hold investors is that credit spreads are mean reverting. Ignoring default risk, spreads move around their fair value through the cycle. Overshooting is only a short-term phenomenon and it can be seen as a buying opportunity rather than the establishment of a lasting trend. This is true, but one should not forget that this is only true if we ignore default risk. This might be a calamitous assumption. Transferring this logic to the first subprime shock in 2007, it is exactly what happened as an initial reaction regarding structured credit investments. For example, investment banks booked structured credit investments in marked-to-model buckets (Level 3 accounting) to avoid mark-to-market losses.

A credit crisis can be the trigger point of overshooting in other markets. This is exactly what we observed during the subprime turmoil of 2007. This is a crucial point, especially from the perspective of monetary policy makers. Providing additional liquidity would mean that there will be further distortions: healing a credit crunch at the cost of overshooting in other markets. Consequently, liquidity injections can be understood as a final hope rather than the "silver bullet" in combating crises. In the context of the overshooting approach, liquidity injections could help to limit some direct effects of credit crises, but they will definitely trigger spillover effects onto other markets. In the end, the efficiency of liquidity injections by central banks depends on the benefit on the credit side compared to the cost in other markets. In any case, it proved not to be the appropriate instrument as a reaction to the subprime crisis in 2007." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

On that subject we would like to highlight again Bank of America Merrill Lynch's CMBS weekly note from the 12th of February entitled "The unfortunate truth about wider spreads":

"As spreads widened over the past few weeks, a significant number of conversations we've had with investors have revolved around the concern that the recent spread widening may not represent a transient opportunity to add risk at wider levels, but instead could represent a new reality earmarked by tighter credit standards, lower liquidity and higher required returns for a given level of risk.
While it may be easy to look at CRE fundamentals and dismiss the recent spread widening as being due to market technicals, it is important to realize that while that may be true today, if investors are pricing in what they expect could occur in the future, there may be some validity to the recent spread moves. As a case in point, given the recent new issue CMBS spread widening, breakeven whole loan spreads have widened substantially over the past two months (Chart 16). Not only do wider whole loan breakeven spreads result in higher coupons to CMBS borrowers, which effectively tightens credit standards, but they can also reduce the profitability of CMBS originators, which may cause some of them to exit the business. As a case in point, this week Redwood Trust, Inc. announced it is repositioning its commercial business to focus solely on investing activities and will discontinue commercial loan originations for CMBS distribution. Marty Hughes, the CEO of Redwood, said: "We have concluded that the challenging market conditions our CMBS conduit has faced over the past few quarters are worsening and are not likely to improve for the foreseeable future. The escalation in the risks to both source and distribute loans through CMBS, as well as the diminished economic opportunity for this activity, no longer make our commercial conduit activities an accretive use of capital."

If, as we wrote last week, CRE portfolio lenders also tighten credit standards, it stands to reason that some proportion of borrowers that would have previously been able to successfully refinance may no longer be able to do so. The upshot is that it appears that we have entered into a phase where it becomes increasingly possible that negative market technicals and less credit availability form a feedback loop that negatively affects CRE fundamentals. To this point, although a continued influx of foreign capital into trophy assets in gateway markets can support CRE prices in certain locations, it won't help CRE prices for properties located in many secondary or tertiary markets. If borrowers with "average" quality properties located away from gateway markets are faced with higher borrowing costs and more stringent underwriting standards, the result may be fewer available proceeds and wider cap rates." - source Bank of America Merrill Lynch

This is another sign that credit will no doubt overshoot to the wide side and that, rest assured, you will see more spillover into other asset classes. Given credit leads equities, you can expect equities to trade "lower" for "longer", we think. Furthermore, Janet Yellen's recent performance is indeed confirming the significant weakening of the Fed "put", as described in Bank of America Merrill Lynch's note:

"With Fed Chair Yellen's Humphrey-Hawkins testimony, in which she stressed the notion that the Fed's decision to raise rates is not on a predetermined course, the probability that the Fed would raise interest rates at its March 2016 meeting plummeted, as did the probability of rate hikes over the next year. During her testimony, however, the Fed Chair mentioned that the current global turmoil could cause the Fed to alter the timing of upcoming rate hikes, not abandon them. As a result, risky asset prices broadly fell and a flight to quality ensued, due to the uncertainty over the timing of future rate hikes, the notion that the Fed put may be further out of the money than was previously anticipated, and the prospect that a growing policy divergence among global central banks could contribute to a U.S. recession.
While delaying the next rate hike may be viewed positively in the sense that it could help keep risk-free rates low, which would allow a greater number of borrowers to either refinance or acquire new properties, we think it is likely that many investors will view it as a canary in the coal mine that presages slower economic growth, more capital market volatility, wider credit spreads and lower asset prices. Ultimately, the framework that has been put in place by regulators over the past few years effectively severely limits banks' collective abilities to provide liquidity during periods of stress. As global economic concerns have increased, investors and dealers alike have become increasingly aware of the extremely limited amount of liquidity available, which has manifested through a surge in liquidity stress measures (Chart 21) and wider spreads across risky asset classes." - source Bank of America Merrill Lynch

When it comes to rising risk, it certainly looks to us, through the "credit lens", that it indeed feels like 2007 and that once again we are heading towards a Great Financial Crisis version 2.0. For us, it's a given. When it comes to the much talked about Kyle Bass "short yuan" case, we would like to offer our views through the lens of the Nash Equilibrium Concept in our next point.

Hayman Capital's Kyle Bass has recently commented on the $34 trillion experiment and his significant currency play against the Chinese currency (a typical old-school Soros type of play, we think). Indirectly related is our HKD peg break idea, which we discussed back in September 2015 in our conversation "HKD thoughts - Strongest USD peg in the world...or most convex macro hedge?", where we indicated that the continued buying pressure on the HKD had led the Hong Kong Monetary Authority to continue to intervene to support its peg against the US dollar. At the time, we argued that the pressure to devalue the Hong Kong dollar was going to increase, particularly due to the loss of competitiveness of Hong Kong versus its peers and in particular Japan, which has seen many Chinese visitors turning out in flocks thanks to the weaker Japanese yen. This yuan trade is of interest to us, as we won the "best prediction" award from the Saxo Bank community in their latest Outrageous Predictions for 2016, with our call for a break in the HKD currency peg as per our September conversation and the additional points made in our recent "Cinderella's golden carriage". We also read with interest Saxo Bank's French economist Christopher Dembik's take on the yuan in his post "The Chinese yuan countdown is on". Overall, we think that if the yuan goes, so could the Hong Kong dollar peg. Therefore, we would like once again to quote the two authors of the book "Credit Crisis" and their Nash equilibrium reasoning, in order to substantiate the probability of this bet paying off:

"Financial panic models are based on the idea of a principal-agent setting: there is a government which is willing to maintain the current exchange rate using its currency reserves. Investors or speculators are building expectations regarding the ability of the government to maintain the current exchange-rate level. As an answer to a speculative attack on the currency, the government will buy its own currency using its currency reserves. There are three possible outcomes in this situation. First, currency reserves are big enough to combat the speculative attack successfully, and the government is able to keep the current exchange rate.
In this case there will be no attack as speculators are rational and able to anticipate the outcome. Second, the reserves of the central bank are not large enough to successfully avert the speculative attack, even if only one speculator is starting the attack. Thus, the attack will occur and will be successful. The government has to adjust the exchange rate. Third, the attack will only be successful if speculators join forces and start to attack the currency simultaneously. In this case, there are two possible equilibria, a "good one" and a "bad one". The good one means the government is able to defend the currency peg, while the bad one means that the speculators are able to force the government to adjust the exchange rate. In this simple approach, the amount of currency reserves is obviously the crucial parameter to determine the outcome, as a low reserve leads to a speculative attack while a high reserve prevents attacks. However, the case of medium reserves, in which a concerted action of speculators is needed, is the most interesting one. In this case, there are two equilibria (based on the concept of the Nash equilibrium): independently of the fundamental environment, both outcomes are possible. If both speculators believe in the success of the attack, and consequently both attack the currency, the government has to abandon the currency peg. The speculative attack would be self-fulfilling. If at least one speculator does not believe in the success, the attack (if there is one) will not be successful. Again, this outcome is also self-fulfilling. Both outcomes are equivalent in the sense of our basic equilibrium assumption (Nash). It also means that the success of an attack depends not only on the currency reserves of the government, but also on the assumption about what the other speculator is doing. This is the interesting idea behind this concept: A speculative attack can happen independently of the fundamental situation. In this framework, any policy actions which refer to fundamentals are not the appropriate tool to avoid a crisis." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

If the amount of currency reserves is indeed the crucial parameter when it comes to assessing the payoff of the Yuan bet, we have to agree with Deutsche Bank's recent House View note from the 9th of February 2016, entitled "Still deep in the woods", that the problems in China remain unresolved:

"The absence of new news has helped divert attention away from China – but the underlying problem remains unresolved
• After surprise devaluation in early January, China has stopped being a source of new bad news
• Currency stable since, though authorities no longer taking cues from market close to set yuan level
• Macro data soft as expected, pointing to a gradual deceleration not a sharp slowdown
• Underlying issue of an overvalued yuan remains unresolved, current policy unsustainable long-term
− At over 2x nominal GDP growth, credit growth remains too high
− FX intervention to counter capital outflows – at the expense of foreign reserves" - source Deutsche Bank

When it comes to the risk of a currency crisis breaking out and the Yuan devaluation happening, as posited by the Nash Equilibrium Concept, it all depends on the willingness of the speculators rather than on the fundamentals, as attacks on the Yuan could indeed become a self-fulfilling prophecy.
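To make the three reserve regimes of the quoted model concrete, here is a minimal sketch in Python (our own illustration, not from the book); the payoff numbers, attack sizes and reserve levels are arbitrary assumptions chosen only to reproduce the low/medium/high cases:

    from itertools import product

    def payoff(a1, a2, reserves, gain=10.0, cost=2.0):
        """Payoffs to the two speculators; a1, a2 are 1 (attack) or 0 (abstain)."""
        firepower = 6 * (a1 + a2)          # each speculator can commit 6 units
        success = firepower > reserves     # attack succeeds only if it beats reserves
        def one(a):
            if a == 0:
                return 0.0
            return gain - cost if success else -cost
        return one(a1), one(a2)

    def pure_nash(reserves):
        """All pure-strategy Nash equilibria for a given reserve level."""
        eqs = []
        for a1, a2 in product([0, 1], repeat=2):
            p1, p2 = payoff(a1, a2, reserves)
            best1 = p1 >= payoff(1 - a1, a2, reserves)[0]   # no profitable deviation
            best2 = p2 >= payoff(a1, 1 - a2, reserves)[1]
            if best1 and best2:
                eqs.append((a1, a2))
        return eqs

    for label, reserves in [("low", 3), ("medium", 9), ("high", 15)]:
        print(label, "reserves:", pure_nash(reserves))

With these assumed numbers the script prints a single "attack" equilibrium for low reserves, the two self-fulfilling coordination equilibria (nobody attacks / everybody attacks) for medium reserves, and a single "no attack" equilibrium for high reserves - exactly the three cases described in the quote.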
This self-fulfilling process is also a major feature of credit crises and a prominent feature of credit markets (CDS), as posited again in Chapter 5 of the book by Dr Jochen Felsenheimer and Philip Gisdakis:

"Self-fulfilling processes are a major characteristic of credit crises and we can learn a lot from the idea presented above. The self-fulfilling process of a credit crisis is that short-term overshooting might end up in a long-lasting credit crunch - assuming that spreads jump initially above the level that we would consider "fundamentally justified", for instance reflected in the current expected loss assumption. That said, the implied default rate is far higher than the current one (e.g., the current forecast of the future default rate from rating agencies or from market participants in general). However, the longer spreads remain at an "overshooting level", the higher the risk that lower-quality companies will encounter funding problems, as liquidity becomes more expensive for them. This can ultimately cause rising default rates at the beginning of the crisis; a majority of market participants refer to it as short-term overshooting. Self-fulfilling processes are a major threat in a credit crisis, as was also the case during the subprime meltdown. If investors think that higher default rates are justified, they can trigger rising default rates just by selling credit-risky assets and causing wider spreads. This is independent from what we could call the fundamentally justified level! The other interesting point is that the assumption of concerted action is not necessary in credit markets to trigger a severe action. If we translate the role of the government (defending a currency peg) into credit markets, we can define a company facing some aggressive investors who can send the company into default. Buying protection on an issuer via Credit Default Swaps (CDS) leads to wider credit spreads for the company, which can be seen as an impulse for the self-fulfilling process described above. If some players are forced to hedge their exposure against a company by buying protection on the name, the same mechanism might be put to work." - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

As we highlighted above with the flattening of the MS München and/or Deutsche Bank curves and the flattening of the CDX HY curve, the flattening trend means that funding costs for many companies are rising across all maturities:

"Such a technically driven concerted action of many players can consequently also cause an impulse for a crisis scenario, as in the case of currency markets in financial panic models" - source Credit Crises, published in 2008, authored by Dr Jochen Felsenheimer and Philip Gisdakis

So there you go, you probably understand by now the disappearance of the MS München due to a conjunction of "Rogue Waves":
"The laws of probability, so true in general, so fallacious in particular." - Edward Gibbon, English historian

And this, dear readers, is the story of VaR in a world of rising "positive correlations" - but we are ranting again... Stay tuned!
My Hidden Weirdness

My definition of the accompanying text (below) is that the text is a fine art picture. So I can frame it and hang it in my salon (a picture), and I can read it like the page of a book (a text). There is nothing special about it in our everyday (classical) world. But when people visit my salon they ask me "where did you get this picture?", and when they take a closer look they discover the text and read... and suddenly they ask me "what is it, a picture or a text on quantum mechanics?"... and my answer is "both"... That's weird... Now, in an oblique manner, the text below is weirder, because in the quantum world the word "both" has a precise meaning: wave and particle, and all that follows from it... but I was told that I am a part of the quantum world... So am I weird?...

Quantum mechanics states that you cannot precisely measure both position and momentum. Just because you can't measure it, doesn't mean it doesn't have position and momentum at the same time. The theory seems based on this principle, but why?

Viktor T. Toth, IT pro, part-time physicist:

No, quantum mechanics does not state that you cannot simultaneously measure both position and momentum precisely. It is a consequence of the theory, but it is not what the theory is based on. Quantum mechanics states that a classical position, classical momentum, or other classical observables do not exist except in the rare cases when the quantum object interacts with something classical (such as an instrument). When you look at the mathematics (and you have to look at the mathematics; quantum mechanics cannot be intuited), something amazing emerges. The formal equations of quantum mechanics, such as the Schrödinger equation, can be "derived" easily from classical physics. However, this equation offers many more solutions than its classical counterpart. Quantum mechanics begins when we look at these solutions and accept them as valid descriptions of reality, despite the fact that they seemingly make no intuitive sense, certainly not in the context of classical physics. Now you may wonder, what on Earth possesses us to go down this rabbit hole? Very simple: physics is based on experiment and observation. And we found that this is how the physical world works. When we look at this much richer world of quantum solutions, we find that indeed, most of the time that particle does not have a classical position or a classical momentum. Moreover, the math tells us, when it is confined to a classical position by a measurement, its classical momentum does not exist; it remains in a superposition of states. So when you think of an electron inside a cathode ray tube, going from the cathode to the screen while mysteriously going through two holes at the same time, and ask yourself, "What was the electron's path?", unfortunately the only legitimate answer sounds just as mysterious as the little boy telling Neo in the film The Matrix that there is no spoon: There is no (classical) path. It's not that we cannot measure it. It truly does not exist. And whether we like it or not, that's the way Nature works. But there is one advantage that we have over a piece of fiction like The Matrix: our outlandish statement is grounded in firm mathematics that leads to testable predictions, through which our outlandish claims can be (and have been, countless times) verified and validated.
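As a small numerical aside (our addition, not part of the quoted answer), one can check the mathematics behind the uncertainty statement: a Gaussian wavepacket saturates the Heisenberg bound Δx·Δp = ħ/2. The grid size and packet width below are arbitrary assumptions:

    import numpy as np

    hbar = 1.0                                   # natural units
    sigma = 0.7                                  # packet width (assumed value)
    N = 4096
    x = np.linspace(-20.0, 20.0, N, endpoint=False)
    dx = x[1] - x[0]

    # Normalized Gaussian wavefunction in position space
    psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))
    prob_x = np.abs(psi) ** 2
    delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)        # <x> = 0 by symmetry

    # Momentum-space amplitude via FFT; bin spacing dp = 2*pi*hbar / (N*dx)
    p = 2.0 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
    phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi * hbar)
    prob_p = np.abs(phi) ** 2
    dp = 2.0 * np.pi * hbar / (N * dx)
    delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)        # <p> = 0 by symmetry

    print(delta_x * delta_p / hbar)   # ~0.5: the minimum-uncertainty product

Any non-Gaussian packet gives a strictly larger product, which is the quantitative content of "you cannot precisely measure both position and momentum".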
8.2: The Wavefunctions

The solutions to the hydrogen atom Schrödinger equation are functions that are products of a spherical harmonic function and a radial function.

\[ \psi_{n, l, m_l}(r, \theta, \varphi) = R_{n,l}(r) \, Y^{m_l}_l(\theta, \varphi) \label{8-20}\]

The wavefunctions for the hydrogen atom depend upon the three variables \(r\), \(\theta\), and \(\varphi\) and the three quantum numbers \(n\), \(l\), and \(m_l\). The variables give the position of the electron relative to the proton in spherical coordinates. The absolute square of the wavefunction, \(| \psi (r, \theta, \varphi)|^2\), evaluated at \(r\), \(\theta\), and \(\varphi\) gives the probability density of finding the electron inside a differential volume \(d\tau\), centered at the position specified by \(r\), \(\theta\), and \(\varphi\).

Exercise \(\PageIndex{1}\)

What is the value of the integral

\[ \int\limits_{\text{all space}} | \psi (r, \theta, \varphi)|^2 \, d\tau \, ?\]

The quantum numbers have names: \(n\) is called the principal quantum number, \(l\) is called the angular momentum quantum number, and \(m_l\) is called the magnetic quantum number because (as we will see in Section 8.4) the energy in a magnetic field depends upon \(m_l\). Often \(l\) is called the azimuthal quantum number because it is a consequence of the \(\theta\)-equation, where \(\theta\) is the angle measured from the zenith. These quantum numbers have specific values that are dictated by the physical constraints or boundary conditions imposed upon the Schrödinger equation: \(n\) must be an integer greater than 0, \(l\) can have the values 0 to \(n-1\), and \(m_l\) can have \(2l + 1\) values ranging from \(-l\) to \(+l\) in unit or integer steps. The values of the quantum number \(l\) usually are coded by a letter: s means 0, p means 1, d means 2, f means 3; the next codes continue alphabetically (e.g., g means \(l = 4\)). The quantum numbers specify the quantization of physical quantities. The discrete energies of different states of the hydrogen atom are given by \(n\), the magnitude of the angular momentum is given by \(l\), and one component of the angular momentum (usually chosen by chemists to be the z-component) is given by \(m_l\). The total number of orbitals with a particular value of \(n\) is \(n^2\).

Exercise \(\PageIndex{2}\)

Consider several values for \(n\), and show that the number of orbitals for each \(n\) is \(n^2\).

Exercise \(\PageIndex{3}\)

Construct a table summarizing the allowed values for the quantum numbers \(n\), \(l\), and \(m_l\) for energy levels 1 through 7 of hydrogen.

Exercise \(\PageIndex{4}\)

The notation 3d specifies the quantum numbers for an electron in the hydrogen atom. What are the values for \(n\) and \(l\)? What are the values for the energy and angular momentum? What are the possible values for the magnetic quantum number? What are the possible orientations for the angular momentum vector?

The hydrogen atom wavefunctions, \(\psi (r, \theta, \varphi)\), are called atomic orbitals. An atomic orbital is a function that describes one electron in an atom. The wavefunction with \(n = 1\), \(l = 0\), and \(m_l = 0\) is called the 1s orbital, and an electron that is described by this function is said to be "in" the 1s orbital, i.e., to have a 1s orbital state.
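A quick way to verify this bookkeeping (and to answer Exercise \(\PageIndex{2}\)) is to enumerate the allowed \((n, l, m_l)\) triples directly; the following short Python sketch is our illustration, not part of the original text:

    # For each n, l runs over 0..n-1 and m_l over -l..+l, giving n**2 orbitals
    for n in range(1, 8):
        orbitals = [(n, l, m) for l in range(n) for m in range(-l, l + 1)]
        assert len(orbitals) == n**2
        print(f"n={n}: {len(orbitals)} orbitals (n^2 = {n**2})")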
The constraints on \(n\), \(l\), and \(m_l\) that are imposed during the solution of the hydrogen atom Schrödinger equation explain why there is a single 1s orbital, why there are three 2p orbitals, five 3d orbitals, etc. We will see when we consider multi-electron atoms in Chapter 9 that these constraints explain the features of the Periodic Table. In other words, the Periodic Table is a manifestation of the Schrödinger model and the physical constraints imposed to obtain the solutions to the Schrödinger equation for the hydrogen atom.

Visualizing the variation of an electronic wavefunction with \(r\), \(\theta\), and \(\varphi\) is important because the absolute square of the wavefunction depicts the charge distribution (electron probability density) in an atom or molecule. The charge distribution is central to chemistry because it is related to chemical reactivity. For example, an electron-deficient part of one molecule is attracted to an electron-rich region of another molecule, and such interactions play a major role in chemical interactions ranging from substitution and addition reactions to protein folding and the interaction of substrates with enzymes.

Visualizing wavefunctions and charge distributions is challenging because it requires examining the behavior of a function of three variables in three-dimensional space. This visualization is made easier by considering the radial and angular parts separately, but plotting the radial and angular parts separately does not reveal the shape of an orbital very well. The shape can be revealed better in a probability density plot. To make such a three-dimensional plot, divide space up into small volume elements, calculate \(\psi^* \psi\) at the center of each volume element, and then shade, stipple or color that volume element in proportion to the magnitude of \(\psi^* \psi\). Do not confuse such plots with polar plots, which look similar. Probability densities also can be represented by contour maps, as shown in Figure \(\PageIndex{1}\).

Figure \(\PageIndex{1}\): Contour plots in the x-y plane for the 2px and 3px orbitals of the hydrogen atom. The plots map lines of constant values of \(R(r)^2\); red lines follow paths of high \(R(r)^2\), blue for low \(R(r)^2\). The angular function used to create the figure was a linear combination of two spherical harmonic functions (see Problem 10 at the end of this chapter).

Another representational technique, virtual reality modeling, holds a great deal of promise for representation of electron densities. Imagine, for instance, being able to experience electron density as a force or resistance on a wand that you move through three-dimensional space. Devices such as these, called haptic devices, already exist and are being used to represent scientific information. Similarly, wouldn't it be interesting to "fly" through an atomic orbital and experience changes in electron density as color changes or cloudiness changes? Specially designed rooms with 3D screens and "smart" glasses that provide feedback about the direction of the viewer's gaze are currently being developed to allow us to experience such sensations.

Methods for separately examining the radial portions of atomic orbitals provide useful information about the distribution of charge density within the orbitals. Graphs of the radial functions, \(R(r)\), for the 1s, 2s, and 2p orbitals are plotted in Figure \(\PageIndex{2}\).

Figure \(\PageIndex{2}\): Radial function, R(r), for the 1s, 2s, and 2p orbitals.
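As an aside, the volume-element recipe described above can be sketched in a few lines of Python (our illustration; for concreteness we use the unnormalized 2p\(_z\) form \(z e^{-r/2}\), in Bohr-radius units, on an x-z slice):

    import numpy as np

    grid = np.linspace(-10, 10, 81)              # cell centres, in Bohr radii
    X, Z = np.meshgrid(grid, grid)               # an x-z slice through y = 0
    R = np.sqrt(X**2 + Z**2)
    psi = Z * np.exp(-R / 2.0)                   # unnormalized 2p_z on the slice
    density = psi**2                             # psi* psi in each volume element

    # Shade each cell in proportion to density, e.g. with matplotlib:
    # import matplotlib.pyplot as plt; plt.contourf(X, Z, density); plt.show()
    print(density.max())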
The 1s function in Figure \(\PageIndex{2}\) starts with a high positive value at the nucleus and exponentially decays to essentially zero after 5 Bohr radii. The high value at the nucleus may be surprising, but as we shall see later, the probability of finding an electron at the nucleus is vanishingly small.

Next notice how the radial function for the 2s orbital, Figure \(\PageIndex{2}\), goes to zero and becomes negative. This behavior reveals the presence of a radial node in the function. A radial node occurs when the radial function equals zero other than at \(r = 0\) or \(r = \infty\). Nodes and limiting behaviors of atomic orbital functions are both useful in identifying which orbital is being described by which wavefunction. For example, all of the s functions have non-zero wavefunction values at \(r = 0\), but p, d, f and all other functions go to zero at the origin. It is useful to remember that there are \(n-1-l\) radial nodes in a wavefunction, which means that a 1s orbital has no radial nodes, a 2s has one radial node, and so on.

Exercise \(\PageIndex{5}\)

Examine the mathematical forms of the radial wavefunctions. What feature in the functions causes some of them to go to zero at the origin while the s functions do not go to zero at the origin?

Exercise \(\PageIndex{6}\)

What mathematical feature of each of the radial functions controls the number of radial nodes?

Exercise \(\PageIndex{7}\)

At what value of \(r\) does the 2s radial node occur?

Exercise \(\PageIndex{8}\)

Make a table that provides the energy, number of radial nodes, and the number of angular nodes and total number of nodes for each function with n = 1, 2, and 3. Identify the relationship between the energy and the number of nodes. Identify the relationship between the number of radial nodes and the number of angular nodes.

The quantity \(R(r)^* R(r)\) gives the radial probability density; i.e., the probability density for the electron to be at a point located the distance \(r\) from the proton. Radial probability densities for three types of atomic orbitals are plotted in Figure \(\PageIndex{3}\).

Figure \(\PageIndex{3}\): Radial probability densities for the 1s, 2s, and 2p orbitals.

When the radial probability density for every value of \(r\) is multiplied by the area of the spherical surface represented by that particular value of \(r\), we get the radial distribution function. The radial distribution function gives the probability density for an electron to be found anywhere on the surface of a sphere located a distance \(r\) from the proton. Since the area of a spherical surface is \(4 \pi r^2\), the radial distribution function is given by \(4 \pi r^2 R(r)^* R(r)\). Radial distribution functions are shown in Figure \(\PageIndex{4}\). At small values of \(r\), the radial distribution function is low because the small surface area for small radii modulates the high value of the radial probability density function near the nucleus. As we increase \(r\), the surface area associated with a given value of \(r\) increases, and the \(r^2\) term causes the radial distribution function to increase even though the radial probability density is beginning to decrease. At large values of \(r\), the exponential decay of the radial function outweighs the increase caused by the \(r^2\) term and the radial distribution function decreases.

Figure \(\PageIndex{4}\): The radial distribution function for the 1s, 2s, and 2p orbitals.

Exercise \(\PageIndex{9}\)

Write a qualitative comparison of the radial function and radial distribution function for the 2s orbital.
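For readers who want to check the node-counting rule (and Exercise \(\PageIndex{7}\)) symbolically, here is a short sketch; it is our addition and assumes SymPy's sympy.physics.hydrogen.R_nl is available (distances in Bohr radii, Z = 1):

    import sympy as sp
    from sympy.physics.hydrogen import R_nl

    r = sp.symbols('r', positive=True)

    # Radial node of the 2s orbital: solve R_{2,0}(r) = 0 for r > 0
    print(sp.solve(sp.Eq(R_nl(2, 0, r, 1), 0), r))    # [2], i.e. r = 2 Bohr radii

    # Count radial nodes for the first shells and compare with n - 1 - l
    for n, l in [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)]:
        nodes = [s for s in sp.solve(sp.Eq(R_nl(n, l, r, 1), 0), r) if s.is_positive]
        print(f"{n}{'spdf'[l]}: {len(nodes)} radial node(s); rule gives {n - 1 - l}")

The first printed line places the 2s radial node at \(r = 2\) Bohr radii, and the loop reproduces the \(n-1-l\) rule quoted above.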
See Figure \(\PageIndex{5}\).

Figure \(\PageIndex{5}\): Comparison of (a) the radial distribution function and (b) the radial probability density for the 2s orbital.
5 Jun / 2017

Welcome To Mathematics In Cambridge (2)

The Department of Mathematics invites applications for a position at the Assistant or Associate Professor level in the area of Algebra broadly construed, including algebraic geometry, number theory, and connections to combinatorics. Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Evidence for more advanced mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, and the recording of time. Planet Math is an online mathematics encyclopedia under construction, specializing in modern mathematics. Pure mathematics is abstract and based in theory, and is thus not constrained by the limitations of the physical world. Discrete mathematics is the mathematical language of computer science, as it includes the study of algorithms. Advanced study of mechanics includes quantum mechanics and relativity, covering subjects such as electromagnetism, the Schrödinger equation, the Dirac equation and its transformation properties, the Klein-Gordon equation, pair production, gamma matrix algebra, equivalence transformations and negative energy states. Teaching mathematics usually requires completing a postgraduate qualification in teaching, though this depends on the level and type of institution you teach at. Duties will involve teaching students, creating lesson plans, assigning and correcting homework, managing students in the classroom, communicating with students and parents, and helping students prepare for standardized testing.
Top 10 physicists of all time
6:04 am 7 May, 2012

Physics deals with analyzing all of nature to help us better understand the universe that we live in. Besides being one of the oldest quests of man, knowing and understanding how things work is a basic need of the world as it is today. Physicists everywhere have always tried to answer the questions of how and when our universe started off and what really makes it tick. Great theorists, experimentalists and thinkers have molded the shape of our world as it exists today with the help of their theories and experiments. Although most people wouldn't think of it, a lot of the technology that comes alive is given birth to by theorists who make breakthroughs through observation and calculation. Let us now look at the top 10 physicists of all time and at their contributions in shaping the macro and the micro world in which we so casually live.

10. Werner Heisenberg: Werner Heisenberg was a German physicist who made significant contributions to many areas of physics. His theoretical work makes up the base for one of the most important cogs in quantum theory, the uncertainty principle. He was born on the 5th of December 1901 and died on the 1st of February 1976. For his contributions to physics he was awarded the Nobel Prize in 1932. Besides pioneering the work behind the uncertainty principle, he also made significant contributions in the areas of nuclear physics, particle physics and quantum field theory.

9. Ernest Rutherford: Ernest Rutherford is known to the world as the father of nuclear physics. He was born in New Zealand on the 30th of August 1871 and died on the 19th of October 1937 in Cambridge, England. His contribution to chemistry is also significant, and he received the Nobel Prize for chemistry in 1908. His gold foil experiment revealed the existence of the atomic nucleus, he later achieved the first artificial splitting of the atom, and he also discovered one of the basic subatomic particles, namely the proton.

8. Paul Dirac: Paul Dirac was an English theoretical physicist who laid the foundations for more than one branch of physics. In his lifetime, which spanned from 8 August 1902 to 20 October 1984, he held many positions across many esteemed universities around the world. He shared the 1933 Nobel Prize in physics with Erwin Schrödinger for the discovery of new productive forms of atomic theory. He also formulated a famous equation known as the Dirac equation and contributed significantly to the prediction of antimatter.

7. Erwin Schrödinger: Erwin Schrödinger was an Austrian physicist and is widely considered one of the founding fathers of quantum mechanics. His most famous work is the Schrödinger equation, which describes how the state of a physical system evolves with time. He was awarded the Nobel Prize for physics in 1933. He also proposed the famous cat thought experiment, one of the most elaborate and celebrated thought experiments in all of science. He was also awarded the Max Planck Medal in 1937, which is awarded for outstanding contributions in theoretical physics.

6. Richard Feynman: Richard Feynman was born on the 11th of May 1918 and died on the 15th of February 1988 in the United States.
He was one of the best known scientists and theoretical physicists of his time and is also rated, by wide consensus, as one of the top ten physicists of all time. The most significant of his contributions are in the areas of particle physics and quantum electrodynamics, and he also laid foundation stones for quantum computing and nanotechnology. He shared the Nobel Prize for physics with two other scientists for his contributions to the development of quantum electrodynamics.

5. James Clerk Maxwell: James Clerk Maxwell was a Scottish physicist who is best known for his work in the area of classical electromagnetic theory. His work helped unite electricity, magnetism and optics into a single theory through a set of equations that are known as Maxwell's equations. This work is one of the most significant contributions to physics as we know it. He is also credited, along with Boltzmann, with laying the foundations of statistical mechanics. He was born on the 13th of June 1831 and died on the 5th of November 1879.

4. Michael Faraday: Faraday was a British-born scientist, chemist and physicist who lived from 22 September 1791 to 25 August 1867. He introduced the concept of the magnetic field and discovered electromagnetic induction. Some of his experiments in physics and chemistry are regarded as the most groundbreaking work ever done in a lab anywhere and anytime. Although he did not have much higher education, he was highly intuitive and his powers of observation were unreal.

3. Max Planck: Max Planck was a German physicist who is known as the father of quantum theory, which initiated a revolution in physics as it existed. He was awarded the Nobel Prize for his contributions to quantum physics in 1918. He was a highly respected physicist in his lifetime, and one of the greatest honors in theoretical physics, the Max Planck Medal, is named after his excellent contributions. One of his most important contributions was Planck's constant, one of the most important fundamental constants in theoretical physics.

2. Isaac Newton: Isaac Newton was an English scientist who is considered one of the greatest men of science to have walked this earth. His contributions to the many branches of science that he delved into need no introduction. His work in physics lays the foundation for classical mechanics, and he formulated a theory that seemed to explain how our universe worked. He was also among the first to show that gravity is a universal force and that the earth is not the center of the universe.

1. Albert Einstein: German-born Albert Einstein is one of the most well known names in science. He produced one of the most famous equations of all time, the equivalence of mass and energy, otherwise simply known as E = mc². He is often known as the father of modern physics, as his contributions in the field of relativity have shaped the way modern physics has come to be. His discoveries and theories were way ahead of his time and it took the world a good number of years to understand them. Even today we are still delving into the wonderful possibilities that Einstein's theories opened up to the world.
Is Spacetime Fractal and Quantum Coherent in the Golden Mean?

By Mae-Wan Ho (α), Mohamed el Naschie (σ) & Giuseppe Vitiello (ρ)

Reproduced from: Global Journal of Science Frontier Research: A Physics and Space Science, Volume 15, Issue 1, Version 1.0, Year 2015. Type: Double Blind Peer Reviewed International Research Journal. Publisher: Global Journals Inc. (USA)

Author α: Institute of Science in Society, Edge Institute International, London UK and Italy. Author σ: Alexandria University, Egypt. Author ρ: University of Salerno, Edge Institute International, London UK and Italy.

Abstract: We consider the fabric of spacetime from a wide perspective: from mathematics, quantum physics, far from equilibrium thermodynamics, biology and neurobiology. It appears likely that spacetime is fractal and quantum coherent in the golden mean. Mathematically, our fractal universe is non-differentiable and discontinuous, yet dense in the infinite dimensional spacetime. Physically, it appears to be a quantum coherent universe consisting of an infinite diversity of autonomous agents all participating in co-creating organic, fractal spacetime by their multitudinous coupled cycles of activities. Biologically, this fractal coherent spacetime is also the fabric of conscious awareness mirrored in the quantum coherent golden mean brain states.

Keywords: Whitehead's philosophy, discontinuous non-differentiable spacetime, fractals, coupled activity cycles, deterministic chaos, quantum coherence and fractals, golden mean.

GJSFR-A Classification: FOR Code: 020699, 020109

© 2015. Mae-Wan Ho, Mohamed el Naschie & Giuseppe Vitiello. This is a research/review paper, distributed under the terms of the Creative Commons Attribution-Noncommercial 3.0 Unported License, permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

I. Introduction

Is spacetime fractal and quantum coherent in the golden mean? This is too deep and fundamental a question to be answered definitively on the basis of our present knowledge. However, one can try to get a good idea of the properties of spacetime from a number of mathematical and physical features emerging from research activities in many fields, including mathematics, quantum physics, far from equilibrium thermodynamics, biology and neurobiology, which also hark back to some basic problems in philosophy. Our attempt in this paper is to bring together the relevant observations and findings that support our speculation concerning the basic "fabric" of spacetime, without claiming to be complete. In so doing, we connect the ubiquitous recurrence of the golden mean and of fractal self-similarity in microscopic and macroscopic phenomena with coherent state dynamics at the quantum level and its effects at the macroscopic level. We start with the nature of spacetime as perceived by Alfred North Whitehead (1861-1947) and others.

II. Real Processes do not Happen at Points in a Spacetime Continuum

As one of us (MWH) has commented [1], Whitehead lived through an exciting era in Western science when the fabric of physical reality – Newton's flat, smooth, and static absolute spacetime – was being thoroughly ruffled by Albert Einstein's theories of special and general relativity and by quantum mechanics.
The modern observer no longer views nature from the outside, being irreducibly quantum-entangled with the known, possibly with all entities in the entire universe. These surprising lessons from nature became the basis of Whitehead's perennial philosophy (a cosmogony) [2], ushering in a new age of the organism that inspired generations of scientists. He saw the universe as a super-organism encompassing organisms on every scale from galaxies to elementary particles, and argued it is only possible to know and understand nature both as an organism and with the sensitivity of an organism. Most important, though least understood, was his rejection of the mechanical laws of classical physics and of differential calculus for their failure to describe real processes. Not only do they leave out of account the all-important knowing, experiencing organism; real processes also occur in intervals of time associated with volumes of space. Absolutely nothing can happen at a point in an instant. Rather than a smooth, infinitely divisible continuum, spacetime is more likely discrete and discontinuous, and hence non-differentiable, as quantum physics has already discovered at the smallest scale.

Unfortunately, mathematics had lagged behind physics. Both relativity theory and quantum theory inherited the predominant mathematics of classical mechanics. Roger Penrose's The Road to Reality, A Complete Guide to the Laws of the Universe [3] charts the heroic and ingenious efforts of mathematical physicists to grasp hold of the post-Newtonian universe, which in the end they failed to do. The dream of uniting the two great theories of quantum physics and general relativity has remained unfulfilled, not least because these two modern theories are both based on the foundation of classical physics: a differentiable, continuous spacetime manifold.

The issue of continuity versus discontinuity of spacetime did not originate in Newtonian mechanics. It can be traced back to ancient Greek philosophy, especially Zeno's paradox of Achilles and the tortoise [4] (see Appendix 1). It is generally thought that Newton and Leibniz had both resolved Zeno's paradox with differential calculus, by inventing infinitesimal space and time intervals. Whitehead [2] and Henri Bergson [5] were among those who would not have accepted this 'resolution'. Zeno's paradox was about the impossibility of motion as represented by an infinite sequence of static configurations of matter at points in time. Whitehead said in his Concept of Nature ([6] p. 15): "There is no holding nature still and looking at it." The absolute, infinitely divisible time and space of Newtonian physics are both abstractions from the ever flowing events of nature. He concurred with Bergson [5] in using the concept of 'duration' ([6], p. 53) for an interval of time experienced by the knower as a simultaneity encompassing 'a complex of partial events'. We shall show later that 'duration' and 'simultaneity' can be given very specific meanings in terms of characteristic times of processes and coherence times.

III. Mathematics of Discontinuity

The Cantor set, discovered by Georg Cantor in 1883 [7], is fundamental for discontinuous mathematics. Its apparently paradoxical nature is that it is infinitely sub-divisible, but is completely discontinuous and nowhere dense. It is also a fractal, with self-similar patterns on every scale [8].
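As an informal illustration (ours, not from the original paper), a few lines of Python generate the middle-thirds Cantor construction and exhibit its self-similarity dimension log 2/log 3:

    from math import log

    def cantor_level(intervals):
        """One construction step: keep the outer thirds of every interval."""
        out = []
        for a, b in intervals:
            third = (b - a) / 3.0
            out.append((a, a + third))
            out.append((b - third, b))
        return out

    level = [(0.0, 1.0)]
    for k in range(6):
        total = sum(b - a for a, b in level)
        print(f"level {k}: {len(level)} intervals, total length {total:.4f}")
        level = cantor_level(level)

    print(log(2) / log(3))    # 0.6309..., the self-similarity (Hausdorff) dimension

The total length shrinks as (2/3)^k towards zero while the number of pieces doubles at every step: infinitely many points survive, yet the set is nowhere dense, which is precisely the paradoxical character described above.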
The mathematics of non-differentiable and discontinuous spaces is among the most significant discoveries/inventions beginning with Cantor in the late 19th century, though it did not really take off until well into the 20th century, reaching its peak in the science and mathematics of complexity associated especially with Benoit Mandelbrot's fractals [9] and Edward Lorenz's deterministic chaos [10],[11]. We provide informal definitions of some mathematical terms that will be used in this paper in Appendix 2 (from [1]); some, like deterministic chaos, have no generally agreed definition.

IV. Continuous Non-Differentiable Fractal Spacetime

Garnet Ord was the first to propose a fractal spacetime and to coin the term for it [12]. His starting point was Richard Feynman's observation [13] that, when they are looked at on a sufficiently fine scale, the paths of quantum mechanical particles are non-differentiable curves rather than straight lines. Moreover, relativistic interaction with particles at sufficiently high energies produces non-conserved particle numbers. These and other anomalies have encouraged quantum theorists to abandon the concept of a point particle and its trajectory in favour of wavepackets or field excitations. Feynman's formulation in terms of path integrals was an exception. In the same spirit, Ord set out to construct a continuous trajectory in spacetime that exhibits features analogous to those in relativistic quantum mechanics. He came up with a fractal trajectory exemplified by a Peano-Moore curve (Figure 1). It is plane-filling, with a fractal (Hausdorff) dimension of 2 instead of the dimension of 1 of the classical linear path.

Figure 1: A particle's fractal trajectory at increasing resolution, at scale (a): s = λ, (b): s = λ/3, (c): s = λ/9 (from [13])

Ord showed that, among other things, such a fractal trajectory exhibits both an uncertainty principle and a de Broglie relation. On a microscopic scale, the presence of fractal time is interpreted in terms of the appearance of particle-antiparticle pairs when observation energies are of the order of mc². On a macroscopic scale greater than the fractal wavelength, the free 'fractalons' appear to move on a classical one-dimensional trajectory.

A more elaborate scale-relativity theory of fractal spacetime was proposed by Laurent Nottale [14], who was motivated by "the failure of a large number of attempts to understand the quantum behavior in terms of standard differentiable geometry" to look for a possible "quantum geometry", i.e., fractals [15] (pp. 4-5). His theory recovers quantum mechanics as mechanics on a non-differentiable spacetime, and the Schrödinger equation is demonstrated as a geodesic equation.

Ord and Nottale have both proposed fractal spacetimes that are continuous and non-differentiable. A more radical cosmology proposed by one of us (MeN) is a fractal spacetime that is based on the golden mean and is neither continuous nor differentiable (see below). It is closest to our intuitive notion of organic (as opposed to mechanical) spacetime, and more intimately connected to our emerging understanding of biological spacetime [1].

V. E∞ Spacetime and Our 4-Dimensional Universe

MeN's work was first reviewed by MWH [1], who provided a simplified account that we recapitulate here. E∞ (E-infinity) is an infinite dimensional fractal spacetime.
Yet its Hausdorff dimension is 4.236067977… This means that at ordinary scales it looks and feels 4-dimensional (three of space and one of time), with the remaining dimensions 'compacted' in the 0.236067977 "fuzzy tail" [16]. One can imagine such a universe as a four-dimensional hypercube with further four-dimensional hypercubes nested inside like Russian dolls [17] (see Figure 2). The exact Hausdorff dimension of the infinite dimensional hypercube is 4+ϕ³, where ϕ = (√5−1)/2 is the golden mean. The dimension 4+ϕ³ can be expressed as the following self-similar continued fraction, which converges to precisely 4+ϕ³:

4+ϕ³ = 4 + 1/(4 + 1/(4 + 1/(4 + …)))

Figure 2: A Euclidean representation of E∞ fractal spacetime as an infinite sequence of nested four-dimensional hypercubes (redrawn from [16])

The 4-dimensional hypercube is the Euclidean representation of the E-infinity universe. It is a challenge to represent E∞ in its proper non-Euclidean form. Another mathematical property of the Cantor set is that its cardinality (number of points or elements) is exactly the same as that of the original continuous line. Thus, the Cantor set is a perfect compromise between the discrete and the continuum; it is a discrete structure that has the same number of elements as the continuum. Mathematically, the E∞ universe is a random Cantor set extended to infinite dimensions. Remarkably, the Hausdorff dimension of this infinite extension is no larger than 4+ϕ³. Figure 3 illustrates the steps involved in deriving the E∞ universe.

Recall that the standard Cantor set is obtained by dividing the unit interval into three parts and removing the middle part, leaving the end points. Then do the same to each of the two remaining parts. We repeat the process infinitely often, and in the end there remain only isolated points, or 'Cantor dust'. The Menger-Urysohn dimension is 0, but the Hausdorff dimension is log 2/log 3.

Figure 3: Fractals and dimensions in the derivation of E-infinity spacetime

If however at each step we remove not necessarily the middle section but any one of the three chosen at random, then again we are left with isolated points, but now the Hausdorff dimension is the golden mean ϕ = (√5 − 1)/2 = 0.61803398… This important result, proven by Mauldin and Williams in 1986 [18], is what makes it possible to derive the E∞ universe with all its remarkable properties, as we shall see.

We now construct the higher dimensional random Cantor spaces [16] (see Figure 3). The 2-dimensional version is the Sierpinski gasket, with Hausdorff dimension 1/ϕ ≈ 1.61803398; the 3-dimensional version is the Menger sponge, with Hausdorff dimension 2 + ϕ ≈ 2.61803398. These are both well-known geometric shapes. The 4-dimensional version, with Hausdorff dimension 4 + ϕ³ ≈ 4.2360679, is given only an artist's representation. The 4-dimensional version is the same as the E∞ universe constructed from an infinite number of random Cantor sets, as will be made clear later. Note that the diagram representing the 4-dimensional Cantorian spacetime is space-filling with smaller and smaller spheres. This space-filling property makes the Cantorian spacetime non-differentiable and discontinuous, yet dense everywhere in spacetime. It recalls the quasi-periodic Penrose tiling in 2 dimensions of Euclidean (flat) space, where the golden mean is key (see [19] and references therein). Branching processes based on the golden mean are also space-filling [20], as are spiral leaf arrangement patterns with the golden angle between successive leaf primordia (see [21]).
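The continued fraction can be verified numerically in a few lines of Python (our illustration, not part of the original paper):

    from math import sqrt

    phi = (sqrt(5) - 1) / 2       # golden mean, ~0.6180339887

    x = 4.0
    for _ in range(40):           # iterate x -> 4 + 1/x towards the fixed point
        x = 4.0 + 1.0 / x

    print(x)                      # 4.23606797749979
    print(4 + phi**3)             # the same value, 4 + phi^3
    print(2 + sqrt(5))            # again the same: 4 + phi^3 = 2 + sqrt(5)

The fixed point of x → 4 + 1/x solves x² − 4x − 1 = 0, whose positive root is 2 + √5 = 4 + ϕ³; this is why the continued fraction converges to the E-infinity dimension.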
MeN has presented several different formal derivations of E∞ spacetime; we give here the simplest [16],[17], which is based on the mathematical properties of Borel sets, to which Cantor sets belong. The expectation value of the Hausdorff dimension of the Cantor set extended to infinity is simply a sum over n, for n = 0 to n = ∞, of n multiplied by the Hausdorff dimension of the random Cantor set raised to the power n:

⟨dc⟩ = Σ (n=0 to ∞) n (dc(0))ⁿ

where the superscript in dc(0) refers to the Menger-Urysohn dimension of the random Cantor set, which is 0, while the corresponding Hausdorff dimension dc(0) is ϕ. Summing up the infinite number of terms gives the answer 4 + ϕ³ exactly, as follows:

Σ (n=0 to ∞) n ϕⁿ = ϕ/(1−ϕ)² = 1/ϕ³ = 4 + ϕ³

The intersection rule of sets, the 'bijection formula', relates the Menger-Urysohn dimension to the Hausdorff dimension. It shows that we can lift dc(0) to any Menger-Urysohn dimension n to arrive at the correct Hausdorff dimension dc(n) as follows:

dc(n) = (1/dc(0))^(n−1)    (3)

Taking dc(0) = ϕ and lifting to n = 4 dimensions gives

dc(4) = (1/ϕ)³ = 4 + ϕ³

Thus the expectation value of the Hausdorff dimension of the E∞ universe is the same as that of a universe with a Menger-Urysohn dimension of 4. That is why E∞ is a hierarchical universe that looks and feels 4-dimensional.

VI. How E∞ Relates to Penrose Tiling and the Fibonacci Sequence

The E∞ universe connects with Penrose tiling and the Fibonacci sequence (see [19]) through the golden mean [17],[21]. The golden mean is an irrational number, and like any irrational number it can be approximated by a fraction; for example, 22/7 is quite close to π. The usual way of obtaining such an approximation, using continued fractions, converges more slowly for ϕ than for any other number, and in this sense ϕ is the most irrational number there is.

In Noncommutative Geometry [23], Alain Connes identified Penrose's fractal tiling as a mathematical quotient space (a space of points 'glued together' by an equivalence relationship), with the dimensional function

D(a, b) = a + bϕ

where a, b are integers and ϕ = (√5 − 1)/2. Writing Dn(an, bn), where both an and bn satisfy the Fibonacci recurrence relation xn+2 = xn + xn+1, and starting with D0 = D(0, 1) and D1 = D(1, 0), the following dimensional hierarchy is obtained:

D0 = ϕ, D1 = 1, D2 = 1 + ϕ = 1/ϕ, D3 = 2 + ϕ = (1/ϕ)², D4 = 3 + 2ϕ = (1/ϕ)³ = 4 + ϕ³, …

It is notable that for D4 (dimension 4), the value is (1/ϕ)³ = 4 + ϕ³, exactly the Hausdorff dimension of a Menger-Urysohn 4-dimensional space. By induction, Dn = (1/ϕ)^(n−1), and we get back the bijection formula from E∞ algebra (see Eq. (3) above).

Summing random Cantor sets to infinity in the creation of the E∞ universe is evocative of spacetime being created by actions over all scales, from submicroscopic to macroscopic and beyond, as envisaged in The Rainbow and the Worm, the Physics of Organisms by one of us (MWH) [24], following Whitehead [2] and Wolfram Schommers [25]. MeN has conjectured that E∞ spacetime can also resolve major paradoxes within quantum theory and produce new results, as described elsewhere ([26],[27],[28]). Here, we move on to the role of cycles in the organization of spacetime, and to why the golden mean seems to be built into the fabric of life and the universe.

VII. Cycles Everywhere for Stability and Autonomy

Nature abounds with cycles and oscillations, from subatomic vibrations to planetary motion, solar cycles and galactic rotations. Some, like Penrose and Gurzadyan, hold that the Universe cycles through deaths and rebirths, based on data collected by NASA's WMAP (Wilkinson Microwave Anisotropy Probe) and the BOOMERanG balloon experiment in Antarctica [29].
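Both the expectation sum and the bijection lift can be checked symbolically; the following SymPy sketch is our own illustration of the identities quoted above:

    import sympy as sp

    phi = (sp.sqrt(5) - 1) / 2                 # golden mean, exact
    n = sp.symbols('n', integer=True, nonnegative=True)

    # Expectation value: sum of n * phi**n over n = 0..infinity
    expectation = sp.summation(n * phi**n, (n, 0, sp.oo))
    print(sp.N(expectation, 12))               # 4.23606797750
    print(sp.N(4 + phi**3, 12))                # the same number

    # Bijection formula (3): d_c(n) = (1/phi)**(n - 1), lifted to n = 4
    print(sp.N((1 / phi) ** 3, 12))            # again 4.23606797750

All three printed values agree: the infinite sum and the n = 4 lift both return the E-infinity Hausdorff dimension.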
In a recent review, MWH [31] considers the importance of cycles and the golden mean for natural processes. We provide a brief account here and in the following two sections.

Cycles are intimately tied to the study of dynamical systems, beginning with celestial mechanics. Newton tried to describe the planetary cycles in terms of his laws of motion more than 300 years ago. Dynamical systems can be treated mathematically as oscillators. A harmonic oscillator has a certain natural frequency. When perturbed by an external force with the same frequency, resonance occurs and the motion of the oscillator becomes unbounded or unstable. For a typical nonlinear oscillator, this happens whenever the frequency of the perturbing force is a rational multiple of the natural frequency of the oscillator.

Andrey Kolmogorov (1903-1987), Vladimir Arnold (1937-2010) and Jürgen Moser (1928-1999) were responsible for the Kolmogorov-Arnold-Moser (KAM) theorem. The KAM theorem is very important for understanding how cyclic activities (or oscillators) interact with one another [30]. The three were investigating the behaviour of integrable Hamiltonian systems. The trajectories of Hamiltonian systems in phase space are confined to a doughnut-shaped surface, an invariant torus. Different initial conditions will trace different invariant concentric tori in phase space, separated by unstable chaotic regions, where the motion is irregular and unpredictable. An important consequence of the KAM theorem is that for a large set of initial conditions the motion remains perpetually quasi-periodic, and hence stable. KAM theory has been extended to non-Hamiltonian systems and to systems with fast and slow frequencies. The KAM theorem becomes increasingly difficult to satisfy for complex systems with more degrees of freedom; as the number of dimensions of the system increases, the volume occupied by the tori decreases. Those KAM tori that are not destroyed by perturbation become invariant Cantor sets, or Cantori; and the frequencies of the invariant Cantori approximate the golden mean [32].

The golden mean effectively enables multiple oscillators within a complex system to co-exist without blowing up the system. But it also leaves the oscillators within the system free to interact globally (by resonance), which may have important applications even in the study of the brain (see later). To get a better picture, we look at the circle map.

VIII. Cycles, Quasi-Periodicity, Golden Mean and Chaos

Cycles are often represented by the circle map, a graph that maps the circle onto itself. The simplest form is [33],[34]:

θn+1 = θn + Ω − (K/2π) sin(2πθn)    (9)

where the variable θn+1 is computed mod 1 (meaning counting only partial circles, as full circles get back to the starting point), K is the coupling strength and Ω is the external driving frequency. This map is used to describe oscillatory systems from solid state physics to heart rhythms. The circle map tracks the universal behaviour of dynamical systems associated with the transition from cycles to chaos via quasi-periodicity. The most studied case involves a golden mean ratio of basic frequencies ω = ϕ = (√5 − 1)/2, with the golden mean critical point at K = 1 and Ω = Ωc = 0.60666106347011 (≈ ϕ = 0.618033989…), reported in many experiments.

Circle maps contain some key features. Arnold tongues are regions in the phase space of circle maps with locally constant rational rotation (winding) numbers p/q between the driver and the natural oscillator frequencies.
They were first investigated for a family of dynamical systems defined by Kolmogorov, who proposed this family as a simplified model for driven mechanical rotors described by equation (9). The circle map exhibits certain regions in the parameter space where it is locked to the driving frequency (phase-locked, or synchronized). Here, θ is interpreted as the polar angle such that its value lies between 0 and 1; the two parameters are K, the coupling strength between the driver and the rotor, and Ω, the driver frequency. A typical map with Arnold tongues is given in Figure 4.

Figure 4: Some Arnold tongues in the standard circle map, ϵ = K/2π versus Ω

For small to intermediate values of K (0 < K < 1) and certain values of Ω, the map exhibits phase locking. In the phase-locked regions, θn advances essentially in rational multiples of n, although it may do so chaotically on the small scale. The phase-locked regions, called Arnold tongues, are shaded yellow in Figure 4, while the quasi-periodic regions are white. Each yellow V region touches down to a rational value of Ω = p/q in the limit as K → 0. The values of (K, Ω) in one of these regions all result in a motion with rotation number ω = p/q. For example, all values of (K, Ω) in the middle V region correspond to ω = 1/2. In other words, the sequence stays locked on to the signal despite significant noise or perturbation. This ability to lock on in the presence of noise is central to the utility of phase-locked loop electronic circuits.

The circle map in Figure 4 is invertible, or symmetrical around the mid line. For K > 1, the circle map is no longer invertible. In Figure 5a [35], the circle map is extended to K = 4. Arnold tongues of synchronization are in grey, with winding numbers indicated inside the tongues. The white regions are quasi-periodic, and the stippled regions appearing beyond the line K = 1 represent chaos. This map also depicts fractal self-similarity on different scales. Fractal self-similarity and chaos are closely related. A chaotic system has a fractal dimension and exhibits self-similarity over many scales. Chaos does not mean random. Mathematically, chaos is sensitive to initial conditions, but it is globally determined in the sense that it tends toward a strange attractor (see Appendix 2).

Figure 5: Extended circle map (see text for details)

The golden mean critical point (GM) is where the curve of constant irrational winding number ϕ = (√5 − 1)/2 terminates on the line K = 1 (see Figure 5b), and quasi-periodic behaviour undergoes transition to chaos. This point is marked by an infinite sequence of unstable orbits with periods given by the Fibonacci numbers. The golden mean is thus located at the edge of chaos, and has a role in keeping the system of oscillators active without interfering with one another, as well as away from the state of chaos.

There are claims that the planetary orbits around the sun exhibit golden ratios or ratios according to the Fibonacci sequence numbers, as many people have commented (see [36],[37] for example). So is our solar system stable? The question is whether it will remain stable as such, at least for billions of years, or transit to chaos much sooner than that. Some astrophysicists claim, however, that the planetary orbits are chaotic and sensitive to initial conditions, but this means only that they are unpredictable beyond 100 million years into the future [38], so there is no immediate cause for alarm.
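The phase locking inside an Arnold tongue is easy to observe numerically. The following Python sketch (our illustration; the parameter values are arbitrary assumptions) iterates the lifted version of equation (9) and estimates the winding number:

    import math

    def winding_number(K, Omega, n_skip=2000, n_iter=20000):
        """Average advance per step of the lifted (un-wrapped) circle map."""
        theta = 0.0                    # lifted coordinate, not taken mod 1
        for _ in range(n_skip):        # discard a transient
            theta += Omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        start = theta
        for _ in range(n_iter):
            theta += Omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        return (theta - start) / n_iter

    print(winding_number(0.9, 0.50))   # 0.5: centre of the 1/2 tongue
    print(winding_number(0.9, 0.51))   # still ~0.5: locked despite the detuning
    print(winding_number(0.0, 0.51))   # 0.51: no coupling, no locking

With the coupling switched on, the rotation number stays pinned at 1/2 over a whole interval of driver frequencies; that interval is precisely the width of the corresponding tongue in Figure 4.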
IX. Chaos and Strange Attractors

Figure 6: The Lorenz attractor (from [40])

Edward Lorenz is the generally acknowledged father of chaos theory [39]. According to one account, during the winter of 1961, Lorenz was running a climate model, described by 12 differential equations, on the computer, when he decided to repeat one of the runs with a small change. Instead of calculating to six decimal digits, he rounded that to three to save computing time, fully expecting to get the same results. But he didn't. That was the beginning of his discovery of the sensitive dependence on initial conditions of chaotic systems, which he described as the butterfly effect. It makes long-term weather prediction impossible. His toy equations produced the Lorenz attractor (Figure 6), the prototype of the strange attractors associated with chaotic systems, which bears a serendipitous resemblance to butterfly wings and became the emblem of the chaos era that followed.

Numerous strange attractors have been created, mostly as computer artwork, beginning with the book by Clint Sprott [40]. But chaos theory has found applications in meteorology, physics, engineering, economics, biology and medicine. The Lorenz attractor is a fractal, with self-similar structure on different scales. It has a fractal dimension of 2.06215 and lives in a space of at least 3 dimensions. It contains unstable periodic orbits that can be identified using various mathematical procedures [41], and can also be regarded as twisted or knotted periodic orbits [42].

Chaos theory has been taken up enthusiastically in every field, including quantum physics, in the form of quantum chaos, which tries to build a bridge between chaos in classical mechanics and the wavelike motion of electrons in atoms and molecules. Martin Gutzwiller wrote [43]: "The phase space for a chaotic system can be organized at least partially around periodic orbits, even though they are sometimes quite difficult to find."

Chaos is typically found in turbulent flows of fluids, gases, and the atmosphere. Turbulence is traditionally regarded as one of the most intractable problems in physics and mathematics. Mary Selvam first proposed a theory of turbulent fluid flow based on fractal spacetime fluctuations in 1990 (see [44]). Selvam treats the fractal fluctuations on all spacetime scales as a superposition of a continuum of eddies or vortices. Large scale fluctuations result from the integration of smaller scale fluctuations within, and the growth trajectory traces an overall logarithmic spiral flow path with the quasi-periodic Penrose tiling pattern for internal structure. The ratio of radii or circulation speeds corresponding to the successive growth steps of the large eddy generating the geometry of the quasi-periodic Penrose tiling pattern is, of course, equal to the golden mean Φ = 1.618… ([31]). Treating turbulence as a continuum of discrete eddies or cycles with a Penrose tiling pattern of growth captures key features of biological spacetime, as we shall see.
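The butterfly effect itself takes only a few lines to reproduce. This sketch (our addition) integrates the Lorenz system with the classic parameters σ = 10, ρ = 28, β = 8/3 from two initial conditions differing by 10⁻⁸:

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Two nearby initial conditions illustrate sensitive dependence
    sol_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], dense_output=True, rtol=1e-9)
    sol_b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-8], dense_output=True, rtol=1e-9)

    t = np.linspace(0, 40, 2000)
    separation = np.linalg.norm(sol_a.sol(t) - sol_b.sol(t), axis=0)
    print(separation[0], separation[-1])   # ~1e-8 at t = 0; of the order of the
                                           # attractor size by t = 40

Plotting sol_a.sol(t)[0] against sol_a.sol(t)[2] traces the familiar two-winged attractor of Figure 6.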
X. Quantum Coherence and Circular Thermodynamics of Organisms

In The Rainbow and the Worm, the Physics of Organisms [24], MWH presented empirical evidence and theoretical arguments suggesting that organisms are quantum coherent, and derived a circular thermodynamics of organisms that enables them to transform and transfer energy with minimum dissipation, which is also implied by quantum coherence. Here, we concentrate on the circular thermodynamics, which depends on a coherent fractal organization of biological spacetime.

The first thing to note is that organisms do not make their living by heat transfer. They are not heat engines, but isothermal systems far away from thermodynamic equilibrium, and they depend on the direct transfer of molecular energy by proteins and other macromolecules acting as quantum molecular energy machines at close to 100% efficiency [24]. (It is well known that enzymes speed up chemical reactions in organisms by a factor of 10^10 to 10^23 [45], but they cannot do that without the water that constitutes 70 to 80% of cells and tissues. It is widely recognized that water gives excitability to proteins, reduces the energy barrier between reactants and products, and increases the probability of quantum tunnelling by a transient compression of the energy barriers. But there is more to how water actually organizes enzyme reactions in living organisms. Findings within the past decade suggest that interfacial water associated with macromolecules and membranes in cells and tissues is in a quantum coherent liquid crystalline state, and plays a lead role in creating and maintaining the quantum coherence of organisms, as elaborated in Living Rainbow H2O [46] and elsewhere [47],[48].)

For isothermal processes, the change in Gibbs free energy ∆G (the thermodynamic potential for doing work at constant temperature and pressure) is

∆G = ∆H − T∆S,

where ∆H is the change in enthalpy (heat content), T is the temperature in deg K, and ∆S is the change in entropy. Thermodynamic efficiency requires that ∆S approaches 0 (least dissipation) and ∆H = 0, or that ∆G = 0 via entropy-enthalpy compensation, i.e., entropy and enthalpy changes cancelling each other out. We shall see how the organism accomplishes that.
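As a worked illustration of entropy-enthalpy compensation (the numbers below are hypothetical, chosen only to show the arithmetic at body temperature):

∆G = ∆H − T∆S = 30 kJ mol⁻¹ − (310 K)(96.8 J mol⁻¹ K⁻¹) ≈ 30 kJ mol⁻¹ − 30 kJ mol⁻¹ = 0.

An enthalpy cost is fully compensated by an entropy gain, so the transformation proceeds with no net change in free energy.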
XI. A Fractal Hierarchy of Coupled Cycles

For a system to keep far away from thermodynamic equilibrium – death by another name – it must capture energy and material from the environment to develop, grow and recreate itself from moment to moment in a life-cycle, to reproduce and provide for future generations (Figure 7). The key to understanding the thermodynamics of the living system is not so much energy flow as energy capture and storage to create a reproducing life-cycle. The dynamic closure implied by the life-cycle is the beginning of a circular thermodynamics that transforms and transfers energy and materials with maximum efficiency (least dissipation) (see [24], [49]).

Figure 7: Life depends on energy capture and storage to form a self-reproducing life-cycle coupled to energy flow

The life-cycle is a fractal hierarchy of self-similar cycles organised by the characteristic spacetimes of the processes involved. All real processes have characteristic spacetimes. In the organism, the heart (10^-1 m) beats in a second, nerve cells (10^-4 m) fire in a tenth of a second or faster, and protons (10^-15 m) and electrons (10^-17 m) move in 10^-12 to 10^-15 s. Cells divide in minutes, and physiological processes have longer cycles of hours, a day, a month or a year. The coherent fractal hierarchy of living activities arises because processes with matching spacetimes interact most strongly through resonance, and also link up to the entire hierarchy. That is why biological activities come predominantly in cycles or biological rhythms. (In the language of quantum physics, the organism is a superposition of coherent quantum activities over all spacetimes [24].)

The possibility for cycles in the living world coupling and linking up to cycles in the physical universe is surely why life is possible; indeed some would argue, as Whitehead did [2], that the entire universe is alive. The coupled cycles form a nested, fractal, self-similar structure. The life-cycle consists of smaller cycles, each with still smaller cycles within, spanning characteristic spacetimes from sub-nanometres to metres and from 10^-15 s to hours and years (Figure 8).

Figure 8: The fractal structure of coupled activity cycles in living systems

XII. Minimum Entropy Production

Cycles enable the activities to be coupled together, so that energy-yielding processes can transfer energy directly to those requiring energy, and the direction can be reversed when necessary. This cooperativity and reciprocity resulting from the fractal hierarchy of coupled cycles is an extended form of the Onsager reciprocity relation, which conventionally applies strictly only to near-equilibrium steady states (see [24] for details). What it means in practice is that energy can be concentrated to any local point where it is needed, and conversely spread globally from any local point. In that way, the fractal hierarchy of coupled cycles maximizes both local autonomy and global cohesion, which is the hallmark of quantum coherence [24] according to the criterion of factorisability defined by Glauber [50].

To get an idea of such coupled cycles, one needs to look no further than charts of biochemical metabolic pathways [51]. Most if not all of the reactions go either way, depending on the local concentrations of reactants and products. In further accord with circular thermodynamics, biochemical recycling is ubiquitous; there are numerous scavenging or salvaging pathways for the recovery of the building blocks of proteins, nucleic acids, glycolipids, and even entire proteins.

The fractal hierarchy of coupled cycles confers dynamic stability as well as autonomy to the system on every scale. Thermodynamically, no net entropy is generated in the case of perfect cycles, and the system maintains its organization. The fractal structure effectively partitions the organism into a hierarchy of systems within systems defined by the extent of equilibration of (dissipated) energies. Thus, energies equilibrated or evenly spread within a smaller spacetime will still be out of equilibrium in the larger system encompassing the first, and hence capable of doing work. There are now two ways to mobilize energy efficiently with entropy change approaching zero: very slowly with respect to the characteristic time, so that the process is reversible at every point, or very rapidly with respect to the characteristic time, so that in both cases the energy remains stored (in a coherent non-degraded form) as it is mobilized. Consequently, the organism simultaneously achieves the most efficient equilibrium and far-from-equilibrium energy transfer. The nested dynamical structure also optimises the kinetics of energy mobilisation. For example, biochemical reactions depend strictly on local concentrations of reactants, which are extremely high, as their extent of equilibration is typically at nanometre dimensions (in nanospaces). In the ideal – approached most closely by the healthy mature organism and the healthy mature ecosystem – an overall internal conservation of energy and compensation of entropy (Σ∆S = 0) is achieved.
In this state of balance, the system organization is maintained and dissipation minimized; i.e., the entropy exported to the environment also approaches zero, Σ∆S → 0 (Figure 9).

Figure 9: The zero-entropy ideal of circular thermodynamics

Internal entropy compensation (and energy conservation) implies that there needs to be free variation in microscopic states within the macroscopic system; i.e., the internal microscopic detailed balance at every point of classical steady-state theory is violated. (This is also the basis of the extension of Onsager's reciprocity relation to far-from-equilibrium conditions.) For an organism, this means that detailed energy balance is not necessary at every point. Most often, parts of it are in deficit, and severely so, as when one needs to run from a tiger, knowing the energy can be replenished after a successful escape. The same applies to ecosystems: all species are in a sense storing energy and resources (nutrients) for every other species via complex food webs and other symbiotic relationships.

The above considerations give rise to the prediction that a sustainable system maximizes cyclic, non-dissipative flows while minimizing dissipative flows; i.e., it tends towards minimum entropy production even under far-from-equilibrium conditions, as conjectured by Ilya Prigogine [52]. Such a system has a hierarchy of coherent energy storage spacetimes, so that within the coherence volume there is no time separation, and within the coherence time there is no space separation. Thus, organic spacetime exists as a hierarchy of simultaneities, or durations, precisely as envisaged by Bergson [5] and Whitehead [2],[6]. (See [24] for a more detailed discussion.)

The golden mean most likely enters into the fractal structure in the form of the golden fractal. As the most irrational of all numbers (see Section 6 above), it allows the maximum number of non-resonating activities to co-exist (representing maximum coherent energy storage). On the other hand, it is also arbitrarily close to a maximum number of rational numbers, so specific resonances can easily be established for energy transfer. We shall see more clearly how the golden fractal of neuronal activities is the key to optimum intercommunication and information processing in the final section. The next section shows how fractals are mathematically isomorphic to quantum coherent states.

XIII. Fractals and Quantum Coherence

The coherent fractal structure maximizes global connectivity and local autonomy, the hallmark of quantum coherence, as mentioned earlier. One of the authors (GV) has shown that a functional representation of self-similarity – the most important property of fractal structures – is mathematically isomorphic with squeezed quantum coherent states [53, 54, 55], in which Heisenberg's uncertainty is minimum. Quantum coherence thus seems to underlie the ubiquitous recurrence of fractal self-similarity in Nature. Let us see how this works with the logarithmic spiral, and the special case of the golden spiral, as an example. The golden spiral and its relation to the Fibonacci sequence are of great interest because the Fibonacci sequence appears in many phenomena, ranging from botany to physiological and functional properties in living systems, as the "natural" expression in which they manifest themselves. The defining equation for the logarithmic spiral in polar coordinates (r, θ) is [56, 57]

r(θ) = r0 e^{dθ},    (11)

with r0 and d arbitrary real constants and r0 > 0.
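Before following the derivation below, a quick numerical sketch (ours, not part of [53]-[55]) of what Eq. (11) says in the golden case: the radius grows by exactly a factor Φ every quarter turn, and the ratios of successive Fibonacci numbers converge to the same Φ.

import numpy as np

PHI = (1 + np.sqrt(5)) / 2             # golden mean, 1.6180339...
d_g = np.log(PHI) / (np.pi / 2)        # golden-spiral slope parameter

def r(theta, r0=1.0, d=d_g):
    return r0 * np.exp(d * theta)      # logarithmic spiral, Eq. (11)

theta = 1.234                          # any angle: the quarter-turn ratio is PHI
print(r(theta + np.pi/2) / r(theta))   # -> 1.6180339...

# Ratios of successive Fibonacci numbers F_n / F_{n-1} approach PHI
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b
print(b / a, PHI)                      # -> 1.6180339..., 1.6180339...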
In a log-log plot with abscissa θ = ln e^θ, the spiral is represented by a straight line with slope d: ln(r/r0) = dθ. The (fractal) self-similarity property of the logarithmic spiral is manifest in the constancy of the angular coefficient tan^-1 d: rescaling θ → nθ affects r/r0 by the power n, (r/r0) → (r/r0)^n. The logarithmic spiral is called the golden spiral when at θ = π/2 we have r/r0 = e^{d(π/2)} = Φ, where Φ denotes the golden mean (1 + √5)/2. We may introduce the subscript g (denoting golden) and put dg ≡ (ln Φ)/(π/2). The polar equation for the golden spiral is thus rg(θ) = r0 e^{dg θ}. The radius of the golden spiral increases in geometrical progression with the ratio Φ as θ increases by π/2: rg(θ + nπ/2) = r0 e^{dg(θ + nπ/2)} = r0 e^{dg θ} Φ^n, and rg,n ≡ rg(θ = nπ/2) = r0 Φ^n, n = 0, 1, 2, 3, ….

The so-called Fibonacci tiling provides a tool for a good approximate construction of the golden spiral. The Fibonacci tiling is obtained by drawing in an appropriate way [56] squares whose sides are in the Fibonacci progression 1, 1, 2, 3, 5, 8, 13, … (the sequence is defined by the relation Fn = Fn−1 + Fn−2, with F0 = 0, F1 = 1). The Fibonacci spiral is then made from quarter-circles tangent to the interior of each square. It does not coincide exactly with the golden spiral because Fn/Fn−1 → Φ in the n → ∞ limit, but it is not equal to Φ for finite n.

The golden ratio Φ and its "conjugate" Ψ ≡ 1 − Φ = −1/Φ = (1 − √5)/2 are both solutions of the quadratic equation

x^2 − x − 1 = 0,    (13)

and of the recurrence equation x_n − x_{n−1} − x_{n−2} = 0, which, for n = 2, is the relation (13). This is satisfied also by the geometric progression with ratio Φ of the radii rg,n = r0 Φ^n of the golden spiral. Eq. (13) is the characteristic equation of the differential equation r̈ − ṙ − r = 0, which admits as solution r(t) = r0 e^{iωt} e^{t/2}, with ω = ±i√5/2 and θ = −t/(2d) + c, where c, r0 and d are constants. By setting c = 0, r(t) = r0 e^{∓√5 t/2} e^{t/2}, i.e. rΦ(t) = r0 e^{Φt} and rΨ(t) = r0 e^{Ψt}.

Consider now the parametric equations of the logarithmic spiral Eq. (11):

x(θ) = r(θ) cos θ = r0 e^{dθ} cos θ,    y(θ) = r(θ) sin θ = r0 e^{dθ} sin θ.

The point z = x + i y = r0 e^{dθ} e^{iθ} on the spiral is completely specified in the complex z-plane only when the sign of dθ is assigned. The completeness of the (hyperbolic) basis {e^{−dθ}, e^{+dθ}} requires that both the factors q = e^{±dθ} be considered. In many instances the so-called direct (q > 1) and indirect (q < 1) spirals are both realized in the same system (e.g. in phyllotaxis studies). We thus consider the points z1 = r0 e^{−dθ} e^{−iθ} and z2 = r0 e^{+dθ} e^{+iθ}. For convenience (see below), opposite signs for the imaginary exponent have been chosen. Using the parametrization θ = θ(t), z1 and z2 are found to be solutions of the equations

m z̈1 + γ ż1 + κ z1 = 0,    (15a)
m z̈2 − γ ż2 + κ z2 = 0,    (15b)

respectively, provided the relation θ(t) = (Γ/d) t holds (where we have neglected an arbitrary additive constant). We see that θ(T) = 2π at T = 2πd/Γ. In Eqs. (15) a dot denotes derivation with respect to t; m, γ and κ are positive real constants. The notations Γ ≡ γ/2m and Ω^2 ≡ (1/m)(κ − γ^2/4m) = Γ^2/d^2, with κ > γ^2/4m, have been used. The parametric expressions z1(t) and z2(t) for the logarithmic spiral are thus z1(t) = r0 e^{−iΩt} e^{−Γt} and z2(t) = r0 e^{+iΩt} e^{+Γt}, solutions of Eqs. (15a) and (15b). At t = nT, z1 = r0 (e^{−2πd})^n and z2 = r0 (e^{+2πd})^n, with the integer n = 1, 2, 3, …. This suggests that t can be interpreted as the time parameter, and that the time evolution of the direct and indirect spirals is described by equations (15a) and (15b). The "two copies" (z1, z2) can be viewed as the evolution forward in time and backward in time, respectively. The "angular velocity" of (the growth of) the spiral is given by |dθ/dt| = |Γ/d|.
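The statement that z1(t) = r0 e^{−iΩt} e^{−Γt} and z2(t) = r0 e^{+iΩt} e^{+Γt} solve the damped and amplified oscillator equations (15a) and (15b) can be verified symbolically. A short sympy sketch (ours, using only the definitions of Γ and Ω given above):

import sympy as sp

t = sp.symbols('t', real=True)
m, gamma, kappa, r0 = sp.symbols('m gamma kappa r0', positive=True)

Gamma = gamma / (2*m)                            # Gamma = gamma/2m
Omega = sp.sqrt(kappa/m - gamma**2/(4*m**2))     # valid for kappa > gamma^2/4m

z1 = r0 * sp.exp(-sp.I*Omega*t - Gamma*t)        # direct spiral, forward in time
z2 = r0 * sp.exp( sp.I*Omega*t + Gamma*t)        # indirect spiral, backward in time

# Both residuals should simplify to zero:
print(sp.simplify(m*z1.diff(t, 2) + gamma*z1.diff(t) + kappa*z1))   # Eq. (15a)
print(sp.simplify(m*z2.diff(t, 2) - gamma*z2.diff(t) + kappa*z2))   # Eq. (15b)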
By putting x(t) = [z1(t) + z2*(−t)]/2 and y(t) = [z1*(−t) + z2(t)]/2, we reduce Eqs. (15a) and (15b) to the pair of damped and amplified harmonic oscillators

m ẍ + γ ẋ + κ x = 0,    (17a)
m ÿ − γ ẏ + κ y = 0,    (17b)

respectively. The second of these is the time-reversed image (γ → −γ) of the first, and the global system (x, y) is a closed system.

We see that the oscillator x is a dissipative, open (non-Hamiltonian) system, and in order to set up the canonical formalism we need to double the degrees of freedom by introducing its time-reversed image y [58]: the physical meaning of the requirement to consider both the elements of the basis {e^{−dθ}, e^{+dθ}} is that we need to consider the closed system (z1, z2). Only then can we set up the canonical formalism. If we did not close the system z1 with the system z2, we would not be able to define the conjugate momenta, because the Lagrangian would not exist.

We now remark that classical, deterministic systems presenting loss of information (dissipation) might, provided some constraints are imposed, behave according to quantum evolution [59, 60, 61]. In the present case, this means that the logarithmic spiral and its time-reversed double (the direct and the indirect spiral) manifest themselves as macroscopic quantum systems. The quantization of the system described by Eqs. (15a) and (15b) (or (17a) and (17b)) can be performed in the usual way, and the result is that the system ground state |0(t)⟩ is a coherent squeezed state, which is produced by condensation of couples of (entangled) A and B modes (related to the (z1, z2) couple; see Refs. [58, 54, 55] for details): (AB)^n, n = 0, 1, 2, …, ∞. The operator which generates |0(t)⟩ is recognized to be the two-mode squeezing generator with squeezing parameter ζ = −Γt.

It must be stressed that the correct mathematical framework in which to study quantum dissipation is that of quantum field theory (QFT). The realization of the logarithmic spiral (and in general of fractals) in the many cases observed in nature indeed involves an infinite number of elementary degrees of freedom. We may define the fractal Hamiltonian, which turns out to be actually the fractal free energy for the coherent boson condensation process out of which the fractal is formed. We indeed find that time evolution is controlled by the entropy operator. This is to be expected, because entropy controls time irreversibility, and the breakdown of time-reversal symmetry that is characteristic of dissipation is clearly manifest in the formation process of fractals. In the case of the logarithmic spiral, the breakdown of time-reversal symmetry is associated with the chirality of the spiral: the indirect (right-handed) spiral is the time-reversed, but distinct, image of the direct (left-handed) spiral.

These results can also be extended to other examples of deterministic fractals [54], such as the Koch curve, the Sierpinski gasket and carpet, the Cantor set, etc. We do not consider these cases here. We only recall that, by resorting to the results reported above, an isomorphism can be shown [55] to exist between classical and quantum electrodynamics, on one side, and the fractal self-similarity structure and squeezed coherent states, on the other, provided some quite general conditions are satisfied. As suggested also in the discussion above (see also [24] and below), the paradigm of coherence seems to extend to a large part, if not the whole, of the universe [62, 54, 55].

XIV. Topology of Quantum Spacetime

The isomorphism between quantum coherence and fractals goes considerably deeper than we have presented so far.
In a series of earlier papers (see [63]-[66]), MeN showed that E∞ theory gives predictions of the mass spectrum of elementary particles and quarks in high energy physics, based on the golden mean, that are in remarkable agreement with those generally accepted experimentally and theoretically in the literature. Further, the same predictions are provided by a physical interpretation of four-dimensional fusion algebra, of Connes' noncommutative geometry, and of related theories such as Freedman's topological theory of four-manifolds (wild topology) and Penrose space X (of which E∞ is a higher dimensional space). The fascinating details are beyond the scope of the present paper. We cite the results because they so strongly suggest that the mass spectrum based on the golden mean is a reflection of the quantum topology of spacetime [65]. This conjecture is reinforced, as MeN pointed out, in the simplest mechanical model for E∞ theory, which is two golden mean coupled oscillators, with frequencies of vibration ω1 = Φ and ω2 = 1/Φ. That is because, at least in the case of quarks, what we consider to be a particle can be thought of as a highly localized vibration, a standing wave simulating a particle (as in string theory).

Generalizing to n such nested oscillators, Leila Marek-Crnjac was able to use the Southwell and Dunkerley summation theorems for structural stability to obtain the masses of the elementary particles of the standard model, as well as the current quarks and constituent quarks [67]. The results are again very close to the theoretical and experimental masses found previously. Marek-Crnjac also noted that the golden mean plays a central role in dynamic stability in KAM theory (see above), pointing out that string theory eliminates the wave-particle duality by using the highly localized vibration of a Planck-length string that gives rise to a particle when perceived on lower energy levels [67]. In this way, the golden mean enters into the mass spectrum via the KAM theory. As it is the most irrational number (see Sections 6 and 12), it plays the key role in the stability of periodic orbits and the onset of global chaos. The ratio Ω/ω (driver to oscillator frequency) is decisive as to whether the motion is localized (regular) or dissipative (stochastic). If the ratio is rational, the torus is destroyed; if it is irrational, the torus persists. A well-known application of this theory is the Kirkwood gaps – dips in the distribution of the main-belt asteroids, which coincide with orbital resonances with Jupiter. What this means is that a particle is observed only when the energy is localized in a highly coherent vibration at an irrational frequency, and the most irrational frequency is Φ.

Marek-Crnjac's argument applies also to the circular thermodynamics of organisms and the fractal hierarchy of living activities developed in Sections 7-12. The mass spectrum of elementary particles and living activities are both a reflection of the universal quantum topology of spacetime.

XV. Coherent Brain Waves, Scale-Free Laws and Fractals

Laboratory observations on the brain have consistently detected spatially extended regions of coherent neuronal activities, most thoroughly described in recent years by Walter Freeman and colleagues [68],[69],[70]. Such phenomena had been observed by Lashley since the 1940s, who introduced the notion of field. Karl Pribram proposed his holographic hypothesis of memory formation in the early 1960s, which introduced the notion of coherence in analogy with laser theory [58].
Hiroomi Umezawa proposed the first quantum field theory of memory in 1967 (see [71],[72]), which includes both the notions of field and coherence. Observations clearly show that the coherent cortical activities cannot be fully accounted for by the electric field of the extracellular dendritic current, or by the extracellular magnetic field from the high-density electric current inside the dendritic shafts, or by chemical diffusion. Spatially extended patterns of phase-locked oscillations are intermittently present in resting, awake subjects, and in the same subjects actively engaged in cognitive tasks. They are best described as properties of the brain modulated upon engagement with the environment. These packets of waves extend over spatial domains covering much of the hemisphere in rabbits and cats [73]-[76], and over regions of 19 cm linear dimension in the human cortex, with near-zero phase dispersion [68], [77]-[79].

Umezawa's many-body model [80] is based on the quantum field theoretic notion of spontaneous breakdown of symmetry [81],[61], which requires the existence of Nambu-Goldstone (NG) massless particles. Examples are the phonon modes in crystals, the magnon modes in ferromagnets, etc. They can be described, as is customary in quantum theory, as the quanta associated with certain waves, such as the elastic wave in crystals and the spin wave in ferromagnets. The role of such waves is to establish long-range correlation. The mechanism of spontaneous breakdown of symmetry generates the change of scale: from the microscopic scale of the elementary constituent dynamics to the macroscopic scale of the ordered patterns observable in the system. In ferromagnets, for example, the ordered pattern is described by the magnetization, which for this reason is called the order parameter. The phase transition from zero magnetization to the magnetic phase is induced by some weak external magnetic field. The mathematical structure of the theory must be adequate to allow for physically distinct phases, and quantum field theory possesses such a mathematical structure. The ground states corresponding to physically distinct phases are characterized by distinct degrees of ordering, described by different densities of NG modes condensed in them. Such a condensation of NG modes in the ground states is a coherent condensation, which physically expresses the in-phase synchronized oscillations. In quantum mechanics, on the contrary, all the state representations are physically equivalent and therefore not useful for describing phase transitions [82].

The quantum variables relevant to the many-body model of the brain have been identified as the electric dipole vibration modes of the water molecules that constitute the matrix in which neurons and glia cells are embedded [83],[84]. The spontaneous breakdown of the rotational symmetry of the electrical dipoles of water and other molecules implies the existence of NG modes, which have been called the dipole wave quanta (DWQ). The system ground state is thus obtained in terms of a coherent condensation of the DWQ, which are the agents responsible for neuronal coordination [83]-[85]. In particular, it is found that the memory state is a squeezed coherent state for the basic quantum variables, expressed mesoscopically as amplitude and phase modulation of the carrier signal observed in electroencephalograms (EEGs) and electrocorticograms (ECoGs). Laboratory observations show that self-similarity characterizes the brain ground state.
Measurements of the durations, recurrence intervals and diameters of neocortical EEG phase patterns have power-law distributions with no detectable minima. The power spectral densities in both time and space conform to power-law distributions [73],[74], a hallmark of fractals. This is therefore consistent with the squeezed coherent state for memory found earlier. Coherent signals are obtained when the electrodes are closely spaced. A large body of data has been collected by Freeman's group on ECoG spatial imaging, coming from small, high-density electrode arrays fixed on the surfaces of the olfactory, visual, auditory, or somatic modules. Spectral analysis of the ECoG shows a broad distribution of frequency components, where the temporal power spectral density in log-log coordinates is power-law, 1/f^α, in segments. Below an inflection in the theta-alpha range, the power spectral density is flat, α = 0. Above it, the log10 power decreases linearly with increasing log10 frequency in the beta-gamma range (12.5-80 Hz), with the exponent α between 2 and 3. One can show that in slow-wave sleep the exponent averages near 3, and in seizures it averages near 4. On the basis of what was said above, such values of the slope α provide corresponding values of the fractal dimension d_α, with deformation parameter q ≡ 1/f^α and coherent strength corresponding to the power spectral density. Brief epochs of narrow-band oscillation create multiple peaks, indicating a departure from the scale-free regime (the straight line) and therefore the presence of structures emergent from the background activity.

The observed power law signals the scale-free ("spatial similarity") property of the brain ground states. The spatial similarities reveal the long spatial distances across which synaptic interactions can sustain coherence, namely the high-density coordination of neuron firing by synaptic interaction, in agreement with the prediction of the dissipative model.

A confirmation of the brain's scale-free behaviour comes also from the group of Dietmar Plenz at the US National Institute of Mental Health. They have identified neuronal avalanches – cascades of neuronal activities that follow precise 1/f power laws – in the excitatory neurons of the superficial layers in isolated neocortex preparations in vitro, as well as in awake animals and humans in vivo [86]. They showed that the neuronal avalanche of the default state, with the 1/f signature of self-organized criticality, gives the optimum response to inputs as well as maximum information capacity (reviewed elsewhere in more detail by MWH [87]). Most significantly, the avalanche dynamics give rise to coherence potentials, consisting of subsets of avalanches in which the precise waveform of the local field potential is replicated with high fidelity in distant network sites. The process is independent of spatial distance and includes near-instantaneous neuronal activities as well as sequential activities over many time scales. Most coherence potentials are spatially disjunct. The local field potentials (LFPs) of successive coherence potentials are not similar, but they are practically identical among all the participating sites within a coherence potential, there being no growth or dissipation during propagation. This suggests that the waveform of a coherence potential serves as a high-dimensional coding space in the information processing of the brain. For decades, phase-locked neuronal activity has been reliably recorded using LFPs or EEG, and it has been found to correlate with the presentation of stimuli in animals and with visual perception in humans.
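In practice, the exponent α discussed above is estimated as the slope of the power spectral density in log-log coordinates. The Python sketch below (ours, using a synthetic 1/f^α signal rather than real ECoG data) illustrates the procedure on the beta-gamma band mentioned in the text.

import numpy as np

rng = np.random.default_rng(0)
alpha, n, dt = 2.5, 2**14, 0.002           # target exponent, samples, 2 ms step

# Synthesize a 1/f^alpha signal by shaping white noise in the frequency domain
freqs = np.fft.rfftfreq(n, dt)
spectrum = rng.normal(size=freqs.size) + 1j*rng.normal(size=freqs.size)
spectrum[1:] *= freqs[1:] ** (-alpha/2)    # amplitude ~ f^(-alpha/2), power ~ f^-alpha
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# Estimate alpha: straight-line fit of log10(PSD) vs log10(f) in 12.5-80 Hz
psd = np.abs(np.fft.rfft(signal))**2
band = (freqs >= 12.5) & (freqs <= 80.0)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print("estimated alpha =", -slope)         # approximately 2.5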
XVI. Golden Music of the Brain

In living systems, fractals must satisfy the state of quantum coherence, which maximizes global cohesion as well as local autonomy, and enables energy from any local level to spread to the global and, conversely, to be concentrated into any domain from the entire system. Our discussion suggests that fractals with fractal dimension in the golden mean (golden fractals) might be the most effective in giving autonomy to the greatest number of cycles. On the other hand, global cohesion is also ensured, because cycles in a fractal hierarchy are quantum coherent; energy can be shared between global and local. The golden mean is also close to an infinite number of rational ratios, so special resonances or correlations can easily be established. Thus, one would expect biological rhythms in general to conform to the golden mean, although no such survey has yet been carried out.

The golden mean figures prominently in the EEG frequencies of the resting brain (see Table 1). Belinda Pletzer, Hubert Kerschbaum and Wolfgang Klimesch proposed that brain frequencies that never synchronize in the resting brain may play an important role in the organization of groups of cells, keeping their rhythms distinct and free from mutual interference. They suggested that this can be achieved via frequencies in irrational multiples in the resting (default) brain [88] (see also [87]). Table 1 lists the typical EEG frequency bands.

Table 1: Typical EEG frequency bands and subbands and corresponding periods (from [88])

The classical EEG frequency bands can indeed be described as a geometric series with a ratio between neighbouring frequencies approximating Φ = 1.618, and the successive frequencies are the sum of the two previous ones, as in the Fibonacci sequence (see the numerical sketch at the end of this passage). Intuitively at least, one can see that the golden mean provides the highest physiologically possible desynchronized frequencies, and at the same time the potential for spontaneous, diverse coupling and uncoupling between rhythms, and a rapid transition from resting state to activity.

A team of researchers led by Miles Whittington has been studying the intricacies of the golden music of the brain by recording from multiple layers of the neocortex simultaneously. They found multiple local neuron assemblies supporting different discrete frequencies in the neocortex network, and the relationships between the different frequencies appear designed to minimize interference and to allow diverse coupling of activities, via stable phase interactions and the control of the amplitude of one frequency in relation to the phase of another [89]. The 1/f pattern of the EEG is really a time-averaged, smoothed collection of multiple discrete frequencies, and does not represent all the frequencies and combinations of frequencies present in the brain. (It is like a recording of a Mozart symphony that averages out all the sounds made in discrete periods of time, so the music is completely buried.) Detailed observations made by the team have shown that at least three discrete frequencies δ (1-3 Hz), θ (6-9 Hz), and γ (30-50 Hz) are often expressed simultaneously, and can be associated with further, much slower rhythms both in vivo and in vitro.
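The arithmetic behind the claim that a geometric series with ratio Φ is also Fibonacci-like is the identity Φ² = Φ + 1. The short Python sketch below (ours) checks it for a band series seeded, purely for illustration, at 1 Hz.

PHI = (1 + 5 ** 0.5) / 2   # golden mean

# A geometric series of band frequencies with ratio PHI, seeded at 1 Hz
# (the seed is illustrative, not a claim about actual EEG band edges)
f = [1.0 * PHI ** n for n in range(8)]
print([round(x, 2) for x in f])

# Because PHI^2 = PHI + 1, each frequency is the sum of the previous two,
# exactly as in the Fibonacci sequence:
for n in range(2, 8):
    assert abs(f[n] - (f[n-1] + f[n-2])) < 1e-9
print("f[n] = f[n-1] + f[n-2] holds for a ratio-PHI series")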
Discrete frequencies ranging from low δ to high γ can be produced by a single area of the isolated neocortex in vitro, with peak frequencies distributed according to the golden mean. To keep simultaneously occurring frequencies apart and to minimize interference, the solution is indeed to have ratios of frequencies that are irrational numbers. Coexistent γ1 and β2 rhythms in the cortex, for example, are generated in two different layers and survive physical separation of the layers. The ratio of their peak frequencies is approximately Φ, resulting in a periodic pattern of change in low-level synchrony between the layers, with a period equal to the sum of the two periods of oscillation present. This phenomenon can occur to some extent with any pair of co-expressed frequencies; but using Φ as the common ratio between adjacent frequencies in the EEG spectrum enables the neocortex to pack the available frequency space (thereby maximizing the information processing capacity, or the capacity to produce the most music). If the cortex uses different frequency bands to process different aspects of incoming information, then it must also have the ability to combine information held in these bands to reconstruct the input; hence the importance of keeping them separate, as the golden mean does.

It is also possible for a local neuron assembly generating a single-frequency rhythm to switch frequencies. Such changes are facilitated by a range of mechanisms, including changes in neuronal intrinsic conductances and non-reciprocal interactions with other regions oscillating at a similar frequency. After stimulation, γ frequencies can transform to β frequencies (approximately halved), owing to inhibitory postsynaptic potentials on the principal cells generating the action potentials.

In the time domain, the ability to distinguish rapidly changing features of an input from more slowly changing features provides an efficient means of recognizing objects. Feature detection over a range of time scales can reproduce many properties of individual neurons in the visual cortex. Thus, from a computational perspective, it is an advantage for the cortex to process different temporal scales of information separately, using different frequencies. It has been shown that rhythms with larger temporal scales (slower frequencies) facilitate interactions over greater distances in cortical networks; i.e., they may synchronize over larger areas of the visual map in the retina of the eyes. Thus, different frequencies may have a role in processing sensory information on different spatial scales. In a visual task designed to test perception shifting from features of an object with low spatial frequency to those with high spatial frequency, a direct correlation was found between the spatial scale of the sensory object and the temporal scale (frequency) of the associated cortical rhythms. Cross-frequency phase synchronization is thus a possible means of combining information from different frequency channels to fully represent a sensory object.
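Why does an irrational frequency ratio minimize sustained interference? After n cycles of the slower rhythm, the faster rhythm has advanced n times the frequency ratio in cycles; the two realign exactly only if the fractional part of that product returns to zero. The Python sketch below (ours, with arbitrary illustrative ratios) shows that a 3:2 ratio realigns exactly, whereas the golden ratio never does; indeed Φ is as hard to approximate by rationals as any number can be (Hurwitz's theorem), so its near-realignments are as rare as possible.

import numpy as np

PHI = (1 + np.sqrt(5)) / 2

def closest_realignment(ratio, n_cycles=1000):
    # frac(n * ratio) = 0 would mean the two rhythms are exactly
    # back in phase after n cycles of the slower one.
    n = np.arange(1, n_cycles + 1)
    frac = (n * ratio) % 1.0
    return np.minimum(frac, 1.0 - frac).min()

print("ratio 3/2:", closest_realignment(1.5))    # 0.0 -> exact phase lock recurs
print("ratio Phi:", closest_realignment(PHI))    # small, but never exactly zero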
XVII. Conclusion

We have considered the fabric of spacetime from the widest perspectives, drawing on findings from mathematics, quantum physics, far-from-equilibrium thermodynamics, biology, and neurobiology. Although a definite answer cannot be given to the question asked in the title of this paper, the totality of findings obtained by the different authors discussed in the previous sections seems to converge on and justify our speculation that spacetime is fractal and quantum coherent in the golden mean.

Mathematically, the fractal universe is non-differentiable and discontinuous, yet dense in infinite-dimensional spacetime. Physically, it is a quantum coherent universe consisting of an infinite diversity of autonomous agents, all participating in co-creating organic, fractal spacetime by their multitudinous coupled activity cycles. Biologically, this fractal coherent spacetime could also provide the fabric of conscious awareness, mirrored in the quantum coherence of our brain states. This view depicts a new organic cosmogony consonant with that of Alfred North Whitehead [1], resolving major paradoxes associated with classical mechanics, and paving the way to reconciling or transcending quantum theory and general relativity. Much work remains to be done in order to provide a definitive answer to our question of whether fractal self-similarity, the golden ratio and coherence are characterizing features of the fabric of spacetime.

As this manuscript goes to press, the first-ever observation has been reported that the brightness of some stars pulsates at primary and secondary frequencies whose ratios are near the golden mean [John F. Lindner, Vivek Kohar, Behnam Kia, Michael Hippke, John G. Learned, and William L. Ditto, Strange Nonchaotic Stars, Phys. Rev. Lett. 114, 054101 (2015)]. This evidence of strange nonchaotic dynamics recorded by the Kepler space telescope lends support to our vision of the fractal golden mean structure of spacetime.

XVIII. Acknowledgment

We thank Peter Saunders for stimulating and helpful discussions and for critical reading of successive drafts of this manuscript.

Appendix 1. Zeno's Paradox

Achilles is running a race with the tortoise. Achilles gives the tortoise a head start of 100 metres, say. If we suppose that both run at constant speed – Achilles very fast and the tortoise very slow – then after some time, Achilles will have run 100 metres, bringing him to the tortoise's starting point. But during that time, the tortoise has run a much shorter distance, say 10 metres. It will then take Achilles some further time to run that distance, by which time the tortoise will have gone ahead farther; and then he will need more time still to reach that third point, while the tortoise moves ahead, and so on. Thus, whenever Achilles reaches somewhere the tortoise has been, he still has farther to go. Therefore, because there are an infinite number of points where the tortoise has already been for Achilles to reach, he can never overtake the tortoise [4].

Appendix 2. Some Informal Definitions

We provide here informal definitions (from [8]) of some mathematical terms used in this paper; some, like deterministic chaos, have no generally agreed definition.

Set theory is the branch of mathematical logic about collections of mathematical objects. The modern study of set theory was initiated by the German mathematicians Georg Cantor (1845-1918) and Richard Dedekind (1831-1916) in the 1870s. A closed set contains its own boundary; its complement is an open set, which is a set that does not contain its boundary. A Borel set is any set in a topological space that can be formed from open sets (or equivalently from closed sets) through the operations of countable union, countable intersection, and relative complement. A countable set is a set with the same number of elements as some subset of the set of natural numbers.
The elements of a countable set can be counted one at a time, and although the counting may never finish, every element of the set will eventually be associated with a natural number. The union of two sets A and B is the set of all elements that belong to either set, and is denoted by A ∪ B. The intersection of A and B is the set of all elements that belong to both sets, and is denoted by A ∩ B. The relative complement of A in B is the set of elements that are in B but not in A. A bijection is a mapping from a set A to a set B that is both an injection (one-to-one) and a surjection (onto); it relates each member of A (the domain) to a unique member of B (the range), and each element of B also has a unique corresponding element in A.

The classical triadic Cantor set is obtained by dividing the unit line into three equal parts, discarding the middle part except for its end points, and repeating the operation with the two remaining parts ad infinitum. In the random version, it could be any of the three parts that is discarded at random after each division.

A topological space is a set of points and a set of neighbourhoods for each point that satisfy a set of axioms relating points and neighbourhoods. The definition of a topological space relies only on set theory and is the most general notion of a mathematical space.

For our purposes, the topological dimension is what one ordinarily understands by the concept of dimension. A point has topological dimension 0; a curve has topological dimension 1 (whether it is closed up into a circle or not); a sheet has topological dimension 2, as do the surfaces of a cylinder, sphere or doughnut; and so on. The Menger-Urysohn dimension is a generalized topological dimension of topological spaces, arrived at by mathematical induction. It is based on the observation that, in n-dimensional Euclidean space R^n, the boundaries of n-dimensional balls have dimension n − 1. Therefore it should be possible to define the dimension of a space inductively in terms of the dimensions of the boundaries of suitable open sets.

The Hausdorff dimension generalizes the notion of dimension to irregular sets such as fractals. For example, the triadic Cantor set has Hausdorff dimension log 2/log 3: the ratio of the logarithm of the number of parts remaining after each iteration (2) to the logarithm of the factor by which each part is scaled down (3).

A fractal is a mathematical set that typically displays self-similar patterns, and has dimensions that are fractions rather than integers. Geometric examples are branching trees, blood vessels, frond leaves, etc.

Deterministic chaos describes dynamical systems with unpredictable behaviour that is highly sensitive to initial conditions, but nevertheless globally determined in the sense that the trajectories are confined within a region of phase space called a strange attractor.

References

1. Ho MW. Golden geometry of E-infinity fractal spacetime. Story of phi part 5. Science in Society 2014, 62, 36-39.
2. Whitehead AN. Science and the Modern World, Lowell Lectures 1925, Collins Fontana Books, Glasgow, 1975.
3. Penrose R. The Road to Reality, A Complete Guide to the Laws of the Universe, Vintage Books, London, 2005, 1099 pp.
4. Wikipedia. Zeno's paradoxes. 20 October 2014.
5. Bergson H. Time and Free Will, An Essay on the Immediate Data of Consciousness (FL Pogson trans.), George Allen & Unwin, Ltd., New York, 1916.
6. Whitehead AN. Concept of Nature, Tarner Lectures delivered in Trinity College, Cambridge, November 1919, Gutenberg eBook, 16 July 2006.
7. Cantor G.
Über unendliche, lineare Punktmannigfaltigkeiten V. Mathematische Annalen 1883, 21, 545-591; cited in [8].
8. Rosser JB. Complex Evolutionary Dynamics in Urban-Regional and Ecologic-Economic Systems, DOI 10.1007/978-1-4419-8828-7, Springer Science + Business Media, LLC, 2011.
9. Mandelbrot BB. The Fractal Geometry of Nature, W.H. Freeman, San Francisco, 1983.
10. Lorenz EN. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130-141.
11. Lorenz EN. The Essence of Chaos, University of Washington Press, Seattle, 1993.
12. Ord GN. Fractal space-time: a geometric analogue of relativistic quantum mechanics. J. Phys. A: Math. Gen. 1983, 16, 1869-84.
13. Feynman RP and Hibbs AR. Quantum Mechanics and Path Integrals, McGraw-Hill, New York, 1965.
14. Nottale L. Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity, World Scientific, 1993.
15. Nottale L. Scale Relativity and Fractal Space-Time, A New Approach in Unifying Relativity and Quantum Mechanics, Imperial College Press, London, 2011.
16. El Naschie MS. A review of E infinity theory and the mass spectrum of high energy particle physics. Chaos, Solitons & Fractals 2004, 19, 209-36.
17. El Naschie MS. The theory of Cantorian spacetime and high energy particle physics (an informal review). Chaos, Solitons & Fractals 2009, 41, 2635-46.
18. Mauldin RD and Williams SC. Random recursive constructions: asymptotic geometric and topological properties. Trans. Amer. Math. Soc. 1986, 295, 325-46.
19. Ho MW. The story of phi part 1. Science in Society 2014, 62, 24-26.
20. "There is something about phi, Chapter 9 – Fractals and the golden ratio", Javier Romanach, YouTube, accessed 1 March 2014.
21. Ho MW. Watching the daisies grow. Science in Society 2014, 62, 27-29.
22. Marek-Crnjac L. The Hausdorff dimension of the Penrose universe. Physics Research International 2011, 874302, 4 pages.
23. Connes A. Noncommutative Geometry, Paris, 1994.
24. Ho MW. The Rainbow and the Worm, the Physics of Organisms, World Scientific, Singapore and London, 1993; 2nd edition, 1996; 3rd enlarged edition, 2008.
25. Schommers W. Space-time and quantum phenomena. In Quantum Theory and Pictures of Reality (W. Schommers ed.), pp. 217-77, Springer-Verlag, Berlin, 1989.
26. El Naschie MS. Quantum collapse of wave interference pattern in the two-slit experiment: a set theoretical resolution. Nonlinear Sci. Lett. A 2011, 2, 1-8.
27. El Naschie MS. Topological-geometrical and physical interpretation of the dark energy of the cosmos as a halo energy of the Schrödinger quantum wave. J. Mod. Phys. 2013, 4, 591-6.
28. Ho MW. E infinity spacetime, quantum paradoxes and quantum gravity. Story of phi part 6. Science in Society 2014, 62, 40-43.
29. "Our universe continually cycles through a series of 'aeons'", The Daily Galaxy, 26 September 2011.
30. Wayne CE. An introduction to KAM theory. Preprint, January 2008, trokam.pdf.
31. Ho MW. Golden cycles and organic spacetime. Science in Society 2014, 62, 32-34.
32. Kolmogorov-Arnold-Moser theorem. Wikipedia, 24 January 2014.
33. Effect of noise on the golden-mean quasiperiodicity at the chaos threshold.
34. Arnold tongue. Wikipedia, 4 February 2014.
35. Ivankov NY and Kuznetsov SP. Complex periodic orbits, renormalization, and scaling for quasiperiodic golden-mean transition to chaos. Phys. Rev. E 2001, 63, 046210.
36. Lombardi OW and Lombardi MA. The golden mean in the solar system. The Fibonacci Quarterly 1984, 22, 70-75.
37. Phi and the solar system.
Phi 1.618: The Golden Number, 13 May 2013.
38. "Is the solar system stable?" Scott Tremaine, Institute for Advanced Study, Summer 2011.
39. Sprott JC. Honors: A tribute to Dr Edward Norton Lorenz. EC Journal 2008, 55-61.
40. Sprott JC. Strange Attractors: Creating Patterns in Chaos, M&T Books, New York, 1993.
41. Viswanath D. Symbolic dynamics and periodic orbits of the Lorenz attractor. Nonlinearity 2003, 16, 1035-56.
42. Birman JS and Williams RF. Knotted periodic orbits in dynamical systems I. Lorenz's equations. Topology 1983, 22, 47-82.
43. Gutzwiller M. Quantum chaos. Scientific American, January 1992; republished 27 October 2008.
44. Selvam AM. Cantorian fractal space-time fluctuations in turbulent fluid flows and the kinetic theory of gases. Apeiron 2002, 9, 1-20.
45. Kraut DA, Carroll KS and Herschlag D. Challenges in enzyme mechanisms and energetics. Ann. Rev. Biochem. 2003, 72, 517-71.
46. Ho MW. Living Rainbow H2O, World Scientific, Singapore, 2012.
47. Ho MW. Water is the means, medium, and message of life. Int. J. Design, Nature and Ecodynamics 2014, 9, 1-12.
49. Ho MW. Circular thermodynamics of organisms and sustainable systems. Systems 2013, 1, 30-49.
50. Glauber RJ. Coherence and quantum detection. In Quantum Optics (RJ Glauber ed.), Academic Press, New York, 1969.
51. Metabolic pathways. Sigma Aldrich, accessed 27 October 2014.
52. Prigogine I. Time, structure and fluctuations. Nobel Lecture, 8 December 1977.
53. Vitiello G. Fractals as macroscopic manifestation of squeezed coherent states and brain dynamics. J. Phys.: Conf. Ser. 2012, 380, 012021 (13 pp).
54. Vitiello G. Fractals, coherent states and self-similarity induced noncommutative geometry. Phys. Lett. A 2012, 376, 2527-2532.
55. Vitiello G. On the isomorphism between dissipative systems, fractal self-similarity and electrodynamics. Toward an integrated vision of Nature. Systems 2014, 2, 203-216.
56. Peitgen HO, Jürgens H and Saupe D. Chaos and Fractals. New Frontiers of Science, Springer-Verlag, Berlin, 1986.
57. Andronov AA, Vitt AA and Khaikin SE. Theory of Oscillators, Dover Publications, Inc., New York, 1966.
58. Celeghini E, Rasetti M and Vitiello G. Quantum dissipation. Ann. Phys. 1992, 215, 156-170.
59. 't Hooft G. Quantum gravity as a dissipative deterministic system. Classical and Quantum Gravity 1999, 16, 3263-3279.
60. Blasone M, Jizba P and Vitiello G. Dissipation and quantization. Phys. Lett. A 2001, 287, 205-210; Blasone M, Celeghini E, Jizba P and Vitiello G. Quantization, group contraction and zero point energy. Phys. Lett. A 2003, 310, 393-399; Blasone M, Jizba P, Scardigli F and Vitiello G. Dissipation and quantization in composite systems. Phys. Lett. A 2009, 373, 4106-4112.
61. Blasone M, Jizba P and Vitiello G. Quantum Field Theory and its Macroscopic Manifestations, Imperial College Press, London, 2011.
62. Ho MW. Quantum world coming series. Science in Society 2004, 22, 4-15.
63. El Naschie MS. Quantum loops, wild topology and fat Cantor sets in transfinite high-energy physics. Chaos, Solitons & Fractals 2002, 13, 1167-74.
64. El Naschie MS. Wild topology, hyperbolic geometry and fusion algebra in high-energy physics. Chaos, Solitons & Fractals 2002, 13, 1935-45.
65. El Naschie MS. On the exact mass spectrum of quarks. Chaos, Solitons & Fractals 2002, 14, 369-76.
66. El Naschie MS. On a class of general theories for high energy particle physics. Chaos, Solitons & Fractals 2002, 14, 649-68.
67. Marek-Crnjac L.
The mass spectrum of high energy elementary particles via El Naschie's E∞ golden mean nested oscillators, the Dunkerly-Southwell eigenvalue theorems and KAM. Chaos, Solitons & Fractals 2003, 18, 125-33.
68. Freeman WJ. Mass Action in the Nervous System, Academic Press, New York, 1975, 2004.
69. Freeman WJ. Neurodynamics, an Exploration of Mesoscopic Brain Dynamics, Springer-Verlag, 2000.
70. Freeman WJ. How Brains Make Up Their Minds, Columbia University Press, 2001.
71. Umezawa H. Advanced Field Theory: Micro, Macro and Thermal Physics, American Institute of Physics, New York, 1993.
72. Vitiello G. Hiroomi Umezawa and quantum field theory. NeuroQuantology 2011, 9, DOI: 10.14704/nq.2011.9.3.450.
73. Freeman WJ. Origin, structure, and role of background EEG activity. Part 1. Analytic phase. Clin. Neurophysiol. 2004, 115, 2077-88.
74. Freeman WJ. Origin, structure, and role of background EEG activity. Part 2. Analytic amplitude. Clin. Neurophysiol. 2004, 115, 2089-107.
75. Freeman WJ. Origin, structure, and role of background EEG activity. Part 3. Neural frame classification. Clin. Neurophysiol. 2005, 116, 1118-29.
76. Freeman WJ. Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biol. Cybern. 2005, 92, 367-79.
77. Bassett DS, Meyer-Lindenberg A, Achard S, Duke T and Bullmore E. Adaptive reconfiguration of fractal small-world human brain functional networks. PNAS 2006, 103, 19518-23.
78. Freeman WJ and Burke BC. A neurobiological theory of meaning in perception. Part 4. Multicortical patterns of amplitude modulation in gamma EEG. Int. J. Bifurc. & Chaos 2003, 13, 2857-66.
79. Freeman WJ and Rogers LJ. A neurobiological theory of meaning in perception. Part 5. Multicortical patterns of phase modulation in gamma EEG. Int. J. Bifurc. & Chaos 2003, 13, 2867-87.
80. Ricciardi LM and Umezawa H. Brain and physics of many-body problems. Kybernetik 1967, 4, 44-48.
81. Vitiello G. My Double Unveiled, John Benjamins Pub. Co., Amsterdam, 2001.
82. Umezawa H and Vitiello G. Quantum Mechanics, Bibliopolis, Naples, 1985.
83. Vitiello G. Dissipation and memory capacity in the quantum brain model. Int. J. Mod. Phys. B 1995, 9, 973.
84. Jibu M and Yasue K. Quantum Brain Dynamics and Consciousness, John Benjamins, Amsterdam, 1995.
85. Del Giudice E, Preparata G and Vitiello G. Water as a free electric dipole laser. Phys. Rev. Lett. 1988, 61, 1085-88.
86. Plenz D. Neuronal avalanches and coherence potentials. Eur. Phys. J. Special Topics 2012, 205, 259-301.
87. Ho MW. Golden music of the brain. Science in Society 2014, 62.
Density matrix

A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. This should be contrasted with a single state vector that describes a quantum system in a pure state. The density matrix is the quantum-mechanical analogue of a phase-space probability measure (a probability distribution of position and momentum) in classical statistical mechanics.

Explicitly, suppose a quantum system may be found in state |ψ1⟩ with probability p1, or in state |ψ2⟩ with probability p2, or in state |ψ3⟩ with probability p3, and so on. The density operator for this system is [1]

ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|,

where the {|ψ_i⟩} need not be orthogonal and Σ_i p_i = 1. By choosing an orthonormal basis {|u_m⟩}, one may resolve the density operator into the density matrix, whose elements are [1]

ρ_mn = Σ_i p_i ⟨u_m|ψ_i⟩⟨ψ_i|u_n⟩ = ⟨u_m|ρ|u_n⟩.

The density operator can also be defined in terms of the density matrix,

ρ = Σ_mn |u_m⟩ ρ_mn ⟨u_n|.

For an operator A (which describes an observable A of the system), the expectation value ⟨A⟩ is given by [1]

⟨A⟩ = Σ_i p_i ⟨ψ_i|A|ψ_i⟩ = Σ_mn ⟨u_m|ρ|u_n⟩⟨u_n|A|u_m⟩ = Σ_mn ρ_mn A_nm = tr(ρA).

In words, the expectation value of A for the mixed state is the sum of the expectation values of A for each of the pure states |ψ_i⟩ weighted by the probabilities p_i, and it can be computed as the trace of the product of the density matrix with the matrix representation of A in the same basis.

Mixed states arise in situations where the experimenter does not know which particular states are being manipulated. Examples include a system in thermal equilibrium (or additionally chemical equilibrium), or a system with an uncertain or randomly varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state [2]. The density matrix is also a crucial tool in quantum decoherence theory.

The density matrix is a representation of a linear operator called the density operator. The close relationship between matrices and operators is a basic concept in linear algebra; in practice, the terms density matrix and density operator are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional [3]. The formalism was introduced by John von Neumann [4] in 1927, and independently, but less systematically, by Lev Landau [5] and Felix Bloch [6] in 1927 and 1946 respectively.
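As a numerical sketch of the trace formula above (ours; a generic qubit example, not tied to any particular physical system), the expectation value computed as tr(ρA) agrees with the probability-weighted average over the pure states:

import numpy as np

# Two non-orthogonal qubit states |psi_1>, |psi_2> with probabilities p_1, p_2
psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
p1, p2 = 0.3, 0.7

# Density matrix rho = sum_i p_i |psi_i><psi_i|
rho = p1 * np.outer(psi1, psi1.conj()) + p2 * np.outer(psi2, psi2.conj())

A = np.array([[0, 1], [1, 0]], dtype=complex)    # an observable (Pauli x)

weighted = p1 * (psi1.conj() @ A @ psi1) + p2 * (psi2.conj() @ A @ psi2)
via_trace = np.trace(rho @ A)
print(weighted.real, via_trace.real)             # identical: 0.7 and 0.7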
A quantum system with a state vector $|\psi\rangle$ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: for example, there may be a 50% probability that the state vector is $|\psi_1\rangle$ and a 50% chance that the state vector is $|\psi_2\rangle$. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix. A mixed state is different from a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example $|\psi\rangle = (|\psi_1\rangle + |\psi_2\rangle)/\sqrt{2}$. For a pure state, $\operatorname{tr}(\rho^2) = 1$ always holds.

Example: Light polarization

Figure: an incandescent light bulb (1) emits completely randomly polarized photons (2) with mixed-state density matrix
$$\begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}.$$
After passing through a vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have the pure-state density matrix
$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}.$$

An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, $|R\rangle$ (right circular polarization) and $|L\rangle$ (left circular polarization). A photon can also be in a superposition state, such as $(|R\rangle + |L\rangle)/\sqrt{2}$ (vertical polarization) or $(|R\rangle - |L\rangle)/\sqrt{2}$ (horizontal polarization). More generally, it can be in any state $\alpha|R\rangle + \beta|L\rangle$, corresponding to linear, circular, or elliptical polarization. If we pass $(|R\rangle + |L\rangle)/\sqrt{2}$ polarized light through a circular polarizer which allows only $|R\rangle$ polarized light, or only $|L\rangle$ polarized light, the intensity is reduced by half in both cases. This may make it seem like half of the photons are in state $|R\rangle$ and the other half in state $|L\rangle$. But this is not correct: both $|R\rangle$ and $|L\rangle$ photons are partly absorbed by a vertical linear polarizer, but the $(|R\rangle + |L\rangle)/\sqrt{2}$ light will pass through that polarizer with no absorption whatsoever.

However, unpolarized light (such as the light from an incandescent light bulb) is different from any state like $\alpha|R\rangle + \beta|L\rangle$ (linear, circular, or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate. Indeed, unpolarized light cannot be described as any state of the form $\alpha|R\rangle + \beta|L\rangle$. However, unpolarized light can be described perfectly by assuming that each photon is either $|R\rangle$ with 50% probability or $|L\rangle$ with 50% probability. The same behavior would occur if each photon was either vertically polarized with 50% probability or horizontally polarized with 50% probability. Therefore, unpolarized light cannot be described by any pure state, but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state.
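A quick numerical check of the purity criterion $\operatorname{tr}(\rho^2) = 1$ for the two density matrices quoted above (a minimal sketch):

```python
import numpy as np

rho_unpolarized = np.array([[0.5, 0.0], [0.0, 0.5]])  # mixed state from the bulb
rho_vertical    = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state after the polarizer

# tr(rho^2) equals 1 for a pure state and is < 1 for a mixed state
for rho in (rho_unpolarized, rho_vertical):
    print(np.trace(rho @ rho).real)          # 0.5, then 1.0
```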
One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state.

Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: a radioactive decay can emit two photons traveling in opposite directions, in the quantum state $(|R,L\rangle + |L,R\rangle)/\sqrt{2}$. The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light. More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else.

Mathematical description

The state vector $|\psi\rangle$ of a pure state completely determines the statistical behavior of a measurement. For concreteness, take an observable quantity, and let $A$ be the associated observable operator that has a representation on the Hilbert space $\mathcal{H}$ of the quantum system. For any real-valued, analytical function $F$ defined on the real numbers,[7] suppose that $F(A)$ is the result of applying $F$ to the outcome of a measurement. The expectation value of $F(A)$ is
$$\langle\psi|F(A)|\psi\rangle.$$
Now consider a mixed state prepared by statistically combining two different pure states $|\psi\rangle$ and $|\phi\rangle$, with the associated probabilities $p$ and $1-p$, respectively. The associated probabilities mean that the preparation process for the quantum system ends in the state $|\psi\rangle$ with probability $p$ and in the state $|\phi\rangle$ with probability $1-p$. It is not hard to show that the statistical properties of the observable for the system prepared in such a mixed state are completely determined. However, there is no state vector $|\xi\rangle$ which determines this statistical behavior in the sense that the expectation value of $F(A)$ is
$$\langle\xi|F(A)|\xi\rangle.$$
Nevertheless, there is a unique operator $\rho$ such that the expectation value of $F(A)$ can be written as
$$\operatorname{tr}[\rho F(A)],$$
where the operator $\rho$ is the density operator of the mixed system. A simple calculation shows that the operator $\rho$ for the above discussion is given by
$$\rho = p\,|\psi\rangle\langle\psi| + (1-p)\,|\phi\rangle\langle\phi|.$$
For the above example of unpolarized light, the density operator is
$$\rho = \tfrac{1}{2}|R\rangle\langle R| + \tfrac{1}{2}|L\rangle\langle L|.$$
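The EPR remark above can be verified numerically: tracing out one photon of the entangled pair leaves exactly the unpolarized density matrix $\tfrac12|R\rangle\langle R| + \tfrac12|L\rangle\langle L|$. A minimal NumPy sketch (the qubit encoding of $|R\rangle$ and $|L\rangle$ is an arbitrary illustrative choice):

```python
import numpy as np

R = np.array([1.0, 0.0])   # |R>, right circular polarization
L = np.array([0.0, 1.0])   # |L>, left circular polarization

# Two-photon entangled state (|R,L> + |L,R>)/sqrt(2)
psi = (np.kron(R, L) + np.kron(L, R)) / np.sqrt(2)

# Full density matrix, reshaped to indices (a, b, a', b')
rho_AB = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over photon B: rho_A = tr_B rho_AB
rho_A = np.einsum('abcb->ac', rho_AB)
print(rho_A)  # [[0.5, 0.], [0., 0.5]] -- same as unpolarized light
```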
Formulation

For a finite-dimensional function space, the most general density operator is of the form
$$\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|,$$
where the coefficients $p_j$ are non-negative and add up to one. This represents a statistical mixture of pure states. If the given system is closed, then one can think of a mixed state as representing a single system with an uncertain preparation history, as explicitly detailed above; or we can regard the mixed state as representing an ensemble of systems, i.e. a large number of copies of the system in question, where $p_j$ is the proportion of the ensemble being in the state $|\psi_j\rangle$. An ensemble is described by a pure state if every copy of the system in that ensemble is in the same state, i.e. it is a pure ensemble. If the system is not closed, however, then it is simply not correct to claim that it has some definite but unknown state vector, as the density operator may record physical entanglements to other systems.

Consider a quantum ensemble of size $N$ with occupancy numbers $n_1, n_2, \ldots, n_k$ corresponding to the orthonormal states $|1\rangle, \ldots, |k\rangle$, respectively, where $n_1 + \ldots + n_k = N$ and, thus, the coefficients are $p_j = n_j/N$. For a pure ensemble, where all $N$ particles are in state $|i\rangle$, we have $n_j = 0$ for all $j \neq i$, from which we recover the corresponding density operator $\rho = |i\rangle\langle i|$. However, the density operator of a mixed state does not capture all the information about a mixture; in particular, the coefficients $p_j$ and the kets $\psi_j$ are not recoverable from the operator $\rho$ without additional information. This non-uniqueness implies that different ensembles or mixtures may correspond to the same density operator. Such equivalent ensembles or mixtures cannot be distinguished by measurement of observables alone. This equivalence can be characterized precisely. Two ensembles $\psi$, $\psi'$ define the same density operator if and only if there is a unitary matrix $U$ such that
$$|\psi_i'\rangle\sqrt{p_i'} = \sum_j u_{ij}\,|\psi_j\rangle\sqrt{p_j}.$$
This is simply a restatement of the following fact from linear algebra: for two square matrices $M$ and $N$, $MM^* = NN^*$ if and only if $M = NU$ for some unitary $U$. (See square root of a matrix for more details.) Thus there is a unitary freedom in the ket mixture or ensemble that gives the same density operator. However, if the kets in the mixture are orthonormal, then the original probabilities $p_j$ are recoverable as the eigenvalues of the density matrix.

In operator language, a density operator is a positive semidefinite, Hermitian operator of trace 1 acting on the state space.[8] A density operator describes a pure state if it is a rank-one projection. Equivalently, a density operator $\rho$ describes a pure state if and only if $\rho = \rho^2$. Geometrically, when the state is not expressible as a convex combination of other states, it is a pure state.[9] The family of mixed states is a convex set and a state is pure if it is an extremal point of that set. It follows from the spectral theorem for compact self-adjoint operators that every mixed state is a finite convex combination of pure states. This representation is not unique. Furthermore, a theorem of Andrew Gleason states that certain functions defined on the family of projections and taking values in [0,1] (which can be regarded as quantum analogues of probability measures) are determined by unique mixed states. See quantum logic for more details.
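The unitary freedom described above is easy to exhibit numerically: two visibly different ensembles can produce one and the same density operator. A minimal sketch (the particular states are illustrative):

```python
import numpy as np

def rho_from_ensemble(states, probs):
    """rho = sum_j p_j |psi_j><psi_j|"""
    return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (z0 + z1) / np.sqrt(2), (z0 - z1) / np.sqrt(2)

rho1 = rho_from_ensemble([z0, z1], [0.5, 0.5])       # half |0>, half |1>
rho2 = rho_from_ensemble([plus, minus], [0.5, 0.5])  # half |+>, half |->
print(np.allclose(rho1, rho2))                       # True: one density operator
```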
Measurement

Let $A$ be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states $|\psi_j\rangle$ occurs with probability $p_j$. Then the corresponding density operator is
$$\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|.$$
The expectation value of the measurement can be calculated by extending from the case of pure states (see Measurement in quantum mechanics):
$$\langle A\rangle = \sum_j p_j \langle\psi_j|A|\psi_j\rangle = \sum_j p_j \operatorname{tr}\left(|\psi_j\rangle\langle\psi_j|A\right) = \sum_j \operatorname{tr}\left(p_j|\psi_j\rangle\langle\psi_j|A\right) = \operatorname{tr}\Big(\sum_j p_j|\psi_j\rangle\langle\psi_j|A\Big) = \operatorname{tr}(\rho A),$$
where $\operatorname{tr}$ denotes trace. Moreover, if $A$ has spectral resolution
$$A = \sum_i a_i |a_i\rangle\langle a_i| = \sum_i a_i P_i, \qquad P_i = |a_i\rangle\langle a_i|,$$
the corresponding density operator after the measurement is given by
$$\rho' = \sum_i P_i \rho P_i.$$
Note that the above density operator describes the full ensemble after measurement. The sub-ensemble for which the measurement result was the particular value $a_i$ is described by the different density operator
$$\rho_i' = \frac{P_i \rho P_i}{\operatorname{tr}[\rho P_i]}.$$
This is true assuming that $|a_i\rangle$ is the only eigenket (up to phase) with eigenvalue $a_i$; more generally, $P_i$ in this expression would be replaced by the projection operator onto the eigenspace corresponding to eigenvalue $a_i$.

Entropy

The von Neumann entropy $S$ of a mixture can be expressed in terms of the eigenvalues of $\rho$ or in terms of the trace and logarithm of the density operator $\rho$. Since $\rho$ is a positive semi-definite operator, it has a spectral decomposition
$$\rho = \sum_i \lambda_i |\varphi_i\rangle\langle\varphi_i|,$$
where the $|\varphi_i\rangle$ are orthonormal vectors, $\lambda_i > 0$ and $\sum_i \lambda_i = 1$. Then the entropy of a quantum system with density matrix $\rho$ is
$$S = -\sum_i \lambda_i \ln\lambda_i = -\operatorname{tr}(\rho\ln\rho).$$
Also, it can be shown that
$$S\Big(\rho = \sum_i p_i\rho_i\Big) = H(p_i) + \sum_i p_i S(\rho_i)$$
when the $\rho_i$ have orthogonal support, where $H(p)$ is the Shannon entropy. This entropy can increase but never decrease with a projective measurement; however, generalized measurements can decrease entropy.[10][11] The entropy of a pure state is zero, while that of a proper mixture is always greater than zero. Therefore a pure state may be converted into a mixture by a measurement, but a proper mixture can never be converted into a pure state. Thus the act of measurement induces a fundamental irreversible change in the density matrix; this is analogous to the "collapse" of the state vector, or wavefunction collapse. Perhaps counterintuitively, the measurement actually decreases information by erasing quantum interference in the composite system; cf. quantum entanglement, einselection, and quantum decoherence. (A subsystem of a larger system can be turned from a mixed to a pure state, but only by increasing the von Neumann entropy elsewhere in the system. This is analogous to how the entropy of an object can be lowered by putting it in a refrigerator: the air outside the refrigerator's heat-exchanger warms up, gaining even more entropy than was lost by the object in the refrigerator. See the second law of thermodynamics and Entropy in thermodynamics and information theory.)
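The spectral formula $S = -\sum_i \lambda_i \ln\lambda_i$ can be evaluated directly from the eigenvalues of $\rho$; a minimal sketch (the example matrices are illustrative):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -sum_i lambda_i ln(lambda_i) over the nonzero eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]            # drop numerically zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))

pure  = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])
print(von_neumann_entropy(pure))      # 0.0  (pure state)
print(von_neumann_entropy(mixed))     # ln 2 ~ 0.693  (proper mixture)
```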
The von Neumann equation for time evolution

Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville-von Neumann equation) describes how a density operator evolves in time (in fact, the two equations are equivalent, in the sense that either can be derived from the other). The von Neumann equation dictates that[12][13]
$$i\hbar\frac{\partial\rho}{\partial t} = [H, \rho],$$
where the brackets denote a commutator. Note that this equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference:
$$i\hbar\frac{dA^{(H)}}{dt} = -[H, A^{(H)}],$$
where $A^{(H)}(t)$ is some Heisenberg-picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value $\langle A\rangle$ comes out the same as in the Schrödinger picture. Taking the density operator to be in the Schrödinger picture makes sense, since it is composed of 'Schrödinger' kets and bras evolved in time, as per the Schrödinger picture. If the Hamiltonian is time-independent, this differential equation can be easily solved to yield
$$\rho(t) = e^{-iHt/\hbar}\,\rho(0)\,e^{iHt/\hbar}.$$

"Quantum Liouville", Moyal's equation

The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,
$$W(x,p) \stackrel{\mathrm{def}}{=} \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \psi^*(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy.$$
The equation for the time evolution of the Wigner function is then the Wigner transform of the above von Neumann equation,
$$\frac{\partial W(q,p,t)}{\partial t} = -\{\{W(q,p,t), H(q,p)\}\},$$
where $H(q,p)$ is the Hamiltonian and $\{\{\cdot,\cdot\}\}$ is the Moyal bracket, the transform of the quantum commutator. The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant $\hbar$, $W(q,p,t)$ reduces to the classical Liouville probability density function in phase space. The classical Liouville equation can be solved using the method of characteristics for partial differential equations, the characteristic equations being Hamilton's equations. The Moyal equation in quantum mechanics similarly admits formal solutions in terms of quantum characteristics, predicated on the ∗-product of phase space, although, in actual practice, solution-seeking follows different methods.

Composite systems

The joint density matrix of a composite system of two systems $A$ and $B$ is described by $\rho_{AB}$. The subsystems are then described by their reduced density operators,
$$\rho_A = \operatorname{tr}_B\,\rho_{AB},$$
where $\operatorname{tr}_B$ is called the partial trace over system $B$. If $A$ and $B$ are two distinct and independent systems, then $\rho_{AB} = \rho_A \otimes \rho_B$, which is a product state.

C*-algebraic formulation of states

It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[14][15] For this reason, observables are identified with elements of an abstract C*-algebra $A$ (that is, one without a distinguished representation as an algebra of operators) and states are positive linear functionals on $A$. However, by using the GNS construction, we can recover Hilbert spaces which realize $A$ as a subalgebra of operators.
Geometrically, a pure state on a C*-algebra $A$ is a state which is an extreme point of the set of all states on $A$. By properties of the GNS construction these states correspond to irreducible representations of $A$. The states of the C*-algebra of compact operators $K(H)$ correspond exactly to the density operators, and therefore the pure states of $K(H)$ are exactly the pure states in the sense of quantum mechanics. The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables becomes an abelian C*-algebra. In that case the states become probability measures, as noted in the introduction.

Notes and references
1. Sakurai, J., Modern Quantum Mechanics (2nd ed.), p. 181.
2. Hall, B.C. (2013), Quantum Theory for Mathematicians, p. 419.
3.
4.
5. Schlüter, Michael and Lu Jeu Sham (1982), "Density functional theory", Physics Today 35 (2): 36.
6. Ugo Fano (June 1995), "Density matrices as polarization vectors", Rendiconti Lincei 6 (2): 123–130.
7. Technically, F must be a Borel function.
8. Hall, B.C. (2013), Quantum Theory for Mathematicians, Springer, p. 423.
9. Hall, B.C. (2013), Quantum Theory for Mathematicians, Springer, p. 439.
10. Nielsen, Michael; Chuang, Isaac (2000), Quantum Computation and Quantum Information. Chapter 11: Entropy and information, Theorem 11.9, "Projective measurements cannot decrease entropy".
11.
12. Breuer, Heinz; Petruccione, Francesco (2002), The Theory of Open Quantum Systems, p. 110.
13. Schwabl, Franz (2002), Statistical Mechanics, p. 16.
14. See appendix.
15. Emch, Gerard G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory.
2.2  Quantum Mechanical Electromigration Description

The force due to EM is modeled in continuum mechanical problems as described above by
$$\mathbf{F} = Z^{*}e\rho\mathbf{J}, \tag{2.20}$$
where $Z^{*}$ is called the effective valence or effective charge. The EM induced force on an atomic scale was theoretically studied by Huntington et al. [64]. They used some simplifications, such as that the defects are decoupled from the lattice, the electrons are scattered by atoms only, and the creation or annihilation of phonons is neglected. Under these assumptions the $x$-directional momentum transferred from the scattered electrons to the defects per time is given by
$$\frac{dM_{x}}{dt} = -\Bigl(\frac{1}{4\pi^{3}}\Bigr)^{2}\iint \Bigl(\frac{m_{0}}{\hbar}\Bigr)\Bigl(\frac{\partial E}{\partial k_{x}} - \frac{\partial E'}{\partial k'_{x}}\Bigr) f(\mathbf{k})\bigl(1 - f(\mathbf{k}')\bigr) W_{d}(\mathbf{k},\mathbf{k}')\,\mathrm{d}\mathbf{k}\,\mathrm{d}\mathbf{k}', \tag{2.21}$$
where $f(\mathbf{k})$ is the distribution function of the electrons and $W_{d}(\mathbf{k},\mathbf{k}')$ is the transition probability per unit time of an electron in state $\mathbf{k}$ to jump into state $\mathbf{k}'$. By separating the differentiation of the two energies into two integrals, interchanging the primed and the unprimed variables of the second integration, and employing the substitution
$$\frac{f(\mathbf{k}) - f_{0}(\mathbf{k})}{\tau_{a}} = \int \Bigl[f(\mathbf{k})\bigl(1 - f(\mathbf{k}')\bigr) W_{d}(\mathbf{k},\mathbf{k}') - f(\mathbf{k}')\bigl(1 - f(\mathbf{k})\bigr) W_{d}(\mathbf{k}',\mathbf{k})\Bigr]\frac{\mathrm{d}\mathbf{k}'}{4\pi^{3}}, \tag{2.22}$$
equation (2.21) can be written in the form
$$\frac{dM_{x}}{dt} = \Bigl(\frac{m_{0}}{\tau_{a}\hbar}\Bigr)\int \frac{\partial E}{\partial k_{x}}\, f(\mathbf{k})\,\frac{\mathrm{d}\mathbf{k}}{4\pi^{3}}, \tag{2.23}$$
where $\tau_{a}$ is the relaxation time of the electrons. As a common assumption for metals, the relaxation time is taken to be constant over all states $\mathbf{k}$. $f_{0}(\mathbf{k})$ describes the electron distribution at equilibrium and integrates to zero. The current density in the $x$-direction can be expressed by [3, 78]
$$J_{x} = \frac{-e}{4\pi^{3}}\int f(\mathbf{k})\,\frac{\partial E(\mathbf{k})}{\hbar\,\partial k_{x}}\,\mathrm{d}\mathbf{k}. \tag{2.24}$$
By comparison of (2.23) and (2.24) the relation
$$\frac{\mathrm{d}M_{x}}{\mathrm{d}t} = \frac{J_{x}m_{0}}{e\tau_{a}} \tag{2.25}$$
is obtained. With the density of defects $N_{d}$, the density of the conducting electrons $n$, and the contribution of the defects to the resistivity $\rho_{d} = |m^{*}|/(ne^{2}\tau_{a})$, the force can be expressed by
$$F_{\mathrm{wind}} = -\frac{n e J_{x}\,\rho_{d}\, m_{0}}{N_{d}\,|m^{*}|} = -e E_{x} z\,\frac{N\rho_{d}}{\rho N_{d}}\,\frac{m_{0}}{|m^{*}|}, \tag{2.26}$$
where the density of the conducting electrons is substituted by $n = zN$, with $N$ the density of the lattice atoms and $z$ the number of conducting electrons per lattice atom.
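Equation (2.20) is a one-line computation once values are chosen; the following sketch uses purely illustrative numbers (none are taken from the text) just to show the order of magnitude involved:

```python
from scipy.constants import e  # elementary charge in coulombs

# Illustrative values only (none are taken from the text)
Z_eff = -4.0      # effective valence, dimensionless (assumed)
rho   = 2.7e-8    # resistivity, ohm*m (assumed, aluminum-like)
J     = 1.0e10    # current density, A/m^2 (assumed)

# Electromigration driving force per atom, F = Z* e rho J, eq. (2.20)
F = Z_eff * e * rho * J
print(F)          # about -1.7e-16 N
```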
For an ion at a saddle point between two vacant positions the interaction of the ion and the conducting electrons is the strongest, whereas the interaction at lattice points is the weakest. On the way from one lattice point to another the interaction varies, and as this position-dependent interaction is not known, Huntington et al. [64] chose a sinusoidal form leading to
$$\mathbf{F}(y) = \mathbf{F}_{\mathrm{m}}\sin^{2}\Bigl(\frac{\pi y}{a}\Bigr), \tag{2.27}$$
where $a$ is the jump distance and $\mathbf{F}_{\mathrm{m}}$ is the maximum force. For a jump path $j$ with an angle $\theta_{j}$ between the path and the force $\mathbf{F}_{\mathrm{m}}$, the energy required for a jump can be calculated by
$$\Delta V_{j} = \int_{0}^{a_{j}/2}\mathbf{F}(y)\cdot\mathrm{d}\mathbf{y} = \frac{1}{4}a_{j}F_{\mathrm{wind}}\cos\theta_{j}. \tag{2.28}$$
The net flow of atoms due to EM in the current direction is the sum of the probabilities of jumps (along the paths $j$) times the jump length in the current direction [78]:
$$J_{\mathrm{wind}} = \sum_{j} C\nu_{0}\exp\Bigl(-\frac{V}{k_{\mathrm{B}}T}\Bigr)\, a_{j}\cos\theta_{j}\,\sinh\Bigl(\frac{\Delta V_{j}}{k_{\mathrm{B}}T}\Bigr), \tag{2.29}$$
where $\nu_{0}$ is the atomic vibration frequency, $C$ the concentration of the ions in the metal, and $V$ the saddle point energy including the formation energy and the motion energy of vacancies. This equation can be linearized and rewritten as
$$J_{\mathrm{wind}} = \frac{C D F_{\mathrm{wind}}}{k_{\mathrm{B}}T}, \tag{2.30}$$
with $D$ being the diffusion coefficient,
$$D = \frac{1}{2}\nu_{0}\exp\Bigl(-\frac{V}{k_{\mathrm{B}}T}\Bigr)\sum_{j}a_{j}^{2}\cos\theta_{j}. \tag{2.31}$$
The resulting equation (2.30) differs from the Nernst-Einstein relation by the factor 2 in the denominator, which is the average of the chosen position-dependent interaction of the conducting electrons and the ions on their path from one lattice point to another. In addition to the force due to the electron wind, the force due to the electric potential gradient also has to be included, leading to the effective charge
$$e' = eZ^{*} = ez\Bigl(\frac{1}{2}\Bigl(\frac{\rho_{d}N}{\rho N_{d}}\,\frac{m_{0}}{|m^{*}|}\Bigr) - 1\Bigr) \tag{2.32}$$
and to the effective valence $Z^{*}$. Using the Einstein relation for field-assisted diffusion in a potential, the drift velocity can be expressed by [94]
$$v_{\mathrm{EM}} = \frac{DF}{k_{\mathrm{B}}T} = \frac{D Z^{*} e \rho J_{x}}{k_{\mathrm{B}}T}. \tag{2.33}$$
This was the first quantum mechanical expression of the EM induced ion velocity. Within the ballistic model [96] it was shown that there is a linear relation between the EM induced flux and the current density.
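Similarly, the drift velocity of equation (2.33) follows from a handful of material parameters; a minimal sketch with assumed, illustrative values:

```python
from scipy.constants import e, k  # elementary charge, Boltzmann constant

# Illustrative values only (none are taken from the text)
D     = 1.0e-18   # diffusion coefficient, m^2/s (assumed)
Z_eff = -4.0      # effective valence (assumed)
rho   = 2.7e-8    # resistivity, ohm*m (assumed)
J     = 1.0e10    # current density, A/m^2 (assumed)
T     = 500.0     # temperature, K (assumed)

# Drift velocity v = D Z* e rho J / (kB T), eq. (2.33)
v = D * Z_eff * e * rho * J / (k * T)
print(v)          # about -2.5e-14 m/s
```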
For a quantum mechanical force calculation another equation was developed and widely used, based on the scattering states of the conducting electrons [15, 117, 124, 132, 128] obtained from the linear response theory of Kubo [84]:
$$\mathbf{F}_{\mathrm{wind}} = \frac{e\Omega}{4\pi^{3}}\bigg[\iint_{\mathrm{FS}}\frac{\mathrm{d}^{2}\mathbf{k}}{|\nabla E_{\mathbf{k}}|}\,\tau_{E_{\mathbf{k}}}\,\mathbf{v}_{\mathrm{F}}(\mathbf{k})\iiint \psi_{\mathbf{k}}^{*}(\mathbf{r})\,\nabla_{\mathbf{R}}V(|\mathbf{R}-\mathbf{r}|)\,\psi_{\mathbf{k}}(\mathbf{r})\,\mathrm{d}^{3}\mathbf{r}\bigg]\cdot\mathbf{E}. \tag{2.34}$$
The considered atom is located at position $\mathbf{R}$ and $V(|\mathbf{R}-\mathbf{r}|)$ is the effective one-electron potential. $\mathbf{E}$ is the electric field, $\Omega$ is the volume of the unit cell, $\tau_{E}$ is the relaxation time of the scattered electron, $\mathbf{v}_{\mathrm{F}}(\mathbf{k})$ is the $\mathbf{k}$-dependent Fermi velocity, and $\psi_{\mathbf{k}}$ is the wave function of the electron, which can be calculated with the Schrödinger equation [37]
$$-\frac{1}{2}\nabla^{2}\psi_{\mathbf{k}}(\mathbf{r}) + V(\mathbf{r})\,\psi_{\mathbf{k}}(\mathbf{r}) = E_{\mathbf{k}}\,\psi_{\mathbf{k}}(\mathbf{r}). \tag{2.35}$$
In (2.34) the first part on the right-hand side has the meaning of the effective charge; it has the form of a second-order tensor and reflects the possible dependence of the EM force on the orientation of the crystal with respect to the current direction, especially for non-cubic metals (e.g. zinc) [59]. For periodic structures the integral in (2.34) is always equal to 0. The reason is the symmetry of the wave functions with respect to the crystal wave vector,
$$\psi_{\mathbf{k}}(\mathbf{x}) = \psi_{-\mathbf{k}}^{*}(\mathbf{x}), \tag{2.36}$$
which makes the result of the integration over space an even function in $\mathbf{k}$, due to the fact that the potential is a real-valued function:
$$\iiint \psi_{\mathbf{k}}^{*}(\mathbf{r})\,\nabla_{\mathbf{R}}V(|\mathbf{R}-\mathbf{r}|)\,\psi_{\mathbf{k}}(\mathbf{r})\,\mathrm{d}^{3}\mathbf{r} = \iiint \psi_{-\mathbf{k}}^{*}(\mathbf{r})\,\nabla_{\mathbf{R}}V(|\mathbf{R}-\mathbf{r}|)\,\psi_{-\mathbf{k}}(\mathbf{r})\,\mathrm{d}^{3}\mathbf{r}. \tag{2.37}$$
Furthermore, the Fermi velocity is an odd function in $\mathbf{k}$, leading to a vanishing result of the integration over the Fermi sphere. This makes calculations of nonperiodic problems necessary. For bulk materials the calculations were performed using the pseudopotential and the KKR method [129, 130, 131, 144, 145]. Bly et al. [15, 111, 112] showed how a calculation for a single adatom can be carried out by employing the LKKR method [97, 98]. They used the muffin-tin approximation by confining the atomic potential to non-overlapping spheres with a constant interstitial potential [101]. The advantage of this method is its simplicity and computational economy, paid for by an insufficient description of valence electron potentials in covalent open structures [147] compared to full-potential calculations [105]. The electron wave function was defined by
$$\psi_{\mathbf{k}}(\mathbf{r}) = \frac{4\pi}{\sqrt{\Omega}}\sum_{lm} i^{l} A_{lm}(\mathbf{k})\, R_{l}(r)\, Y_{lm}(\hat{\mathbf{r}}), \tag{2.38}$$

Figure 2.1: The dependence of the effective valence on the distance of an adatom to a semi-infinite metal surface, for two different locations above the crystal lattice shown on the right [15]. $\tau$ is the time scale of relaxation of the electronic charge density.
where $Y_{lm}(\hat{\mathbf{r}})$ is a spherical harmonic [150], $A_{lm}(\mathbf{k})$ is the coefficient from the spherical wave expansion, evaluated by the LKKR calculation, and $R_{l}(r)$ is the spherical solution of the Schrödinger equation [141], which can be expressed as
$$R_{l}(r) = j_{l}(\kappa r) + i e^{i\delta_{l}^{a}}\sin(\delta_{l}^{a})\, h_{l}^{(1)}(\kappa r). \tag{2.39}$$
Here $\kappa = \sqrt{2E_{k}}$, $j_{l}$ is a spherical Bessel function [150], $h_{l}^{(1)}$ is the spherical Hankel function of the first kind [150], and $\delta_{l}^{a}$ characterizes the scattering phase shift of each atom. The results show that the effective valence of an adatom is strongly dependent on the height of the atom relative to the metal surface (cf. Figure 2.1). This dependence is quite well described by a simple ballistic model if the reduced electron density relative to the bulk is taken into account [15]. This calculation was extended to islands of adatoms on a substrate modeled by the jellium model, showing that the distance to islands has a huge impact on the effective valence of single adatoms [113].
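The radial solution (2.39) can be evaluated with SciPy's spherical Bessel functions. Note that the form used below is an assumed reading of the garbled source formula, chosen so that $R_l$ reduces to the free solution $j_l$ when the phase shift vanishes:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def R_l(l, kappa, r, delta):
    """Radial solution R_l = j_l + i e^{i delta} sin(delta) h_l^(1), with
    h_l^(1) = j_l + i y_l. This is an assumed reading of eq. (2.39)."""
    x = kappa * r
    h1 = spherical_jn(l, x) + 1j * spherical_yn(l, x)
    return spherical_jn(l, x) + 1j * np.exp(1j * delta) * np.sin(delta) * h1

# Sanity check: with zero phase shift (no scattering) R_l reduces to j_l
print(np.allclose(R_l(1, 2.0, 1.5, 0.0), spherical_jn(1, 3.0)))  # True
```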
Towards Reconciliation of Biblical and Cosmological Ages of the Universe[1]

Alexander Poltorak

Two opposite views of the age of the universe are considered. According to the traditional Jewish calendar based on the Talmud, the age of the universe is less than six thousand years. The cosmological models of the universe, supported by abundant empirical data, place the age of the universe in the twelve billion years range. A critical examination of both views is presented in the first part of the paper. In the second part, we consider the quantum-mechanical state of matter before and after the introduction of a conscious observer. The role of the observer's free will is examined. Definitions of physical and proto-physical states of matter are proposed. It is suggested that the creation of the first conscious being with free will leads to the collapse of the global quantum wavefunction, thereby bringing the world from a proto-physical to a physical state. We propose that the total cosmological age of the universe is comprised of two periods: a proto-physical one on the order of twelve billion years, and a physical one which is no longer than the age of the conscious human observer. This thesis is used to reconcile the Biblical and scientific views on the age of the universe. This conclusion is analyzed within the framework of classical Jewish thought.

1. Cosmological Age of the Universe

Contemporary science places the age of the universe in the twelve billion years range, give or take a billion years. This number is derived from both theoretical models and experimental data. Let us first briefly consider the theoretical foundations of modern cosmology.

1.1. Theoretical models

Modern cosmology is based on the theoretical foundation of Einstein's General Theory of Relativity (GR).[1] As Albert Einstein stated in 1942, "It is impossible to achieve any reliable theoretical results in cosmology outside of the principles of General Theory of Relativity."

General Relativity

The main equation of GR is

G = 8πT                                                              (1a)
R_ik - ½R g_ik = 8π T_ik                                             (1b)

Let us consider a simple cosmological model based on GR. For this purpose the following assumptions are made:

(a) Homogeneous density. Let us assume that the stars are dispersed in the cosmos like dust with a constant average density of mass-energy ρ.
(b) Homogeneous and isotropic geometry. Let us also assume that the curvature of space-time is constant throughout the universe.
(c) Geometry is closed. Let us further assume that the universe is closed, as the boundary condition for the Einstein field equations.

A three-dimensional sphere satisfies all three of the conditions above. The space-time geometry of such a sphere is described by the following metric:

ds² = -dt² + a²(t)[dχ² + sin²χ(dθ² + sin²θ dφ²)]                     (2)

The Einstein field equation (1) for this metric is rather simple:

(6/a²)(da/dt)² + 6/a² = 16πρ                                         (3)

The first term in this equation is called the "second invariant of external curvature of the space section" of the 4-geometry, which shows the rate of expansion of all linear dimensions with time. The second term is the "internal invariant of 3-dimensional curvature of the space section," taken at a given moment in time.
The total "mass" of the universe is

M = 2π²ρa³                                                           (4)

and the maximum radius of the universe is

a_max = 4M/3π                                                        (5)

The field equation (3) now takes a simple form:

(da/dt)² - a_max/a = -1                                              (6)

The first term of this equation is analogous to the kinetic energy and the second to the potential energy. It now becomes obvious that the expanding universe cannot expand beyond the maximum radius a_max, because that would render the kinetic energy of expansion negative, which, of course, is impossible. We see that the universe begins to expand from a very small radius a with an ever slowing rate of expansion until it stops at the maximum radius a_max and begins to collapse back to its original state. This is a very simple cosmological model of a closed universe, which begins its evolution with a Big Bang and ends in a Big Crunch.

The astonishing prediction of General Relativity that our universe was expanding was very disconcerting to Albert Einstein. In order to do away with this supposedly "erroneous" result, Einstein proposed an ad hoc cosmological constant as an additional term in the GR field equation. When Hubble proved experimentally in 1929 that the universe was indeed expanding [2], Einstein admitted that the addition of the cosmological term was the biggest mistake of his life. It is interesting to note that two years ago new experimental data obtained from the Hubble telescope demonstrated that the universe is expanding at an accelerated pace. This fact rekindled interest among cosmologists in the cosmological term, which represents a mysterious repelling anti-gravity force permeating even empty space. The nature of this force is now a subject of much speculation.

The ratio of the speed of expansion over distance is called the Hubble constant:

H₀ = (speed of expansion)/(distance to the galaxy) = (da/dt)/a       (7)

The Hubble constant is measured in kilometers per second (km/sec) per megaparsec. The observable galaxies provide us with the distance and their rate of expansion, thereby allowing us to calculate the Hubble constant, which is approximately 55 km/sec per Mpc. The inverse Hubble constant, H₀⁻¹, is called the Hubble time, and it is found to be approximately 18 billion years:

T_H = H₀⁻¹ ~ 18 · 10⁹ years                                          (8)

The Hubble time is the time required to reach the present observed distances between galaxies, assuming that the speed of expansion was constant from the time of the Big Bang. The Hubble time is approximately 1.5 times larger than the cosmological age of the Universe, which is, therefore, in the range of twelve billion years:

T_U ~ 12 · 10⁹ years                                                 (9)

The Cosmological Models

- Hyperbolic (K₀ < 0), Λ < 0: The universe evolves from the Big Bang, expanding until maximum expansion and then beginning an ever accelerating contraction into the Big Crunch.
- Hyperbolic (K₀ < 0), Λ = 0: The universe evolves from the Big Bang, expanding into a flat Minkowski space as the rate of expansion becomes constant.
- Closed (K₀ > 0), Λ ≤ 0: Friedmann cosmology. Expansion from the Big Bang, followed by collapse into the Big Crunch.
- Closed (K₀ > 0), Λ > Λ_crit > 0: The universe evolving from the Big Bang slows down its expansion rate until almost standing still, then begins to accelerate the expansion exponentially.
- Closed (K₀ > 0), Λ = Λ_crit > 0: Einstein cosmology. The universe evolving from the Big Bang asymptotically approaches the maximum radius, where it becomes static. This cosmology is unstable and contradicts the experimental data.
- Closed (K₀ > 0), 0 < Λ < Λ_crit: An infinitely large universe contracting exponentially until reaching a minimal radius, then beginning exponential expansion into infinity.
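A quick numerical check of the Hubble time in equation (8), converting the quoted H₀ = 55 km/sec per Mpc into years (a minimal sketch):

```python
from scipy.constants import parsec  # one parsec in meters

H0 = 55e3 / (1e6 * parsec)          # 55 km/sec per Mpc, converted to 1/s
seconds_per_year = 365.25 * 24 * 3600
T_H = 1.0 / H0 / seconds_per_year
print(f"{T_H:.2e} years")           # ~1.8e10, i.e. roughly 18 billion years
```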
Simply speaking, the universe can be either closed like a hyper-sphere, open like a saddle, or flat. The most recent experimental data seem to support the flat universe. However, instead of a slowing rate of expansion, the expansion appears to be accelerating. This fact has led to the recent resurrection of interest in the cosmological constant.

Big Bang

Extrapolating the expanding Universe backwards, one arrives at the initial point where the entire infinitely dense Universe was contained in one point, a singularity. The evolution of the Universe, according to this theory, called the Big Bang, begins from one singularity point, infinitely dense and infinitely hot; a point at which the concepts of space and time do not yet exist. An inexplicable, ineffable explosion, the causes of which are beyond the limits of scientific inquiry, created space, time and matter in the first moment after the Big Bang.

Cosmology describes the primordial chronology as follows. The Big Bang created a dot of space of the size of approximately 10⁻³³ cm. The first moment we can speak of is about 10⁻⁴³ s. Before this Planck time interval we can no longer speak of time as we know it. At this point in time all four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak nuclear forces) were combined in one "super force"[3]. The quarks begin to bond into photons, positrons and neutrinos along with their antiparticles. The density of the universe at this point is estimated to have been 10⁹⁴ g/cm³, much of it being radiation. This fireball continued to expand at astonishing speed, many times the speed of light, into the size of a pinhead, an apple, a ball. One millisecond after the explosion, the Universe was a fireball 30 million times hotter than the surface of the sun and 50 million times denser than lead. During what is known as the inflationary epoch, the universe doubled in size one hundred times in less than one millisecond, from an atomic nucleus to 10³⁵ meters in diameter. The isotropic expansion of the Universe, when it was perfectly smooth, ended at 10⁻³⁵ s. A small fluctuation of the density at this point is thought to have led to the creation of galaxies.[4]

When the universe aged to one hundredth of a second, the temperature dropped to 10¹³ K, and the electromagnetic, strong and weak nuclear interactions split off from the "super force". Because of the continuous annihilation of particles and antiparticles, matter was not yet stable, unable to survive for more than a few nanoseconds. Light was not yet visible, being trapped in the dense energy ball. This is called the "Epoch of Last Scattering".

One second after the Big Bang, the universe had expanded to a size of 20 light-years. The temperature cooled off to ten billion degrees. After three minutes, when the temperature had cooled to one billion degrees, nucleosynthesis first began to take place.
The next important stage in the expansion occurred around thirty minutes later, when the creation of photons increased through the annihilation of electron-positron pairs. For the next 300,000 years the universe kept expanding while cooling to 10,000 K. It was then that the first helium atoms are thought to have been born. At this point, as the density decreased, light began to be visible. From this point on, the universe has been expanding, apparently at an accelerated pace, up until the present time.

In 1980, Dr. Alan Guth of MIT proposed an inflation theory to explain the initial explosion of the singularity, the Big Bang. This inflation theory seems to be well supported by the most recent experiments measuring the size of the ripples in the background microwave radiation.

1.2. Experimental Data

Light from Distant Stars

It takes eight minutes for light to travel from the sun to earth. Knowing the velocity of light and the distance to the stars, it is easy to calculate that it takes many millions of years for the light of the distant stars to reach earth. By measuring the position of a star at different times of year, astronomers can see the apparent motion of this star compared to more distant stars, and this information can be used to calculate the distance to the nearby star. Measuring the distances to nearby stars is the first step towards measuring distances to very remote objects, and ultimately in determining the distances to the most remote objects in the universe.

Astronomers rely on a stacked set of yardsticks of different lengths to measure distances to stars and galaxies. Each yardstick in the set is measured against, or calibrated to, the previous one. The most accurate yardsticks in this system are parallax measurements. Measurements across great cosmic distances run into an inherent problem: one must distinguish between a far, bright object and a nearby faint one. For this purpose astronomers use extremely bright objects, such as supernovae, as "standard candles".

Let us consider several examples for illustration purposes. The star closest to us, Proxima Centauri of the Alpha Centauri system, is 4.22 light-years away from us, which means that it takes 4.22 years for the light of Proxima to reach us on earth. Eta Carinae in our galaxy is more than 8,000 light-years away. Estimated to be 100 times more massive than our Sun, Eta Carinae may be one of the most massive stars in our Galaxy. Eta Carinae was observed by Hubble in September 1995. The Trifid Nebula, in the constellation Sagittarius, is located about 9,000 light-years from Earth. The galaxy M100 (the 100th object in the Messier Catalog of non-stellar objects) is one of the brightest members of the Virgo Cluster of galaxies. The galaxy is estimated to be tens of millions of light-years away. One of the prime goals of the Hubble Space Telescope has been the detection of Cepheid variable stars in distant galaxies. Before HST, Cepheids had only been detected in very nearby galaxies, out to about 12 million light-years. A team led by Dr. Wendy Freedman of the Carnegie Observatories has detected the furthest Cepheids yet, in the Virgo Cluster spiral M100, at a distance of about 50 million light-years. The dwarf galaxy Sextans A is about 10 million light-years distant. A pair of clusters lies 166,000 light-years away in the Large Magellanic Cloud (LMC), in the southern constellation Doradus.
About 60 percent of the stars there belong to the dense cluster called NGC 1850, which is estimated to be 50 million years old. A loose distribution of extremely hot massive stars in the same region is only about 4 million years old and represents about 20 percent of the stars in the image. Another Hubble Space Telescope image shows an 800-light-year-wide spiral-shaped disk of dust fueling a massive black hole in the center of the galaxy NGC 4261, located 100 million light-years away in the direction of the constellation Virgo. A rare and spectacular head-on collision between two galaxies appears in a Hubble Space Telescope true-color image of the Cartwheel Galaxy, located 500 million light-years away in the constellation Sculptor. Hubble astronomers conducting research on a class of galaxies called ultra-luminous infrared galaxies (ULIRG), within 3 billion light-years of Earth, have discovered that over two dozen of these are found within "nests" of galaxies, apparently engaged in multiple collisions that lead to fiery pile-ups of three, four or even five galaxies smashing together.

In other words, the stars we see in the night sky are not the stars as they exist now but the stars as they existed billions of years ago. Many of them are long gone, having exploded into supernovae, collapsed into black holes or simply burned out. The light from supernova 1987A in the Large Magellanic Cloud, for example, which exploded 169,000 years ago, has just recently reached earth. Before the explosion, this supernova was first a red giant and then a blue giant star. The incontestable fact that we see stars billions of years old is the simplest and most direct argument for the age of the universe being at least as old as the oldest star.

Expanding Universe

In 1929 the American astronomer Edwin P. Hubble discovered that the galaxies were moving apart in all directions. He had earlier discovered the red shift in the spectrum of light emitted by remote stars. The shift in the wave frequency is usually associated with the Doppler effect. Observing two dozen galaxies 10⁶ light-years away and gauging their distance by their brilliance, Hubble discovered that more distant galaxies were racing away from the Earth faster than the brighter ones closer to us. He further discovered that the rate at which galaxies were racing away from Earth was proportional to their distance from Earth. This led to the concept of the expanding Universe. According to the Friedmann cosmological model, the universe is expanding like a 3-dimensional balloon blown up in an imaginary 4-dimensional space.

The study of several dozen supernovae four to seven billion light-years away demonstrated that the explosions were about 25% dimmer than expected. This suggested that the universe was expanding slower in the past than it is now and that, therefore, it took longer for the universe to reach its present stage. Thus, an accelerated expansion suggests an older age of the universe.

Cosmic Background Radiation

Initially, matter and radiation were in thermal equilibrium. The energy released as the radiation cooled must have obeyed the laws of black-body statistics. If the temperature of this relict radiation, called the cosmic microwave background radiation (CBR), can be measured now, one can calculate the original temperature and vice versa. Based on the theory of the Big Bang, this temperature was predicted to be around 2.7 K.
In 1965 Arno Penzias and Robert Wilson discovered uniform and isotropic relict radiation having this temperature, for which they received a Nobel Prize. As later confirmed by NASA's Cosmic Background Explorer (COBE), this discovery was the first sound experimental validation of the Big Bang theory. The CBR has ripples, minor fluctuations, which allow astronomers to use them as yardsticks to measure the cosmos. The size of the ripples, as measured recently by three teams at Caltech, Princeton and Berkeley, turned out to be approximately one degree on the sky, or twice the size of the Moon as seen from Earth. The size of the ripples is an indication of the geometry of space, which in turn is determined by the density of mass in the universe. Ripples of one degree are indicative of a flat universe, as predicted by the inflation theory of Dr. Guth. This discovery once again brought into focus the mysterious cosmological constant, introduced and then abandoned by Einstein.

As recently as five years ago, the experimental data were insufficient to predict the age of the universe more accurately than within the range of 10 to 20 billion years. In 1994 a team led by Wendy Freedman of the Carnegie Observatories in Pasadena, Cal., suggested that the universe was much younger, between 8 and 12 billion years old. This finding suggested that the universe may be younger than some of its oldest stars. The rival group, led by Allan Sandage, interpreting the same data, defended the older universe. Both groups now converge on the number 12 billion years, which seems to be a consensus for the age of the universe among scientists today.

Geological Age of the Earth

Even though the dating of fossils and geological strata lies outside the scope of this paper, we mention here in passing that the geological age of our own planet further exacerbates the problem. Carbon dating techniques using Carbon-14 (C14) isotopes, further corroborated by other methods such as uranium-thorium radioactive decay, place the age of the earth well beyond the biblical age.

To summarize, there are compelling theoretical considerations, fully corroborated by available experimental data, establishing that the ages of our planet, of other stars and of the entire universe are well beyond the apparent biblical age of less than six thousand years. Amounting to billions of years, this discrepancy is so enormous that no amount of criticism of the scientific methods and assumptions used to arrive at these numbers is going to help reconcile it. Even if the scientists were overestimating the age of the universe by 50%, which is highly unlikely, Torah and science would still be six billion years apart! Thus, we find such criticism unproductive and we shall look for a solution of this problem elsewhere.

2. Torah View

2.1. The Jewish Calendar

According to the traditional Jewish calendar, we live now in the year 5760. The implication of this number is that it seems as if the world, from the traditional Jewish point of view, is no older than six thousand years. First of all, let us note that the popular misconception that the Torah begins counting the calendar from the beginning of the creation of the world has no basis. In fact, the calendar begins with the creation of Adam, the first man.
Thus, when we say that, according to Jewish tradition, today is, for example, five thousand seven hundred sixty years, three months and five days, it is from the date the first man was created and not from the date the world was created.

3. Previous Attempts to Reconcile the Conflict

In his book "Immortality, Resurrection, and the Age of the Universe: a Kabbalistic View"[5], the late Rabbi Aryeh Kaplan presents an excellent overview of the various attitudes towards this problem and attempts to resolve it. In summary, these attitudes may be categorized as follows:

- Six Days as Six Epochs: each day represents an entire epoch billions of years long. This interpretation of the biblical text is far from the literal meaning and is not based on any classical commentaries.
- Dismissal: if G‑d created the first man fully grown, he could have created a "mature" universe which was already billions of years old at the point of creation. This is an irrefutable and, therefore, unscientific approach.
- Sabbatical Cycles: based on the concept of cosmic sabbatical cycles, the world was 15 billion years old when the first man was created. This is a significant but not widely accepted view expressed by some important kabbalists almost two thousand years ago.

To this we may add another recent approach, expressed by Gerald Schroeder, that attempts to explain the difference in ages by means of gravitational time dilation.[6]

3.1. Sabbatical Cycles

Of most interest for our discussion is the kabbalistic approach of sabbatical cycles expounded by R. Kaplan. This section closely follows R. Kaplan's exposition of this approach.

The idea of sabbatical cycles is based on an esoteric interpretation of several scriptural and talmudic sayings. According to the Talmud, the world will exist for seven thousand years, and in the [end of the] seventh millennium it will be destroyed.[7] According to the Talmudic sage and great kabbalist of the first century Rabbi Nehunya ben HaKanah, as expressed in his important work Sefer HaTemunah, this seven-thousand-year period is only one cycle out of a total of seven. This idea is based on the biblical concept of a Jubilee, which consists of seven sabbatical (seven-year) cycles. This leads to forty-nine thousand years as the total age of the universe. According to many later kabbalists, the present cycle is the last of the seven and, therefore, when Adam, the first man, was created, the world was forty-two thousand years old.

This approach is alluded to in some midrashic sources. Thus Midrash Rabbah, on the verse "It was evening and it was morning, one day" (Genesis 1:5), states, "This teaches that there were orders of time before this." Another Midrash, teaching that "G‑d created universes and destroyed them"[8], seems to support the concept of sabbatical cycles, as is explained in another kabbalistic treatise, Ma'arekheth Elokuth. Interestingly, the Talmud states that there were 974 generations before Adam.[9] The idea of sabbatical cycles was expressed and elaborated in the works of such sages of Jewish philosophy and kabbalah as Bahya, Ziyoni and Recanati, and in Sefer HaChinukh's commentary on Leviticus 25:8. This idea is also alluded to in the commentary of Nachmanides on Genesis 2:3, in Yehuda HaLevy[10] and in Ibn Ezra's commentary on Genesis 8:22. Rabbi Kaplan's discovery of a little-known commentary by Rabbi Isaac of Akko sheds entirely new light on the concept of sabbatical cycles.
Commenting on the verse, "A thousand years in Your sight are as a day" (Psalms 90:4), midrashic sources have stated that one divine day is equal to a thousand terrestrial years. In his kabbalistic treatise, Otzar HaHayim, Rabbi Isaac of Akko states that the first six sabbatical cycles are counted in divine, not human, years. If a divine day is a thousand years, then a divine year, equal to 365¼ divine days, is 365,250 terrestrial years. If we multiply this number by the forty-two thousand years comprising the first six cycles before Adam, we get fifteen billion three hundred forty and a half million (15,340,500,000) years. Thus, according to one of the greatest Talmudic sages of the first century, R. Nehunya ben HaKanah, as explained by a prominent kabbalist of the 13th century, R. Isaac of Akko, at the time Adam was created the universe was already more than fifteen billion years old, a number very closely correlated with the current estimates of the cosmological age of the universe!

We must note, however, that this approach was strongly contested by Isaac Luria, the holy lion, the Ari, who is considered by many the greatest kabbalist of all times. The Ari maintained that the previous sabbatical cycles did not exist on the terrestrial plane and were purely spiritual worlds. Most of the later kabbalists, with rare exceptions, accepted the opinion of the Ari. Apparently there was a difference of opinion between these two (pre- and post-Lurianic) schools of kabbalah as to whether the first phase of creation, which stretched for fifteen billion years, took place in the physical or the spiritual universe.

4. Quantum Reality

4.1. Particle-Wave Dualism

In 1923, Louis de Broglie suggested that every particle has a wavelength associated with it:

λ = h/p                                                              (10)

where p is the momentum of the particle and h is the Planck constant.[11] In 1926, Erwin Schrödinger formulated his famous equation[12]

-(ħ²/2m)∇²ψ + Vψ = Eψ                                                (11)

where V is the potential energy of a particle, E is the total energy, and ψ is the wavefunction that describes the quantum-mechanical state of the particle.

4.2. Wave Function

What is the wavefunction? The attempts by Schrödinger and others to interpret it as a scalar potential of some physical field were not successful. In 1926, Max Born noticed that the square of the amplitude of the particle wavefunction in a given region gives the probability of finding the particle in this region of configuration space. He suggested that the wavefunction represented not a physical reality but rather our knowledge of the quantum state of an object.

The wavefunction represents our knowledge of all possible quantum-mechanical states of an object. In other words, the quantum-mechanical state of a physical system is a linear superposition of all possible states of this system. Thus, for example, the state vector for a left circularly polarized photon |ψL⟩ is a linear superposition of the vertical and horizontal eigenstates:

|ψL⟩ = 1/√2 (|ψv⟩ + i|ψh⟩)                                           (12)

When a left circularly polarized photon goes through a calcite crystal, it is detected to be in either the vertical or the horizontal polarization state. At the moment of the measurement, the state vector |ψL⟩, being a superposition of two possibilities |ψv⟩ and |ψh⟩, is suddenly reduced to one actuality: either |ψv⟩ or |ψh⟩.
This sudden reduction of the state vector is called the collapse of the wavefunction.  What actually happens during the collapse of the wavefunction is that the previously amorphous reality, existing in an undetermined state of various possibilities, suddenly comes into physical reality in one particular state (eigenstate). The irreversible (time-asymmetric) collapse of the wavefunction does not follow from the Schrödinger equation.

5. Introduction of an Observer [13]

The collapse of the wavefunction is a serious problem in quantum theory.  The trouble is that it does not follow from the Schrödinger equation.  Let us consider an experiment in which we collide one elementary particle with another to measure its momentum.  Such an experiment is an interaction of two subatomic particles and should obey the Schrödinger equation.  However, as we said before, the Schrödinger equation does not lead to a collapse of the wavefunction, which is the necessary result of any experiment. So, what then causes the collapse of the wavefunction?

To resolve this paradox, the Copenhagen interpretation of quantum mechanics proposed to attribute the collapse of the wavefunction to the interaction of a microscopic particle with a macroscopic measurement apparatus.  Since a macroscopic object behaves according to classical Newtonian physics and is not described by a wavefunction, it was thought to cause the collapse of the wavefunction of the microscopic object under measurement.  The apparent difficulty with such an explanation is that there is no reason why a macroscopic object should not obey the Schrödinger equation.  Indeed, any macroscopic object is composed of microscopic molecules and atoms, which do obey the laws of quantum physics.

This situation leads to absurdity, as clearly demonstrated by the Schrödinger Cat gedanken experiment.  One places a cat in a closed steel chamber, together with a Geiger tube containing some radioactive material, a hammer connected to the Geiger tube, and a phial of prussic acid.  From the amount of the radioactive material and its half-life we calculate that there is a 50% chance that within one hour one atom will decay.  If an atom decays, the Geiger counter is triggered and causes the hammer to break the phial of prussic acid, which kills the cat.  Prior to the measurement, the state vector of the atom is a linear superposition of two possibilities: a decayed and a not-decayed atom.  Accordingly, the state vector of the cat is also a linear superposition of two physical possibilities: the cat is alive and the cat is dead.  In other words, before the measurement takes place, the cat is dead and alive at the same time!  To be more precise, the cat is neither alive nor dead but is in a state which is a blurred combination of both possible states.

5.1. Role of a Conscious Observer

In 1932, the mathematician Von Neumann published his famous work, Mathematical Foundations of Quantum Mechanics,[14] in which he first clearly demonstrated the discrepancy between the continuous, time-symmetric wavefunction of the Schrödinger equation and the discontinuous, time-asymmetric (irreversible) event of measurement.  In this book Von Neumann made a startling suggestion: that it must be a conscious observer who causes the wavefunction to collapse.  The reason for this is that consciousness is the only element present in the quantum-mechanical measurement process which is not time-symmetric and is not required to obey the laws of quantum mechanics.
In other words, Von Neumann replaced the dualism of the macroscopic and microscopic worlds with a mind-matter dualism.  While the former is easily critiqued, the latter is immune to criticism, because whatever we mean by the word consciousness, it does not have to obey the Schrödinger equation. Since the mind is to a large degree the product of the biochemistry of the brain, once we distill that level of the mind which is no longer vested in a physical brain and is not a product of biochemical reactions, such a non-physical mind has been called by some a human soul or, more specifically, the intellectual faculty of the soul. Hence, Von Neumann's approach to the collapse of the wavefunction leads us to a classical Cartesian body-soul dualism.

In 1961, Eugene Wigner revisited the hypothesis of a conscious observer.[15]  He posed a question: whose mind exactly collapses the wavefunction?  Consider a gedanken experiment in which an observer relegates the measurement to his assistant and leaves the room.  After his return he inquires about the result of the measurement.  Until he learns of the result, as far as he is concerned, the state of the quantum-mechanical system under observation is a linear superposition of all possible eigenstates.  However, when he asks his assistant whether he knows definitively the result of the experiment, the assistant answers that of course he does.  This led Wigner to conclude that it is the very first conscious observer who collapses the wavefunction.

One can ask at this point: what level of consciousness must an observer possess in order to collapse the wavefunction?  Is the cat in the Schrödinger experiment a conscious enough creature to collapse the wavefunction and thereby escape the inconvenience of being dead and alive at the same time?

What about the omniscient G-d?  If G‑d knows the eigenstate of all wavefunctions, doesn't He immediately collapse them all?!  This question is closely related to the paradox of free will.  If G‑d knows everything, and His knowledge must be absolute and true, then how can anybody possess free will? Doesn't G‑d force us into acting in a certain way simply by virtue of Him knowing that we were going to act this way?  One possible answer to this paradox, as it is given in Chasidic philosophy, is that G-d indeed knows everything but He keeps His knowledge to Himself, without affecting the actions and decisions of His creations.  One may try to apply a similar rationale here and suggest that, perhaps, G‑d's knowledge in some peculiar way does not automatically collapse all the wavefunctions of the universe.  Alternatively, one may say that the global collapse of the world wavefunction caused by G‑d's ultimate knowledge further underlines the paradox of free will.

6. Resolution of the Conflict

Putting aside any discussion of the merits of the Von Neumann-Wigner hypothesis of a conscious observer, accepting this hypothesis for the time being will allow us to resolve the discrepancy between the biblical and cosmological ages of the universe.  Let us consider the wavefunction ψ0 at the initial moment of time t0, describing all possible eigenstates of the singularity from which, according to the Big Bang theory, the universe is about to be born.  One of the eigenstates of this wavefunction is |ψ+⟩, which represents the possibility of the explosion that we call the Big Bang.  Another eigenstate, |ψ−⟩, represents the alternative – no Big Bang.
The state vector of the universe in this instance is a linear superposition of both eigenstates: to be or not to be.  Even though the probability of the Big Bang and the subsequent birth of the universe is greater than zero, the state vector of the universe will remain such a superposition of existence and non-existence for billions of years, until such time as a conscious observer enters the scene and collapses the world wavefunction, thereby realizing the one and only eigenstate of the universe corresponding to its existence.  The universe is like a giant Schrödinger cat awaiting its observer to find out whether it is alive or dead.  It is man who brings the universe into existence from its undefined state of mathematical probabilities.  The great paradox of the Creation is that if the universe was ever born, it needed a human for a midwife.

Therefore, we may say that the universe has two important dates: the date of its conception, t0, and the date of its birth.  When a human observer probes the age of the universe, physics dictates that he will arrive at the age when the universe was conceived as a mathematical wavefunction having a probability of existence.  The real age of the universe may, by definition, be no greater than the age of the first human observer who collapsed the world wavefunction.

6.1. Adam as the First Observer

According to the biblical account of creation, Adam was the first fully conscious being – the first observer.  Prior to the first human, the universe existed in a superposition of all possible states, including the states of existence and non-existence.  When the first man looked for the first time at the universe, he immediately collapsed the world wavefunction and brought the world into physical existence. It is easy to see now why the Bible begins the chronology of creation with Adam and not before.  Even though the universe could have been already billions of years old, it was the first human (Adam) who actualized the creation and brought it from a fuzzy state of existence/non-existence into definite physical existence.

Perhaps this is why the Bible states: "And G‑d blessed the seventh day, and sanctified it; because on it He had rested from all his work which G‑d created to make" (Genesis 2:3). The classical Jewish commentators on the Bible suggest that the meaning of the peculiar expression "G‑d created to make" is that G‑d created man to complete His creation, to be a partner of G‑d in creating the universe.  Now it makes perfect sense.  Initially G‑d created the universe in an amorphous spiritual form, and He created man to complete the creation and to bring the universe from its potential to an actual reality. This approach allows us to rationally resolve the apparent contradiction between the scientific and the biblical ages of the universe.

6.2. Resolution of the Dispute regarding Sabbatical Cycles

As we noted before, the dispute related to the age of the universe existed not only between science and religion but also between the two main schools of Jewish esoteric philosophy – kabbalah.  According to the ancient school of Rabbi Nehunya ben HaKanah, as explained by Rabbi Isaac of Akko, the universe existed for over fifteen billion years before the creation of Adam, while the Lurianic school of kabbalah maintained that this took place in the spiritual rather than the physical world. It seems that our approach allows us to resolve that contradiction as well.
Indeed, both opinions may be correct at the same time and do not contradict each other.  When Rabbi Nehunya ben HaKanah and Rabbi Isaac of Akko, along with other early sages of kabbalah, spoke of sabbatical cycles and billions of years of pre-human history, they spoke of the universe as originally created by G‑d in general terms.  Ari, however, further clarified the picture by pointing out that the initial phase of pre-human world history was different from the post-human phase and existed on a different plane, which he called the spiritual world.  In fact, quantum mechanics confirms that prior to the first human, the world indeed existed on a different (almost spiritual) plane, described by purely mathematical constructs such as the wavefunction.  This completes the puzzle.

Following the approach advocated by some of the most respected scientists of this century – Von Neumann, Wigner, Wheeler and others – we are able to reconcile the apparent discrepancy between the age of the universe as presented in the biblical account of creation and in contemporary cosmology.  The history of the universe is comprised of two main periods: pre- and post-human.  In the first period, before the first conscious observer peered into the universe, the universe was in an amorphous, fuzzy state of linear superposition of all possible states.  The universe at this stage existed only mathematically, as a distribution of probabilities.  This period lasted approximately twelve billion years.  When the first human opened his or her eyes, he or she collapsed the world wavefunction and brought the universe into actual existence.  From that point on, the Bible and humanity began counting the new age of the universe.

The approach outlined above, while it appears promising, does not purport to solve all apparent contradictions between science and religion, even in the area of biblical chronology, which is the subject of this paper.  We limited ourselves to an attempt to reconcile the general age of the universe, without going into specific interpretations of the meaning of the six days of creation and other specific details of the biblical account of creation. Some of these problematic areas include the sequence of creation of planets and stars, the meaning of the appearance and evolution of biological flora and fauna, apparent indications that the Bible speaks of the creation of the first humans, Adam and Eve, in a fully grown and mature form, and others. These remain to be addressed in the future. All that we attempted to show is that not the dismissal of contradictions but their honest assessment and scientific analysis may not only lead to a resolution of an apparent contradiction but, moreover, help to enrich our understanding of the Bible and science alike and further our quest for a unified and harmonious view of the universe.

[1]  In this chapter we follow substantially the treatment of the subject by Misner, Ch. W., Thorne, K. S. and Wheeler, J. A. Gravitation (San Francisco: W. H. Freeman and Co., 1973), II, ch. 6.
[2]  Hubble, E. P., Proc. Nat. Acad. Sci. (US), 15, 169 (1929)
[3]  Wald
[4]  Parker
[5]  Kaplan, Aryeh. Immortality, Resurrection, and the Age of the Universe: a Kabbalistic View (Hoboken, NJ: KTAV, 1993), ch. 1, pp. 1-16
[6]  Schroeder, Gerald. The Science of God (New York: The Free Press, 1997), ch. 3, p. 41
[7]  Babylonian Talmud, Sanhedrin 97a
[8]  Rabbi Nehunya ben HaKanah, Sefer HaTemunah,
p. 314
[9]  Babylonian Talmud, Hagigah 13b
[10]  HaLevi, Yehuda. Kuzari, 1:167
[11]  De Broglie, L. Annales de Physique (1925)
[12]  Schrödinger, E. Annalen der Physik 79, 361 (1926)
[13]  This discussion follows J. Baggott, The Meaning of Quantum Theory (Oxford: Oxford University Press, 1992), 5.3, pp. 185-194
[14]  Von Neumann, John. Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1955)
[15]  Wigner, Eugene, in Good, I.J. (ed.) The Scientist Speculates: an Anthology of Partly-Baked Ideas (London: Heinemann, 1961)
Wave equations take the form: $$\frac{ \partial^2 f} {\partial t^2} = c^2 \nabla ^2f$$ But the Schroedinger equation takes the form: $$i \hbar \frac{ \partial f} {\partial t} = - \frac{\hbar ^2}{2m}\nabla ^2f + U(x) f$$ The partials with respect to time are not of the same order. How can Schroedinger's equation be regarded as a wave equation? And why are interference patterns (e.g. in the double-slit experiment) so similar for water waves and quantum wavefunctions?

• Qmechanic (Jul 27 '14): For a connection between the Schrödinger equation and the Klein-Gordon equation, see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein.
• AccidentalFourierTransform (May 16 '17): "In what sense is the Schrödinger equation a wave equation?" In a loose sense. Its solutions are intuitively wave-like. From a mathematical point of view, things are not as easy. Standard classifications of PDEs don't accommodate the Schrödinger equation, which kinda looks parabolic but is not dissipative. It shares many properties with hyperbolic equations, so we can say that it is a wave equation – not in the technical sense, but yes in a heuristic sense.
• tpg2114 (May 16 '17): I had left a comment on one of the answers below, but then deleted it... I'll post something similar here because it's along the lines of what @AccidentalFourierTransform said. I wouldn't call this equation a wave equation. It's not hyperbolic. Wave-like? Maybe. But I don't think I would try to defend the statement that it's a wave equation. To me, hyperbolic <-> wave equation, and anything else is just something else.
• tparker (May 16 '17): A variant on this question – why does double-slit interference produce such similar interference patterns for water waves as for the electron wavefunction, if their underlying differential equations are so different?
• tpg2114 (May 16 '17): @tparker We see that all the time in, say, fluid dynamics. Linear potential equations can generate very similar solutions as the full Navier-Stokes equations under some circumstances, despite the vast differences in their underlying equations. But there are solutions that can't be produced by one or the other. I'm reluctant to say it's all just coincidental, but it's not unheard of that fundamentally different equations can produce similar solutions in a limited number of situations.

Answer (David Z): Actually, a wave equation is any equation that admits wave-like solutions, which take the form $f(\vec{x} \pm \vec{v}t)$. The equation $\frac{\partial^2 f}{\partial t^2} = c^2\nabla^2 f$, despite being called "the wave equation," is not the only equation that does this. If you plug the wave solution into the Schroedinger equation for constant potential, using $\xi = x - vt$, $$\begin{align} i\hbar\frac{\partial}{\partial t}f(\xi) &= \biggl(-\frac{\hbar^2}{2m}\nabla^2 + U\biggr) f(\xi) \\ -i\hbar vf'(\xi) &= -\frac{\hbar^2}{2m}f''(\xi) + Uf(\xi) \end{align}$$ This clearly depends only on $\xi$, not on $x$ or $t$ individually, which shows that you can find wave-like solutions. They wind up looking like $e^{ik\xi}$.

• tparker (May 8 '17): Doesn't any translationally-invariant PDE satisfy this criterion, even if it isn't rotationally invariant or even linear? $\partial f({\bf \xi}) / \partial x_i = \partial f({\bf \xi}) / \partial \xi_i$ and $\partial f({\bf \xi}) / \partial t = -{\bf v} \cdot {\bf \nabla}_{\bf \xi} f({\bf \xi})$, so if you take any translationally invariant PDE and replace every $\partial / \partial t$ with $-{\bf v} \cdot {\bf \nabla}_{\bf \xi}$, then can't any solution $f({\bf \xi})$ of the resulting 3D PDE be converted into a "wave-like" solution to the original 4D PDE by letting $f({\bf \xi}) \to f({\bf x} - {\bf v} t)$?
• tparker (May 10 '17): I've expanded the comment above into an answer.
• lalala (May 17 '17): I disagree. For the wave equation any function f(x-vt) (with correctly fixed v) is a solution. In your Schroedinger example only very special functions fulfill the equation.
• Emilio Pisanty (May 25 '17): @lalala ... for non-dispersive waves. However, plenty of other phenomena that you really do want to keep calling 'waves' (like slinky waves, sound in solids, light in glass, or ripples in a pond) no longer support that property: they do have an infinite basis of solutions of the form $e^{i(kx-\omega t)}$, but they no longer sustain $f(x-vt)$ as a solution, exactly in the way that the Schrödinger equation does. "Wave" is a bit of a fluffy term, but if you use that basis to write out the Schrödinger equation, you've got to be prepared to kick out the others.

Answer (DaniH): Both are types of wave equations because the solutions behave as you expect for "waves". However, mathematically speaking they are partial differential equations (PDEs) which are not of the same type (so you expect that the class of solutions, given some boundary conditions, will present different behaviour). The constraints on the eigenvalues of the linear operator are also particular to each type of PDE. Generally, a second-order partial differential equation in two variables can be written as $$A \partial_x^2 u + B \partial_x \partial_y u + C \partial_y^2 u + \text{lower order terms} = 0 $$ The wave equation in one dimension you quote is a simple form of a hyperbolic PDE, satisfying $B^2 - 4AC > 0$. The Schrödinger equation is a parabolic PDE, in which we have $B^2 - 4AC = 0$. It can be mapped to the heat equation.

• Luzanne (May 18 '17): Shouldn't it be $B^2 - 4AC$ or maybe $2B$ in the PDE?

Answer: In the technical sense, the Schrödinger equation is not a wave equation (it is not a hyperbolic PDE). In a more heuristic sense, though, one may regard it as one because it exhibits some of the characteristics of typical wave equations. In particular, the most important property shared with wave equations is the Huygens principle. For example, this principle is behind the double slit experiment. If you want to read about this principle and the Schrödinger equation, see Huygens' principle, the free Schrodinger particle and the quantum anti-centrifugal force and Huygens' Principle as Universal Model of Propagation. See also this Math.OF post for more details about the HP and hyperbolic PDEs.

Answer: As Joe points out in his answer to a duplicate, the Schrodinger equation for a free particle is a variant on the slowly-varying envelope approximation of the wave equation, but I think his answer misses some subtleties.
Take a general solution $f(x)$ to the wave equation $\partial^2 f = 0$ (we use Lorentz-covariant notation and the -+++ sign convention). Imagine decomposing $f$ into a single plane wave modulated by an envelope function $\psi(x)$: $f(x) = \psi(x)\, e^{i k \cdot x}$, where the four-vector $k$ is null. The wave equation then becomes $$(\partial^\mu + 2 i k^\mu) \partial_\mu \psi = ({\bf \nabla} + 2 i\, {\bf k}) \cdot {\bf \nabla} \psi + \frac{1}{c^2} (-\partial_t + 2 i \omega) \partial_t \psi= 0,$$ where $c$ is the wave velocity. If there exists a Lorentz frame in which $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ and $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$, then in that frame the middle two terms can be neglected, and we are left with $$i \partial_t \psi = -\frac{c^2}{2 \omega} \nabla^2 \psi,$$ which is the Schrodinger equation for a free particle of mass $m = \hbar \omega / c^2$.

$|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$ means that the envelope function's time derivative $\dot{\psi}$ is changing much more slowly than the plane wave is oscillating (i.e. many plane-wave oscillations occur in the time $|\dot{\psi} / \partial_t \dot{\psi}|$ that it takes for $\dot{\psi}$ to change significantly) – hence the name "slowly-varying envelope approximation." The physical interpretation of $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ is much less clear and I don't have a great intuition for it, but it seems to basically imply that if we take the direction of wave propagation to be $\hat{{\bf z}}$, then $\partial_z \psi$ changes very quickly in space along the direction of wave propagation (i.e. you only need to travel a small fraction of a wavelength $\lambda$ before $\partial_z \psi$ changes significantly). This is a rather strange limit, because clearly it doesn't really make sense to think of $\psi$ as an "envelope" if it changes over a length scale much shorter than the wavelength of the wave that it's supposed to be enveloping. Frankly, I'm not even sure if this limit is compatible with the other limit $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$. I would welcome anyone's thoughts on how to interpret this limit.

Answer: As stressed in other answers and comments, the common point between these equations is that their solutions are "waves". It is the reason why the physics they describe (e.g. interference patterns) is similar. Tentatively, I would define a "wavelike" equation as

1. a linear PDE
2. whose space of spatially bounded* solutions admits a (pseudo-)basis of the form $$e^{i \vec{k}.\vec{x} - i \omega_{\alpha}(\vec{k})t}, \quad \vec{k} \in \mathbb{R}^n, \ \alpha \in \left\{1,\dots,r\right\},$$ with $\omega_1(\vec{k}),\dots,\omega_r(\vec{k})$ real-valued (a.k.a. the dispersion relations).

For example, in 1+1 dimensions, these are going to be the PDEs of the form $$\sum_{p,q} A_{p,q} \partial_x^p \partial_t^q \psi = 0$$ such that, for all $k \in \mathbb{R}$, the polynomial $$Q_k(\omega) := \sum_{p,q} (i)^{p+q} A_{p,q} k^p \omega^q$$ only admits real roots. In this sense this is reminiscent of the hyperbolic vs. parabolic classification detailed in @DaniH's answer, but without giving a special role to 2nd-order derivatives. Note that with such a definition the free Schrödinger equation would qualify as wavelike, but not the one with a potential (and rightly so, I think, as the physics of, say, the quantum harmonic oscillator is quite different, with bound states etc.).
Nor would the heat equation $\partial_t \psi - c\, \partial_x^2 \psi = 0$: the '$i$' in the Schrödinger equation matters!

* Such equations will also often admit evanescent-wave solutions corresponding to imaginary $\vec{k}$.

Answer (tparker): This answer elaborates on my comment to David Z's answer. I think his definition of a wave equation is excessively broad, because it includes every translationally invariant PDE and every value of $v$. For simplicity, let's specialize to linear PDEs in one spatial dimension. A general order-$N$ such equation takes the form $$\sum_{n=0}^N \sum_{\mu_1, \dots, \mu_n \in \{t, x\}} c_{\mu_1, \dots, \mu_n} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$ To simplify the notation, we'll let $\{ \mu \}$ denote $\mu_1, \dots, \mu_n$, so that $$\sum_{n=0}^N \sum_{\{\mu\}} c_{\{\mu\}} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$ Let's make the ansatz that $f$ only depends on $\xi := x - v t$. Then $\partial_x f(\xi) = f'(\xi)$ and $\partial_t f(\xi) = -v\, f'(\xi)$. If we define $a_{\{\mu\}} \in \mathbb{N}$ to simply count the number of indices $\mu_i \in \{\mu\}$ that equal $t$, then the PDE becomes $$\sum_{n=0}^N f^{(n)}(\xi) \sum_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}} = 0.$$ Defining $c'_n := \sum \limits_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}}$, we get the ordinary differential equation with constant coefficients $$\sum_{n=0}^N c'_n\ f^{(n)}(\xi) = 0.$$ Now, as usual, we can make the ansatz $f(\xi) = e^{i z \xi}$ and find that the differential equation is satisfied as long as $z$ is a root of the characteristic polynomial $\sum_{n=0}^N c_n' (iz)^n$. So our completely arbitrary translationally invariant linear PDE will have "wave-like solutions" traveling at every possible velocity!

• David Z (May 10 '17): Interesting point. I'm not ready to admit that the definition is overly broad; maybe all translationally invariant linear PDEs are "wave equations". But I am wondering whether there's more to the story. For example, do some of these PDEs admit other solutions which cannot be expressed as a linear combination of waves $f(x \pm vt)$?
• tparker (May 10 '17): @DavidZ The PDEs don't even have to be linear, as mentioned in my original comment (I just considered the linear case for simplicity). If you allow the phrase "wave equation" to cover general TI nonlinear PDEs, in my opinion it becomes so broad that you might as well just say "translationally invariant PDE."

Answer (dmckee): While it is not very technical in nature, it is worth going back to the first definition of what a wave is (that is, the one you use before learning that "a wave is a solution to a wave equation"). The wording I use in the introductory classes is a moving disturbance, where the 'disturbance' is allowed to be in any measurable quantity, and simply means that the quantity is seen to vary from its equilibrium value and then return to that value. The surprising thing is not how general that expression is, but that it is necessary to use something that general to cover all the basic cases: waves on strings, surface waves on liquids, sound and light. And by that definition Schrödinger's equation is used to describe the moving variation of various observables, so arguably qualifies. There is room to quibble – the wavefunction itself is not an observable, and even the distributions of values that can be observed are often statistical in nature – but I've always been comfortable with this approach.
• tparker (May 24 '17): Yes, I'm starting to regret placing the bounty on this question instead of creating my own. The thing I really want to understand is the much more concrete question of why slit interference patterns look so similar (both qualitatively and quantitatively) for the free-particle Schrödinger equation and for "the" wave equation, even though the differential equations are so mathematically different.
• dmckee (May 24 '17): I see. That is an interesting question, but not one I've given a lot of thought to before. A line of inquiry which presents itself is considering the TDSE as the Newtonian approximation to the underlying relativistic quantum wave equations, which have the symmetry between time and space that we see in "the" wave equation. Certainly that fits with the usual heuristic picture in which $H = p^2/2m + V(x)$, plus the time derivative of $\Psi_0\exp(i(kx - \omega t))$ resulting in energy while spatial derivatives result in momentum (to within appropriate constants, of course).
• CDCM (May 25 '17): @tparker just to let you know, reading your comments prompted me to ask this related question. physics.stackexchange.com/questions/335225/…
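The similarity of the slit patterns asked about in these comments has a compact explanation: a free-particle energy eigenstate and a monochromatic classical wave both satisfy the Helmholtz equation $(\nabla^2 + k^2)u = 0$, so for equal wavenumber $k$ the stationary interference patterns coincide even though the time-dependent equations, and hence the dispersion relations, differ. A minimal numpy sketch (an added illustration; the geometry values are arbitrary) superposes two point-source amplitudes on a distant screen:

```python
import numpy as np

k = 2 * np.pi / 1.0e-6   # wavenumber for a 1 micron wavelength (arbitrary)
d = 20e-6                # slit separation (arbitrary)
L = 1.0                  # distance from the slits to the screen
x = np.linspace(-0.1, 0.1, 9)   # detector positions on the screen

# Outgoing amplitude ~ e^{ikr}/r from each slit: the same Helmholtz
# solution serves as a monochromatic classical wave and as a
# free-particle energy eigenstate of the same wavenumber k.
r1 = np.hypot(L, x - d / 2)
r2 = np.hypot(L, x + d / 2)
u = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2

# |u|^2 is the classical intensity in one reading and the Born-rule
# detection probability in the other: identical fringes either way.
intensity = np.abs(u) ** 2
print(np.round(intensity / intensity.max(), 3))
```

The fringe spacing depends only on $k$, $d$ and $L$, which is why water waves and electrons of matched wavelength produce the same pattern.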
In physics, specifically in quantum mechanics, a coherent state is the specific quantum state of the quantum harmonic oscillator, often described as a state which has dynamics most closely resembling the oscillatory behavior of a classical harmonic oscillator. It was the first example of quantum dynamics when Erwin Schrödinger derived it in 1926, while searching for solutions of the Schrödinger equation that satisfy the correspondence principle.[1] The quantum harmonic oscillator, and hence the coherent states, arise in the quantum theory of a wide range of physical systems.[2] For instance, a coherent state describes the oscillating motion of a particle confined in a quadratic potential well (for an early reference, see e.g. Schiff's textbook[3]). The coherent state describes a state in a system for which the ground-state wavepacket is displaced from the origin of the system. This state can be related to classical solutions by a particle oscillating with an amplitude equivalent to the displacement. These states, expressed as eigenvectors of the lowering operator and forming an overcomplete family, were introduced in the early papers of John R. Klauder, e.g.[4] In the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories, coherent states were introduced by the work of Roy J. Glauber in 1963 and are also known as Glauber states.

The concept of coherent states has been considerably abstracted; it has become a major topic in mathematical physics and in applied mathematics, with applications ranging from quantization to signal processing and image processing (see Coherent states in mathematical physics). For this reason, the coherent states associated to the quantum harmonic oscillator are sometimes referred to as canonical coherent states (CCS), standard coherent states, Gaussian states, or oscillator states.

Coherent states in quantum optics

Figure 1: The electric field, measured by optical homodyne detection, as a function of phase for three coherent states emitted by a Nd:YAG laser. The amount of quantum noise in the electric field is completely independent of the phase. As the field strength, i.e. the oscillation amplitude α of the coherent state, is increased, the quantum noise or uncertainty is constant at 1/2, and so becomes less and less significant. In the limit of a large field the state becomes a good approximation of a noiseless stable classical wave. The average photon numbers of the three states from top to bottom are ⟨n⟩ = 4.2, 25.2, 924.5.[5]

Figure 2: The oscillating wave packet corresponding to the second coherent state depicted in Figure 1. At each phase of the light field, the distribution is a Gaussian of constant width.

Figure 3: Wigner function of the coherent state depicted in Figure 2. The distribution is centered on the state's amplitude α and is symmetric around this point. The ripples are due to experimental errors.

In quantum optics the coherent state refers to a state of the quantized electromagnetic field, etc.[2][6][7] that describes a maximal kind of coherence and a classical kind of behavior. Erwin Schrödinger derived it as a "minimum uncertainty" Gaussian wavepacket in 1926, searching for solutions of the Schrödinger equation that satisfy the correspondence principle.[1] It is a minimum uncertainty state, with the single free parameter chosen to make the relative dispersion (standard deviation in natural dimensionless units) equal for position and momentum, each being equally small at high energy.
Further, in contrast to the energy eigenstates of the system, the time evolution of a coherent state is concentrated along the classical trajectories. The quantum linear harmonic oscillator, and hence coherent states, arise in the quantum theory of a wide range of physical systems. They occur in the quantum theory of light (quantum electrodynamics) and other bosonic quantum field theories.

While minimum uncertainty Gaussian wave-packets had been well known, they did not attract full attention until Roy J. Glauber, in 1963, provided a complete quantum-theoretic description of coherence in the electromagnetic field.[8] In this respect, the concurrent contribution of E.C.G. Sudarshan should not be omitted[9] (there is, however, a note in Glauber's paper that reads: "Uses of these states as generating functions for the $n$-quantum states have, however, been made by J. Schwinger"[10]). Glauber was prompted to do this to provide a description of the Hanbury-Brown & Twiss experiment, which generated very wide baseline (hundreds or thousands of miles) interference patterns that could be used to determine stellar diameters. This opened the door to a much more comprehensive understanding of coherence. (For more, see Quantum mechanical description.)

In classical optics, light is thought of as electromagnetic waves radiating from a source. Often, coherent laser light is thought of as light that is emitted by many such sources that are in phase. Actually, the picture of one photon being in phase with another is not valid in quantum theory. Laser radiation is produced in a resonant cavity where the resonant frequency of the cavity is the same as the frequency associated with the atomic electron transitions providing energy flow into the field. As energy in the resonant mode builds up, the probability for stimulated emission, in that mode only, increases. That is a positive feedback loop in which the amplitude in the resonant mode increases exponentially until some non-linear effects limit it. As a counter-example, a light bulb radiates light into a continuum of modes, and there is nothing that selects any one mode over the other. The emission process is highly random in space and time (see thermal light). In a laser, however, light is emitted into a resonant mode, and that mode is highly coherent. Thus, laser light is idealized as a coherent state. (Classically we describe such a state by an electric field oscillating as a stable wave. See Fig. 1.)

The energy eigenstates of the linear harmonic oscillator (e.g., masses on springs, lattice vibrations in a solid, vibrational motions of nuclei in molecules, or oscillations in the electromagnetic field) are fixed-number quantum states. The Fock state (e.g. a single photon) is the most particle-like state; it has a fixed number of particles, and phase is indeterminate. A coherent state distributes its quantum-mechanical uncertainty equally between the canonically conjugate coordinates, position and momentum, and the relative uncertainties in phase [defined heuristically] and amplitude are roughly equal – and small at high amplitude.

Quantum mechanical definition

Mathematically, a coherent state $|\alpha\rangle$ is defined to be the (unique) eigenstate of the annihilation operator â associated to the eigenvalue α. Formally, this reads
$$\hat a\,|\alpha\rangle = \alpha\,|\alpha\rangle.$$
Since â is not Hermitian, α is, in general, a complex number. Writing $\alpha = |\alpha|\,e^{i\theta}$, |α| and θ are called the amplitude and phase of the state $|\alpha\rangle$.
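This defining eigenvalue property can be checked numerically in a truncated Fock space. The sketch below (an added illustration; the truncation size N is an arbitrary choice) constructs $|\alpha\rangle$ by applying the displacement operator $D(\alpha) = e^{\alpha\hat a^\dagger - \alpha^*\hat a}$, discussed further below, to the vacuum, and verifies $\hat a|\alpha\rangle \approx \alpha|\alpha\rangle$:

```python
import numpy as np
from scipy.linalg import expm

N = 60                    # Fock-space truncation (an arbitrary choice)
alpha = 1.5 + 0.5j

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Displacement operator D(alpha) = exp(alpha a^dag - alpha* a)
D = expm(alpha * a.conj().T - np.conj(alpha) * a)

vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
state = D @ vac           # |alpha> = D(alpha)|0>

# Defining property a|alpha> = alpha|alpha> (exact only as N -> infinity)
print(np.allclose(a @ state, alpha * state, atol=1e-8))   # True
```

The agreement is limited only by the truncation: the coefficients of $|\alpha\rangle$ decay rapidly beyond $n \sim |\alpha|^2$, so a modest N suffices.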
The state $|\alpha\rangle$ is called a canonical coherent state in the literature, since there are many other types of coherent states, as can be seen in the companion article Coherent states in mathematical physics. Physically, this formula means that a coherent state remains unchanged by the annihilation of a field excitation or, say, a particle. An eigenstate of the annihilation operator has a Poissonian number distribution when expressed in a basis of energy eigenstates, as shown below. A Poisson distribution is a necessary and sufficient condition that all detections are statistically independent. Compare this to a single-particle state (the $|1\rangle$ Fock state): once one particle is detected, there is zero probability of detecting another.

The derivation of this will make use of dimensionless operators, X and P, normally called field quadratures in quantum optics. (See Nondimensionalization.) These operators are related to the position and momentum operators of a mass m on a spring with constant k,
$$X = \sqrt{\frac{m\omega}{2\hbar}}\,x, \qquad P = \frac{1}{\sqrt{2m\omega\hbar}}\,p, \qquad \omega = \sqrt{k/m}.$$

Figure 4: The probability of detecting n photons, the photon number distribution, of the coherent state in Figure 3. As is necessary for a Poissonian distribution, the mean photon number is equal to the variance of the photon number distribution. Bars refer to theory, dots to experimental values.

For an optical field, X and P are the real and imaginary components of the mode of the electric field inside a cavity of volume V. With these (dimensionless) operators, the Hamiltonian of either system becomes
$$H = \hbar\omega\left(X^2 + P^2\right).$$
Erwin Schrödinger was searching for the most classical-like states when he first introduced minimum uncertainty Gaussian wave-packets. The quantum state of the harmonic oscillator that minimizes the uncertainty relation, with uncertainty equally distributed between X and P, satisfies the equation
$$(X - \langle X\rangle)\,|\alpha\rangle = -i\,(P - \langle P\rangle)\,|\alpha\rangle,$$
or, equivalently,
$$(X + iP)\,|\alpha\rangle = \langle X + iP\rangle\,|\alpha\rangle,$$
and hence
$$\Delta X = \Delta P = \tfrac{1}{2}.$$
Thus, given (∆X−∆P)² ≥ 0, Schrödinger found that the minimum uncertainty states for the linear harmonic oscillator are the eigenstates of (X+iP). Since â is (X+iP), this is recognizable as a coherent state in the sense of the above definition.

Using the notation for multi-photon states, Glauber characterized the state of complete coherence to all orders in the electromagnetic field to be the eigenstate of the annihilation operator – formally, in a mathematical sense, the same state as found by Schrödinger. The name coherent state took hold after Glauber's work. If the uncertainty is minimized, but not necessarily equally balanced between X and P, the state is called a squeezed coherent state.

The coherent state's location in the complex plane (phase space) is centered at the position and momentum of a classical oscillator of the phase θ and amplitude |α| given by the eigenvalue α (or the same complex electric field value for an electromagnetic wave). As shown in Figure 5, the uncertainty, equally spread in all directions, is represented by a disk with diameter 1⁄2. As the phase varies, the coherent state circles around the origin and the disk neither distorts nor spreads. This is the most similar a quantum state can be to a single point in phase space.

Figure 5: Phase space plot of a coherent state. This shows that the uncertainty in a coherent state is equally distributed in all directions. The horizontal and vertical axes are the X and P quadratures of the field, respectively (see text). The red dots on the x-axis trace out the boundaries of the quantum noise in Figure 1. For more detail, see the corresponding figure of the phase space formulation.
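The quadrature conventions written out above (our reconstruction of formulas lost from the source) can be sanity-checked symbolically: with x and p treated as commuting symbols, so that only the classical form and not operator ordering is tested, $\hbar\omega(X^2 + P^2)$ must reduce to the classical oscillator energy $p^2/2m + kx^2/2$:

```python
import sympy as sp

x, p, m, omega, hbar, k = sp.symbols('x p m omega hbar k', positive=True)

# Dimensionless quadratures with the conventions written out above
X = sp.sqrt(m * omega / (2 * hbar)) * x
P = p / sp.sqrt(2 * m * omega * hbar)

H = hbar * omega * (X**2 + P**2)

# Classical oscillator energy, with omega**2 = k/m
E_cl = p**2 / (2 * m) + m * omega**2 * x**2 / 2
print(sp.simplify(H - E_cl))                              # -> 0
print(sp.simplify(E_cl.subs(omega, sp.sqrt(k / m))
                  - (p**2 / (2 * m) + k * x**2 / 2)))     # -> 0
```

With these conventions one also finds [X, P] = i/2, which is what fixes the balanced uncertainties ΔX = ΔP = 1/2 quoted above.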
Since the uncertainty (and hence measurement noise) stays constant at 1⁄2 as the amplitude of the oscillation increases, the state behaves increasingly like a sinusoidal wave, as shown in Figure 1. Moreover, since the vacuum state $|0\rangle$ is just the coherent state with α=0, all coherent states have the same uncertainty as the vacuum. Therefore, one may interpret the quantum noise of a coherent state as being due to vacuum fluctuations.

The notation $|\alpha\rangle$ does not refer to a Fock state. For example, when α=1, one should not mistake $|\alpha{=}1\rangle$ for the single-photon Fock state, which is also denoted $|1\rangle$ in its own notation. The expression $|\alpha\rangle$ with α=1 represents a Poisson distribution of number states $|n\rangle$ with a mean photon number of unity.

The formal solution of the eigenvalue equation is the vacuum state displaced to a location α in phase space, i.e., it is obtained by letting the unitary displacement operator D(α) operate on the vacuum,
$$|\alpha\rangle = e^{\alpha\hat a^\dagger - \alpha^*\hat a}\,|0\rangle = D(\alpha)\,|0\rangle,$$
where â = X+iP and â† = X−iP. This can be easily seen, as can virtually all results involving coherent states, using the representation of the coherent state in the basis of Fock states,
$$|\alpha\rangle = e^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}\,|n\rangle,$$
where $|n\rangle$ are energy (number) eigenvectors of the Hamiltonian $H = \hbar\omega\left(\hat a^\dagger\hat a + \tfrac12\right)$. For the corresponding Poissonian distribution, the probability of detecting n photons is
$$P(n) = |\langle n|\alpha\rangle|^2 = e^{-\langle\hat n\rangle}\,\frac{\langle\hat n\rangle^n}{n!}.$$
Similarly, the average photon number in a coherent state is $\langle\hat n\rangle = |\alpha|^2$ and the variance is $(\Delta\hat n)^2 = |\alpha|^2$. That is, the standard deviation of the number detected goes like the square root of the number detected. So in the limit of large α, these detection statistics are equivalent to those of a classical stable wave.

These results apply to detection results at a single detector and thus relate to first-order coherence (see degree of coherence). However, for measurements correlating detections at multiple detectors, higher-order coherence is involved (e.g., intensity correlations, second-order coherence, at two detectors). Glauber's definition of quantum coherence involves nth-order correlation functions (nth-order coherence) for all n. The perfect coherent state has all n orders of correlation equal to 1 (coherent). It is perfectly coherent to all orders.

Roy J. Glauber's work was prompted by the results of Hanbury-Brown and Twiss, who produced long-range (hundreds or thousands of miles) first-order interference patterns through the use of intensity fluctuations (lack of second-order coherence), with narrow-band filters (partial first-order coherence) at each detector. (One can imagine, over very short durations, a near-instantaneous interference pattern from the two detectors, due to the narrow-band filters, that dances around randomly due to the shifting relative phase difference. With a coincidence counter, the dancing interference pattern would be stronger at times of increased intensity [common to both beams], and that pattern would be stronger than the background noise.) Almost all of optics had been concerned with first-order coherence. The Hanbury-Brown and Twiss results prompted Glauber to look at higher-order coherence, and he came up with a complete quantum-theoretic description of coherence to all orders in the electromagnetic field (and a quantum-theoretic description of signal-plus-noise). He coined the term coherent state and showed that they are produced when a classical electric current interacts with the electromagnetic field.

At α ≫ 1, from Figure 5, simple geometry gives Δθ·|α| = 1/2.
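Using this Fock-basis expansion, the Poissonian statistics are easy to reproduce numerically (an added sketch; the truncation and the value of α are arbitrary):

```python
import numpy as np
from math import factorial

N = 40                    # Fock-space truncation (arbitrary)
alpha = 1.5 + 0.5j

# |alpha> = e^{-|alpha|^2/2} * sum_n alpha^n / sqrt(n!) |n>
n = np.arange(N)
c = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
    / np.sqrt([float(factorial(j)) for j in n])

P = np.abs(c) ** 2        # photon number distribution P(n)
mean = (n * P).sum()
var = ((n - mean) ** 2 * P).sum()

# Poissonian statistics: mean = variance = |alpha|^2 = 2.5
print(round(mean, 6), round(var, 6), abs(alpha) ** 2)
```

Mean and variance agree with $|\alpha|^2$ to the truncation error, the defining signature of a Poisson distribution.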
From this, it appears that there is a tradeoff between number uncertainty and phase uncertainty, Δθ Δn = 1/2, which is sometimes interpreted as a number-phase uncertainty relation; but this is not a formal, strict uncertainty relation: there is no uniquely defined phase operator in quantum mechanics.[11][12][13][14][15][16][17][18]

The wavefunction of a coherent state

[Animation: time evolution of the probability distribution, with the quantum phase shown in color, of a coherent state with α=3.]

To find the wavefunction of the coherent state, the minimal uncertainty Schrödinger wave packet, it is easiest to start with the Heisenberg picture of the quantum harmonic oscillator for the coherent state $|\alpha\rangle$. Note that
$$\hat a(t) = \hat a(0)\,e^{-i\omega t}.$$
The coherent state is an eigenstate of the annihilation operator in the Heisenberg picture. It is easy to see that, in the Schrödinger picture, the same eigenvalue,
$$\alpha(t) = \alpha\,e^{-i\omega t},$$
occurs. In the coordinate representation resulting from operating by ⟨x|, this amounts to a first-order differential equation in x, which is easily solved to yield a Gaussian wave packet
$$\psi^{(\alpha)}(x,t) \propto \exp\!\Big(-\frac{m\omega}{2\hbar}\big(x - \langle x\rangle(t)\big)^2 + \frac{i\,\langle p\rangle(t)\,x}{\hbar} + i\theta(t)\Big),$$
where θ(t) is a yet undetermined phase, to be fixed by demanding that the wavefunction satisfies the Schrödinger equation. It follows that θ(t) is linear in time, and σ below is the initial phase of the eigenvalue, $\alpha = |\alpha|e^{i\sigma}$. The mean position and momentum of this "minimal Schrödinger wave packet" ψ(α) are thus oscillating just like a classical system,
$$\langle x\rangle(t) = \sqrt{\frac{2\hbar}{m\omega}}\,|\alpha|\cos(\sigma - \omega t), \qquad \langle p\rangle(t) = -\sqrt{2m\hbar\omega}\,|\alpha|\sin(\sigma - \omega t).$$
The probability density remains a Gaussian of constant width centered on this oscillating mean,
$$|\psi^{(\alpha)}(x,t)|^2 = \sqrt{\frac{m\omega}{\pi\hbar}}\,\exp\!\Big(-\frac{m\omega}{\hbar}\big(x - \langle x\rangle(t)\big)^2\Big).$$

Mathematical features of the canonical coherent states

The canonical coherent states described so far have three properties that are mutually equivalent, since each of them completely specifies the state $|\alpha\rangle$, namely,

1. They are eigenvectors of the annihilation operator: $\hat a\,|\alpha\rangle = \alpha\,|\alpha\rangle$.
2. They are obtained from the vacuum by application of a unitary displacement operator: $|\alpha\rangle = e^{\alpha\hat a^\dagger - \alpha^*\hat a}|0\rangle = D(\alpha)|0\rangle$.
3. They are states of (balanced) minimal uncertainty: $\Delta X = \Delta P = \tfrac12$.

Each of these properties may lead to generalizations, in general different from each other (see the article "Coherent states in mathematical physics" for some of these). We emphasize that coherent states have mathematical features that are very different from those of a Fock state; for instance, two different coherent states are not orthogonal,
$$|\langle\beta|\alpha\rangle|^2 = e^{-|\alpha - \beta|^2}$$
(linked to the fact that they are eigenvectors of the non-self-adjoint annihilation operator â). Thus, if the oscillator is in the quantum state $|\alpha\rangle$, it is also with nonzero probability in the other quantum state $|\beta\rangle$ (but the farther apart the states are situated in phase space, the lower the probability is). However, since they obey a closure relation, any state can be decomposed on the set of coherent states. They hence form an overcomplete basis, in which one can diagonally decompose any state. This is the premise for the Sudarshan-Glauber P representation. This closure relation can be expressed by the resolution of the identity operator I in the vector space of quantum states,
$$I = \frac{1}{\pi}\int |\alpha\rangle\langle\alpha|\;d^2\alpha.$$
This resolution of the identity is intimately connected to the Segal–Bargmann transform.

Another peculiarity is that $\hat a^\dagger$ has no eigenket (while â has no eigenbra). The following equality is the closest formal substitute, and turns out to be useful for technical computations,
$$\hat a^\dagger|\alpha\rangle = \left(\frac{\partial}{\partial\alpha} + \frac{\alpha^*}{2}\right)|\alpha\rangle.$$
This last state is known as an "Agarwal state" or photon-added coherent state. Normalized Agarwal states of order n can be expressed as $|\alpha, n\rangle = \hat a^{\dagger\,n}|\alpha\rangle \,/\, \|\hat a^{\dagger\,n}|\alpha\rangle\|$.[19]

The above resolution of the identity may be derived (restricting to one spatial dimension for simplicity) by taking matrix elements between eigenstates of position on both sides of the equation. On the right-hand side, this immediately gives δ(x−y).
On the left-hand side, the same is obtained by inserting the coherent-state wavefunctions from the previous section (time is arbitrary), then integrating over the imaginary part of α using the Fourier representation of the delta function, and then performing a Gaussian integral over the real part of α. In particular, the Gaussian Schroedinger wavepacket state follows from the explicit position representation ⟨x|α⟩ given above.

The resolution of the identity may also be expressed in terms of particle position and momentum. For each coordinate dimension (using an adapted notation in which the coherent state is labelled by x and p), the closure relation of coherent states reads
$$I = \int\!\!\int |x,p\rangle\langle x,p|\;\frac{dx\,dp}{2\pi\hbar}.$$
This can be inserted in any quantum-mechanical expectation value, relating it to some quasi-classical phase-space integral and explaining, in particular, the origin of the normalisation factors $(2\pi\hbar)^{-1}$ per dimension for classical partition functions, consistent with quantum mechanics.

In addition to being an exact eigenstate of annihilation operators, a coherent state is an approximate common eigenstate of particle position and momentum. Restricting to one dimension again,
$$\hat x\,|x,p\rangle \approx x\,|x,p\rangle, \qquad \hat p\,|x,p\rangle \approx p\,|x,p\rangle.$$
The error in these approximations is measured by the uncertainties of position and momentum, Δx and Δp.

Thermal coherent state

A single-mode thermal coherent state[20] is produced by displacing a thermal mixed state in phase space, in direct analogy to the displacement of the vacuum state in view of generating a coherent state. The density matrix of a coherent thermal state in operator representation reads
$$\rho = \frac{1}{Z}\,D(\alpha)\,e^{-\beta\hbar\omega\,\hat a^\dagger\hat a}\,D^\dagger(\alpha),$$
where D(α) is the displacement operator which generates the coherent state $|\alpha\rangle$ with complex amplitude α, and $\beta = 1/(k_B T)$. The partition function is equal to
$$Z = \mathrm{Tr}\left\{e^{-\beta\hbar\omega\,\hat a^\dagger\hat a}\right\} = \sum_{n=0}^{\infty} e^{-n\beta\hbar\omega} = \frac{1}{1 - e^{-\beta\hbar\omega}}.$$
Using the expansion of the unity operator in Fock states, $I = \sum_n |n\rangle\langle n|$, the density operator definition can be expressed in the following form
$$\rho = \frac{1}{Z}\sum_{n} e^{-n\beta\hbar\omega}\,D(\alpha)\,|n\rangle\langle n|\,D^\dagger(\alpha),$$
where $D(\alpha)|n\rangle$ stands for the displaced Fock state. We remark that if the temperature goes to zero we have
$$\lim_{\beta\to\infty}\rho = |\alpha\rangle\langle\alpha|,$$
which is the density matrix for a coherent state. The average number of photons in that state can be calculated as
$$\langle\hat n\rangle = \mathrm{Tr}\left\{\rho\,\hat a^\dagger\hat a\right\} = |\alpha|^2 + \bar n,$$
where $\bar n$ is the average of the photon number calculated with respect to the thermal state, $\bar n = \left(e^{\beta\hbar\omega} - 1\right)^{-1}$. In the limit $\beta \to \infty$ we obtain $\langle\hat n\rangle = |\alpha|^2$, which is consistent with the expression for the density matrix operator at zero temperature. Likewise, the photon number variance can be evaluated as
$$(\Delta\hat n)^2 = \bar n^2 + \bar n + |\alpha|^2\,(2\bar n + 1).$$
We deduce that the second moment cannot be uncoupled from the thermal and the quantum distribution moments, unlike the average value (first moment). In that sense, the photon statistics of the displaced thermal state are not described by the sum of the Poisson statistics and the Boltzmann statistics. The distribution of the initial thermal state in phase space broadens as a result of the coherent displacement.

Coherent states of Bose–Einstein condensates

• A Bose–Einstein condensate (BEC) is a collection of boson atoms that are all in the same quantum state. In a thermodynamic system, the ground state becomes macroscopically occupied below a critical temperature – roughly when the thermal de Broglie wavelength is longer than the interatomic spacing. Superfluidity in liquid Helium-4 is believed to be associated with the Bose–Einstein condensation in an ideal gas. But 4He has strong interactions, and the liquid structure factor (a 2nd-order statistic) plays an important role.
The use of a coherent state to represent the superfluid component of 4He provided a good estimate of the condensate / non-condensate fractions in superfluidity, consistent with results of slow neutron scattering.[21][22][23] Most of the special superfluid properties follow directly from the use of a coherent state to represent the superfluid component – which acts as a macroscopically occupied single-body state with well-defined amplitude and phase over the entire volume. (The superfluid component of 4He goes from zero at the transition temperature to 100% at absolute zero. But the condensate fraction is about 6%[24] at absolute zero temperature, T=0K.)

• Early in the study of superfluidity, Penrose and Onsager proposed a metric ("order parameter") for superfluidity.[25] It was represented by a macroscopic factored component (a macroscopic eigenvalue) in the first-order reduced density matrix. Later, C. N. Yang[26] proposed a more generalized measure of macroscopic quantum coherence, called "Off-Diagonal Long-Range Order" (ODLRO),[26] that included fermion as well as boson systems. ODLRO exists whenever there is a macroscopically large factored component (eigenvalue) in a reduced density matrix of any order. Superfluidity corresponds to a large factored component in the first-order reduced density matrix. (And all higher-order reduced density matrices behave similarly.) Superconductivity involves a large factored component in the 2nd-order ("Cooper electron-pair") reduced density matrix.

• The reduced density matrices used to describe macroscopic quantum coherence in superfluids are formally the same as the correlation functions used to describe orders of coherence in radiation. Both are examples of macroscopic quantum coherence. The macroscopically large coherent component, plus noise, in the electromagnetic field, as given by Glauber's description of signal-plus-noise, is formally the same as the macroscopically large superfluid component plus normal fluid component in the two-fluid model of superfluidity.

• Everyday electromagnetic radiation, such as radio and TV waves, is also an example of nearly coherent states (macroscopic quantum coherence). That should "give one pause" regarding the conventional demarcation between quantum and classical.

• The coherence in superfluidity should not be attributed to any subset of helium atoms; it is a kind of collective phenomenon in which all the atoms are involved (similar to Cooper pairing in superconductivity, as indicated in the next section).

Coherent electron states in superconductivity

• Electrons are fermions, but when they pair up into Cooper pairs they act as bosons, and so can collectively form a coherent state at low temperatures. This pairing is not actually between electrons, but in the states available to the electrons moving in and out of those states.[27] Cooper pairing refers to the first model for superconductivity.[28]

• These coherent states are part of the explanation of effects such as the Quantum Hall effect in low-temperature superconducting semiconductors.

• According to Gilmore and Perelomov, who showed it independently, the construction of coherent states may be seen as a problem in group theory, and thus coherent states may be associated to groups different from the Heisenberg group, which leads to the canonical coherent states discussed above.[29][30][31][32] Moreover, these coherent states may be generalized to quantum groups.
These topics, with references to original work, are discussed in detail in Coherent states in mathematical physics.

• In quantum field theory and string theory, a generalization of coherent states to the case of infinitely many degrees of freedom is used to define a vacuum state with a vacuum expectation value different from that of the original vacuum.
• In one-dimensional many-body quantum systems with fermionic degrees of freedom, low-energy excited states can be approximated as coherent states of a bosonic field operator that creates particle-hole excitations. This approach is called bosonization.
• The Gaussian coherent states of nonrelativistic quantum mechanics can be generalized to relativistic coherent states of Klein-Gordon and Dirac particles.[33][34][35]
• Coherent states have also appeared in works on loop quantum gravity or for the construction of (semi)classical canonical quantum general relativity.[36][37]

References

1. Schrödinger, E. (1926). "Der stetige Übergang von der Mikro- zur Makromechanik". Die Naturwissenschaften 14 (28): 664–666. doi:10.1007/bf01507634.
2. J.R. Klauder and B. Skagerstam, Coherent States, World Scientific, Singapore, 1985.
3. L.I. Schiff, Quantum Mechanics, McGraw Hill, New York, 1955.
4. Klauder, John R. (1960). "The action option and a Feynman quantization of spinor fields in terms of ordinary c-numbers". Annals of Physics 11 (2): 123–168. doi:10.1016/0003-4916(60)90131-7.
5. Breitenbach, G.; Schiller, S.; Mlynek, J. (1997). "Measurement of the quantum states of squeezed light". Nature 387 (6632): 471–475. doi:10.1038/387471a0.
6. Zhang, Wei-Min; Feng, Da Hsuan; Gilmore, Robert (1990). "Coherent states: Theory and some applications". Reviews of Modern Physics 62 (4): 867–927. doi:10.1103/revmodphys.62.867.
7. J-P. Gazeau, Coherent States in Quantum Physics, Wiley-VCH, Berlin, 2009.
8. Glauber, Roy J. (1963). "Coherent and Incoherent States of the Radiation Field". Physical Review 131 (6): 2766–2788. doi:10.1103/physrev.131.2766.
9. Sudarshan, E. C. G. (1963). "Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams". Physical Review Letters 10 (7): 277–279. doi:10.1103/physrevlett.10.277.
10. Schwinger, Julian (1953). "The Theory of Quantized Fields. III". Physical Review 91 (3): 728–740. doi:10.1103/physrev.91.728.
11. L. Susskind and J. Glogower, "Quantum mechanical phase and time operator", Physics 1 (1963) 49.
12. Carruthers, P.; Nieto, Michael Martin (1968). "Phase and Angle Variables in Quantum Mechanics". Reviews of Modern Physics 40 (2): 411–440. doi:10.1103/revmodphys.40.411.
13. Barnett, S.M.; Pegg, D.T. (1989). "On the Hermitian Optical Phase Operator". Journal of Modern Optics 36 (1): 7–19. doi:10.1080/09500348914550021.
14. Busch, P.; Grabowski, M.; Lahti, P.J. (1995). "Who Is Afraid of POV Measures? Unified Approach to Quantum Phase Observables". Annals of Physics 237 (1): 1–11. doi:10.1006/aphy.1995.1001.
15. Dodonov, V.V. (2002).
"'Nonclassical' states in quantum optics: a 'squeezed' review of the first 75 years". Journal of Optics B: Quantum and Semiclassical Optics. IOP Publishing. 4 (1): R1–R33. doi:10.1088/1464-4266/4/1/201. ISSN 1464-4266. 16. ^ V.V. Dodonov and V.I.Man'ko (eds), Theory of Nonclassical States of Light, Taylor \& Francis, London, New York, 2003. 17. ^ Vourdas, A (2006-02-01). "Analytic representations in quantum mechanics". Journal of Physics A: Mathematical and General. IOP Publishing. 39 (7): R65–R141. doi:10.1088/0305-4470/39/7/r01. ISSN 0305-4470. 19. ^ Agarwal, G. S.; Tara, K. (1991-01-01). "Nonclassical properties of states generated by the excitations on a coherent state". Physical Review A. 43 (1): 492–497. Bibcode:1991PhRvA..43..492A. doi:10.1103/PhysRevA.43.492. 20. ^ Oz-Vogt, J.; Mann, A.; Revzen, M. (1991). "Thermal Coherent States and Thermal Squeezed States". Journal of Modern Optics. Informa UK Limited. 38 (12): 2339–2347. doi:10.1080/09500349114552501. ISSN 0950-0340. 21. ^ Hyland, G.J.; Rowlands, G.; Cummings, F.W. (1970). "A proposal for an experimental determination of the equilibrium condensate fraction in superfluid helium". Physics Letters A. Elsevier BV. 31 (8): 465–466. doi:10.1016/0375-9601(70)90401-9. ISSN 0375-9601. 22. ^ Mayers, J. (2004-04-01). "Bose-Einstein Condensation, Phase Coherence, and Two-Fluid Behavior in 4He". Physical Review Letters. American Physical Society (APS). 92 (13): 135302. doi:10.1103/physrevlett.92.135302. ISSN 0031-9007. 23. ^ Mayers, J. (2006-07-26). "Bose-Einstein condensation and two fluid behavior in 4He". Physical Review B. American Physical Society (APS). 74 (1): 014516. doi:10.1103/physrevb.74.014516. ISSN 1098-0121. 24. ^ Olinto, A. C. (1987-04-01). "Condensate fraction in superfluidHe4". Physical Review B. American Physical Society (APS). 35 (10): 4771–4774. doi:10.1103/physrevb.35.4771. ISSN 0163-1829. 25. ^ Penrose, Oliver; Onsager, Lars (1956-11-01). "Bose-Einstein Condensation and Liquid Helium". Physical Review. American Physical Society (APS). 104 (3): 576–584. doi:10.1103/physrev.104.576. ISSN 0031-899X. 26. ^ a b Yang, C. N. (1962-10-01). "Concept of Off-Diagonal Long-Range Order and the Quantum Phases of Liquid He and of Superconductors". Reviews of Modern Physics. American Physical Society (APS). 34 (4): 694–704. doi:10.1103/revmodphys.34.694. ISSN 0034-6861. 27. ^ [see John Bardeen's chapter in: Cooperative Phenomena, eds. H. Haken and M. Wagner (Springer-Verlag, Berlin, Heidelberg, New York, 1973)] 28. ^ Bardeen, J.; Cooper, L. N.; Schrieffer, J. R. (1957-12-01). "Theory of Superconductivity". Physical Review. American Physical Society (APS). 108 (5): 1175–1204. doi:10.1103/physrev.108.1175. ISSN 0031-899X. 29. ^ A. M. Perelomov, Coherent states for arbitrary Lie groups, Commun. Math. Phys. 26 (1972) 222-236; arXiv: math-ph/0203002. 30. ^ A. Perelomov, Generalized coherent states and their applications, Springer, Berlin 1986. 31. ^ Gilmore, Robert (1972). "Geometry of symmetrized states". Annals of Physics. Elsevier BV. 74 (2): 391–463. doi:10.1016/0003-4916(72)90147-9. ISSN 0003-4916. 32. ^ Gilmore, R. (1974). "On properties of coherent states" (PDF). Revista Mexicana de Física. 23 (1–2): 143–187. 33. ^ G. Kaiser, Quantum Physics, Relativity, and Complex Spacetime: Towards a New Synthesis, North-Holland, Amsterdam, 1990. 34. ^ S.T. Ali, J-P. Antoine, and J-P. Gazeau, Coherent States, Wavelets and Their Generalizations, Springer-Verlag, New York, Berlin, Heidelberg, 2000. 35. ^ Anastopoulos, Charis (2004-08-25). 
"Generalized coherent states for spinning relativistic particles". Journal of Physics A: Mathematical and General. 37 (36): 8619–8637. arXiv:quant-ph/0312025. doi:10.1088/0305-4470/37/36/004. ISSN 0305-4470. 36. ^ Ashtekar, Abhay; Lewandowski, Jerzy; Marolf, Donald; Mourão, José; Thiemann, Thomas (1996). "Coherent State Transforms for Spaces of Connections". Journal of Functional Analysis. 135 (2): 519–551. arXiv:gr-qc/9412014. doi:10.1006/jfan.1996.0018. ISSN 0022-1236. 37. ^ Sahlmann, H.; Thiemann, T.; Winkler, O. (2001). "Coherent states for canonical quantum general relativity and the infinite tensor product extension". Nuclear Physics B. Elsevier BV. 606 (1–2): 401–440. arXiv:gr-qc/0102038. doi:10.1016/s0550-3213(01)00226-7. ISSN 0550-3213.
TU Berlin

Java visualisations for quantum mechanics

Wave packet dispersion
This applet visualizes the behaviour of a free wave packet changing with time. The wave number can be set by the user. In contrast to the "classical" model, a quantum mechanical particle can "dissolve" with time. However, for macroscopic items, this takes a very long time.

Probability current in the one-dimensional potential well
WStromdichte – One may observe solutions of the time-dependent Schrödinger equation of a particle in a potential well. Through superimposing the first 10 eigenfunctions, one can observe the dynamics of the quantum particle. Many properties of this system can be calculated analytically (see exercise 8 in "Theoretische Physik II SS08") and it can be regarded as a simplified quantum-dot system.

Two-level system
This Java applet displays the behaviour of a two-level quantum system stimulated by different types of pulses. It describes the transition of an electron through the excitation pulse. The laser is a realisation of that system.

Electron-phonon interaction
An extension of the two-level quantum system: damped Rabi oscillations. In contrast to the basic version, this applet allows one to observe effects which occur e.g. in semiconductors with indirect transitions.

Mathematica visualisations for quantum mechanics

The Schrödinger equation defines the shape and distribution of the electron orbitals. This Java WebStart application lets the user change the first three quantum numbers and provides a three-dimensional view of the orbitals, which can be scaled and rotated.

This Java application visualizes the spherical harmonic functions in 3D and in 2D slices. The spherical harmonic functions play an essential role in describing angular momentum in quantum mechanics. Furthermore, they describe the angular dependence of hydrogen atom orbitals.

Vary the height of a potential well or barrier and observe the scattering of a traversing Schrödinger wave. In contrast to the applet seen above, a finite height of the well is used. Download the .nbp file and execute with Wolfram CDF Player or Mathematica. (see note below)
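The physics behind the first applet can be reproduced in a few lines. Below is a minimal sketch (grid size, initial width, and wave number are assumed, illustrative values) that evolves a free Gaussian packet exactly in momentum space and prints how its width grows:

```python
import numpy as np

# Free wave-packet dispersion, units hbar = m = 1.
N, L = 2048, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

sigma0, k0 = 1.0, 2.0                      # assumed initial width and wave number
psi0 = (2*np.pi*sigma0**2)**-0.25 * np.exp(-x**2/(4*sigma0**2) + 1j*k0*x)

def evolve(psi, t):
    """Exact free evolution: multiply by exp(-i k^2 t / 2) in k-space."""
    return np.fft.ifft(np.exp(-1j*k**2*t/2) * np.fft.fft(psi))

for t in (0.0, 5.0, 10.0):
    p = np.abs(evolve(psi0, t))**2
    p /= np.trapz(p, x)
    mean = np.trapz(x*p, x)
    width = np.sqrt(np.trapz((x - mean)**2*p, x))
    # width grows as sigma0*sqrt(1 + (t/(2*sigma0**2))**2): the packet "dissolves"
    print(f"t={t:5.1f}  <x>={mean:6.2f}  width={width:5.2f}")
```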
Some books, like Griffiths's, begin quantum mechanics with the Schrödinger equation as a postulate, while some other textbooks derive it and state $[x,p]=i \hbar$ as an axiom. I'm not sure which one came first, as the operator $\hat{p}$ can be derived from both. Also I'd like to know how the commutation relation came about, because the textbooks I have don't mention it.

• Basically, matrix mechanics — the formulation of quantum mechanics due to Werner Heisenberg, Max Born, and Pascual Jordan in 1925 — and wave mechanics, formulated in late 1925 and published in 1926 by Erwin Schrödinger, were independent mathematical formulations of q.m. 1/2 – Mauro ALLEGRANZA Dec 7 '15 at 18:45
• ... "Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics." 2/2 – Mauro ALLEGRANZA Dec 7 '15 at 18:45

If you are asking "which came first" historically, the answer is clear:

The commutation relation was written for the first time in the paper of Born and Jordan, On Quantum Mechanics, submitted on September 27, 1925. It is formula (38) in this paper. An English translation is available in Van der Waerden's book, Sources of Quantum Mechanics.

The Schrödinger equation was published in Annalen der Physik by Schrödinger in his paper Quantisierung als Eigenwertproblem, submitted in January 1926. (I don't know whether there is an English translation.)

If you are asking "what comes first" logically, it depends on the system of exposition. In every theory we are free to choose what to take as axioms and how to develop it from the axioms. In most modern expositions, the Schrödinger equation is taken as an axiom. The system where the commutation relation "comes first" is called "matrix" mechanics, and you can read about it in Dirac's book.

• Is there a derivation for the commutation relation in his book? – Weezy Dec 8 '15 at 8:29
• Axioms are not derived, by definition. There are physical reasons behind the "axioms" in physical theories. – Alexandre Eremenko Dec 8 '15 at 13:49
• I believe that the non-trivial commutation relation between x and p appeared first in Heisenberg's seminal work, which precedes Born and Jordan's. Perhaps in a disguised form and certainly with unclear interpretation. – Diracology Nov 19 '17 at 1:21
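Whatever the logical ordering, the commutation relation is easy to verify numerically on a grid. The sketch below (grid size and test state are arbitrary choices) applies $[x,p]$ to a smooth wave function; note that no finite-dimensional matrix pair can satisfy $[X,P]=i\hbar\,\mathbb{1}$ exactly, since a commutator is traceless, so the relation only holds when acting on states that vanish near the boundary:

```python
import numpy as np

# Check [x, p] psi = i psi on a grid (hbar = 1).
N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L/N
psi = np.exp(-x**2) * (1 + 0.3*x)          # an arbitrary smooth test state

def p_op(f):
    """p = -i d/dx via central differences."""
    return -1j * np.gradient(f, dx)

lhs = x * p_op(psi) - p_op(x * psi)        # [x, p] acting on psi
print(np.max(np.abs(lhs - 1j*psi)))        # small, vanishing as O(dx**2)
```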
Spin-Rotation Coupling
October 16, 2017, 10:29 am

The influence of rotation on the phase of light passing through an optical interferometer has been theoretically investigated for over a century. The Sagnac effect, named after the French physicist Georges Sagnac, that is, the observation of a phase shift proportional to the scalar product of the rotation frequency and the area of the interferometer [1], provided an empirical basis for a rich field of both fundamental and applied research into the influence of rotation on the phase of a quantum mechanical wave function. (Figure: Sagnac's original drawings (left) and a schematic illustration of the rotating interferometer (right).)

In order to calculate the phase shift of our Sagnac interferometer, consider a circular interferometer of radius $r$ rotating in its plane at an angular frequency $\Omega$. The times taken for the two beams (of particles) to complete one circuit of the interferometer, $t_\pm$, are obtained from $v\,t_\pm = 2\pi r \pm \Omega r\,t_\pm$, where $v$ is the speed of propagation of our particles around the Sagnac loop. Thus we get $t_\pm = 2\pi r/(v \mp \Omega r)$, and therefore the difference in propagation time for the two counter-propagating beams, $\Delta t = t_+ - t_-$, is given by $\Delta t = 4\pi r^2\Omega/(v^2 - \Omega^2 r^2)$. The area of the interferometer is $A = \pi r^2$ and in most cases one can assume $\Omega r \ll v$, so that $\Delta t \approx 4A\Omega/v^2$. For the phase difference, given by $\Delta\phi = k v\,\Delta t$ with $k = mv/\hbar$ the wave number of the matter wave, we finally get (in three-dimensional space) $\Delta\phi = (4m/\hbar)\,\boldsymbol{\Omega}\cdot\mathbf{A}$, where $\mathbf{A} = A\hat{\mathbf{n}}$ and $\hat{\mathbf{n}}$ is a unit vector perpendicular to the surface area of the interferometer. If we now go to an interferometer in the form of a square (middle panel of the figure), the same first-order dependence on the enclosed area holds, $\Delta\phi \propto \boldsymbol{\Omega}\cdot\mathbf{A}$. In the case of light we have $k = 2\pi/\lambda$ and $v = c$, and the relative phase difference is expressed as $\Delta\phi = 8\pi\,\boldsymbol{\Omega}\cdot\mathbf{A}/(\lambda c)$.

The Sagnac effect may be regarded as a manifestation of the coupling of the orbital angular momentum of a particle, $\mathbf{L}$, to rotation. The Sagnac phase shift, as defined above, is a scalar quantity that is independent of the motion of the observer. From the standpoint of a rotating observer, it may naturally be extended to include the intrinsic spin of a quantum mechanical particle by replacing the orbital angular momentum $\mathbf{L}$ with the total angular momentum $\mathbf{J} = \mathbf{L} + \mathbf{S}$. This formalism consequently predicts that in the rotating frame, in addition to the Sagnac phase shift, a displacement of the interference fringes due to spin-rotation coupling will arise, proportional to $\boldsymbol{\Omega}\cdot\mathbf{S}$, as explicitly expressed in the first equation in the paper by Bahram Mashhoon [2].

For a deeper understanding of the spin-rotation coupling, an extension of the hypothesis of locality is used to determine the interference phase shift induced by the rotation of a neutron interferometer. The basic laws of physics are formulated with respect to ideal inertial observers. However, all actual observers are accelerated (e.g. by the rotation of the earth). To interpret the results of experiments, it is therefore necessary to establish a connection between actual and inertial observers. This is achieved in the standard theory of relativity by means of the hypothesis of locality, namely the assumption that an accelerated observer at each instant along its worldline is physically equivalent to an otherwise identical momentarily comoving inertial observer. Consider a particle of energy $E$ and momentum $\mathbf{p}$ with respect to an inertial frame $K$ with coordinates $(t, \mathbf{x})$, and a frame $K'$ with coordinates $(t', \mathbf{x}')$ that is related to $K$ by a uniform rotation of angular frequency $\boldsymbol{\Omega}$. An observer at rest in $K'$, moving with velocity $\mathbf{v} = \boldsymbol{\Omega}\times\mathbf{x}$ with respect to $K$, measures the energy of the particle.
According to the hypothesis of locality, the result is $E' = \gamma(E - \mathbf{v}\cdot\mathbf{p})$, where $\mathbf{v} = \boldsymbol{\Omega}\times\mathbf{x}$ and $\gamma = (1 - v^2/c^2)^{-1/2}$, which can be rewritten as $E' = \gamma(E - \boldsymbol{\Omega}\cdot\mathbf{L})$, since $\mathbf{v}\cdot\mathbf{p} = (\boldsymbol{\Omega}\times\mathbf{x})\cdot\mathbf{p} = \boldsymbol{\Omega}\cdot(\mathbf{x}\times\mathbf{p})$ and $\mathbf{L} = \mathbf{x}\times\mathbf{p}$ is the orbital angular momentum of the particle. For a general Hamiltonian this reads $H' = H - \boldsymbol{\Omega}\cdot\mathbf{L}$. The hypothesis of locality is thus valid for phenomena involving classical particles. However, (matter) wave properties in quantum mechanics, such as period and wavelength, require extended intervals of time and space, respectively, for their determination. It is therefore necessary to develop a prescription for the measurement of wave characteristics by accelerated observers. A generalization of the hypothesis of locality can be stated as follows: a wave function $\psi$ with respect to static observers in $K$ is represented in the uniformly rotating frame as the wave function $\psi' = U\psi$, where $U$ is a unitary operator given by $U = \exp(i\,\boldsymbol{\Omega}\cdot\mathbf{J}\,t/\hbar)$, and $\mathbf{J}$ is the total angular momentum, consisting of orbital and spin angular momentum, $\mathbf{J} = \mathbf{L} + \mathbf{S}$. The wave function $\psi$ satisfies the Schrödinger equation $i\hbar\,\partial\psi/\partial t = H\psi$, such that the wave function $\psi'$ satisfies the Schrödinger equation $i\hbar\,\partial\psi'/\partial t = H'\psi'$ with $H' = H - \boldsymbol{\Omega}\cdot\mathbf{J}$. A detailed comparison of the last equation with $H' = H - \boldsymbol{\Omega}\cdot\mathbf{L}$ reveals the existence of a new effect associated with the coupling of intrinsic spin with rotation, given by the Hamiltonian $\delta H = -\boldsymbol{\Omega}\cdot\mathbf{S}$.

It is interesting to investigate the observational consequences of this new effect with polarized neutron interferometry [3]. The proposed experimental realization for the observation of spin-rotation coupling is illustrated above (left). Neutrons coming from a source (S) are split into identical semicircular beams which are then made to interfere before being detected at D. Along the counterclockwise (clockwise) path, the neutron spin is assumed to be polarized parallel (antiparallel) to the direction of rotation of the interferometer. The wave function at the detector (D) is simply given by the superposition of the two path amplitudes, $\psi_D = \psi_+ + \psi_-$. The interference phase shift (in the quasi-classical approximation) is given by $\Delta\phi = \delta\phi_+ - \delta\phi_-$, where $\delta\phi_\pm$ is the difference between the phase of the neutron wave at the detector at time $t$ and the phase at the source at time $t - \tau$, with $\tau = \pi r/\bar{v}$. Here $\bar{v}$ is the neutron group speed in the inertial frame and $r$ is the radius of the interferometer. This can be calculated to first order in $\Omega$, resulting in two parts, $\Delta\phi = \Delta\phi_S + \Delta\phi_{sr}$, where $\Delta\phi_S = (2m/\hbar)\,\boldsymbol{\Omega}\cdot\mathbf{A}$, with neutron mass $m$ and $\mathbf{A}$ being the area of the interferometer, and $\Delta\phi_{sr} = \Omega\tau = \pi r\Omega/\bar{v}$. The latter effect is smaller than the Sagnac effect by the ratio of the (reduced) de Broglie wavelength of the neutron to the diameter of the interferometer, $\Delta\phi_{sr}/\Delta\phi_S = \bar{\lambda}/(2r)$. The Sagnac effect is proportional to the area of the interferometer, whereas the spin-rotation coupling phase shift is proportional to the length of the separate neutron paths. The Sagnac term has already been observed in experiments at the University of Missouri-Columbia [4] and MIT [5]. Since the effect of spin-rotation coupling on the neutron phase shift is very small, separating it from the much larger Sagnac effect would require a very sensitive interferometer. To overcome this difficulty, Sam A. Werner has proposed an experiment using a new type of interferometer which ideally would be insensitive to the Sagnac (and other gravity-related) effects, illustrated aside (right). In [2], Bahram Mashhoon proposed a setup where, in a neutron interferometer, longitudinally polarized neutrons pass through a slowly rotating spin flipper (instead of the interferometer itself being rotated) along one arm and a static spin flipper along the other arm, resulting in a beat phenomenon at the detector, as illustrated on the left side.
Keeping the interferometer stationary in the inertial frame of the laboratory, one of the coils is rotated with angular velocity $\Omega$ parallel to the neutron wave vector. The resulting shift in frequency of this beam induces a time-dependent interference intensity envelope of the form $I(t) \propto 1 + \cos(\Omega t + \phi_0)$, where $\phi_0$ is the phase shift due to the phase flag. Finally, Bahram Mashhoon proposed to replace the rotating spin flipper by a stationary quadrature coil, which produces a rotating magnetic field normal to the polarization axis of the neutrons [6]. The expected result should be the same, since the neutron cannot distinguish whether the flipper is physically rotated or the magnetic field is rotated. To describe the interaction of the spin of a free neutron with a magnetic field rotating with angular velocity $\Omega$, the Pauli-Schrödinger equation has to be solved for a particle propagating in the $z$-direction in a uniformly rotating magnetic field given by $\mathbf{B}(t) = B_0(\cos\Omega t,\ \sin\Omega t,\ 0)$ (see here for details of the derivation). The Pauli equation in a rotating magnetic field for plane waves is analytically solvable, and in this particular case the spin part of the solution is given by $\chi(t) = e^{-i\Omega\sigma_z t/2}\,e^{\,i(\omega_L\sigma_x + \Omega\sigma_z)t/2}\,\chi(0)$, where the effective rotation vector $\boldsymbol{\omega}_{\mathrm{eff}} = (\omega_L, 0, \Omega)$ generates the rotation of the initial spin state. The square of the magnitude of the rotation vector is given by a Pythagorean equation, $\omega_{\mathrm{eff}}^2 = \omega_L^2 + \Omega^2$, where we used the definition of the Larmor frequency, $\omega_L = 2\mu B_0/\hbar$. The solution from above contains the coupling term proportional to $\boldsymbol{\Omega}\cdot\mathbf{S}$ and describes the evolution of the spin states for both the rotating and the static field ($\Omega = 0$).

1. Georges Sagnac, C. R. Acad. Sci. (Paris) 157, 708 (1913). ↩
2. Bahram Mashhoon, Richard Neutze, Mark Hannam, Geoffrey E. Stedman, Phys. Lett. A 249, 161 (1998). ↩
3. Bahram Mashhoon, Phys. Rev. Lett. 61, 2639 (1988). ↩
4. S. A. Werner, J.-L. Staudenmann, and R. Colella, Phys. Rev. Lett. 42, 1103 (1979). ↩
5. D. K. Atwood, M. A. Horne, C. G. Shull, and J. Arthur, Phys. Rev. Lett. 52, 1673 (1984). ↩
6. Bahram Mashhoon and Helmut Kaiser, Physica B 385, 1381 (2006). ↩
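To get a feeling for the magnitudes involved, here is a small numerical sketch using the two formulas above for the semicircular-path geometry. The interferometer radius, neutron wavelength, and rotation rate are assumed, illustrative values (Earth's rotation and a few-centimetre device), not the parameters of the cited experiments:

```python
import numpy as np

hbar = 1.054571817e-34    # J s
m_n  = 1.67492749804e-27  # neutron mass, kg
lam  = 2.0e-10            # assumed neutron wavelength, m
r    = 0.03               # assumed interferometer radius, m
Omega = 7.292e-5          # Earth's rotation rate, rad/s

v = 2*np.pi*hbar/(m_n*lam)        # group speed h/(m lambda), ~2000 m/s
A = np.pi*r**2                    # area enclosed by the two semicircular paths

phi_sagnac = 2*m_n*Omega*A/hbar   # Sagnac phase, (2m/hbar) Omega.A
phi_spin   = np.pi*r*Omega/v      # spin-rotation phase, Omega * transit time
print(f"Sagnac phase       : {phi_sagnac:.2e} rad")
print(f"spin-rotation phase: {phi_spin:.2e} rad")
# the ratio reproduces lambda-bar/(2r), i.e. ~1e-10 for these numbers
print(f"ratio              : {phi_spin/phi_sagnac:.2e}")
```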
Journal of Computer Chemistry, Japan – International Edition, 2019, Volume 5
General Paper: Revisiting the Nature of Si-O-Si Bridging

Non-empirical calculation based on the Schrödinger equation is an appropriate tool for investigating the relationship among the Si-O-Si angle, Si-O bond length, Si-O bond strength, and electronic structure. However, past studies could not reach a consensus about the equilibrium structure of the C2v pyrosilicic acid molecule. Moreover, the structure of disiloxane, the simplest siloxane molecule, could not be reproduced using non-empirical molecular orbital calculations. In this study, I checked the reproducibility of various model chemistries and basis sets, and found that employing a post-Hartree-Fock method and a larger basis set (at least aug-cc-pVTZ) is necessary for an accurate calculation of the disiloxane molecule. In contrast to past molecular orbital studies, the present study reveals no significant occupancy of the Si 3d orbitals. The total energy landscape of the C2v pyrosilicic acid molecule is calculated using the coupled-cluster method including up to triple excitations with the aug-cc-pVTZ basis set. The stable bond length for Si-Obr is 1.604 Å, and the stable Si-O-Si angle is 159.449°. For each bond length, the energy varies gently with the angle around the stable angle, compared with its variation along the bond-length direction. The stable angle for each bond length decreases with increasing Si-Obr bond length. The weakening of the Si-Obr bond with decreasing Si-O-Si bond angle can be explained by the decrease in the bond index and the increase in the orbital energy of the Si-Obr σ-bond. Consequently, hybridization of the valence electrons of the bridging oxygen with decreasing Si-O-Si angle weakens the Si-Obr σ-bond. Electrostatic potential favors a straight configuration because of the repulsion between the SiO4 tetrahedra, while the valence electrons of the bridging oxygen favor a bent configuration. These two competing behaviors can explain the bent configuration of pyrosilicic acid without considering d-p π bonding.

© 2019 Society of Computer Chemistry, Japan
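The kind of angle scan described in the abstract can be sketched with an open-source quantum chemistry package. The following is a hypothetical, illustrative setup, not the author's actual input: the disiloxane geometry is an idealized construction with assumed round-number bond lengths and tetrahedral angles, and the basis is kept deliberately small, since CCSD(T)/aug-cc-pVTZ as used in the paper is expensive.

```python
import numpy as np
from pyscf import gto, scf, cc

# Assumed, idealized structural parameters (not the paper's optimized values).
R_SIO, R_SIH, HSIO = 1.63, 1.48, np.radians(109.47)

def sih3(si, u):
    """Three H atoms around Si; u is the unit vector from O toward Si."""
    e1 = np.cross(u, [0.0, 1.0, 0.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)
    hs = []
    for phi in (0.0, 2*np.pi/3, 4*np.pi/3):
        # Si-H at ~109.47 deg from the Si-O bond
        d = np.cos(np.pi - HSIO)*u + np.sin(np.pi - HSIO)*(np.cos(phi)*e1 + np.sin(phi)*e2)
        hs.append(si + R_SIH*d)
    return hs

def disiloxane(angle_deg):
    """H3Si-O-SiH3 with the requested Si-O-Si angle; O at the origin."""
    a = np.radians(angle_deg)/2
    atoms = [("O", (0.0, 0.0, 0.0))]
    for s in (+1.0, -1.0):
        u = np.array([s*np.sin(a), 0.0, np.cos(a)])
        si = R_SIO*u
        atoms.append(("Si", tuple(si)))
        atoms += [("H", tuple(h)) for h in sih3(si, u)]
    return atoms

for angle in (130.0, 145.0, 160.0, 175.0):
    mol = gto.M(atom=disiloxane(angle), basis="cc-pvdz", unit="Angstrom", verbose=0)
    mf = scf.RHF(mol).run()
    mycc = cc.CCSD(mf).run()
    e = mycc.e_tot + mycc.ccsd_t()     # CCSD(T) total energy
    print(f"Si-O-Si = {angle:5.1f} deg   E = {e:.6f} Eh")
```

A scan like this, with the geometry relaxed at each angle and the basis enlarged toward aug-cc-pVTZ, is the shape of the calculation whose landscape the abstract describes.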
None of the following criteria are valid:

• Partial differential equations: Both the Maxwell equations and the Schrödinger equation are PDEs, but the first model is clearly classical and the second one is not. Conversely, finite-dimensional quantum systems have ordinary differential equations as equations of motion, so the latter are not restricted to classical systems only.

• Complex numbers: You can use those to analyse electric circuits, so that's not enough. Conversely, you don't need complex numbers to formulate standard QM (cf. this PSE post).

• Operators and Hilbert spaces: You can formulate classical mechanics à la Koopman-von Neumann. In the same vein:

• Dirac-von Neumann axioms: These are too restrictive (e.g., they do not accommodate topological quantum field theories). Also, a certain model may be formulated in such a way that it's very hard to tell whether it satisfies these axioms or not. For example, the Schrödinger equation corresponds to a model that does not explicitly satisfy these axioms; only when formulated in abstract terms does this become obvious. It's not clear whether the same thing could be done with e.g. the Maxwell equations. In fact, one can formulate these equations as a Dirac-like equation $(\Gamma^\mu\partial_\mu+\Gamma^0)\Psi=0$ (see e.g. 1804.00556), which can be recast in abstract terms as $i\dot\Psi=H\Psi$ for a certain $H$.

• Probabilities: Classical statistical mechanics also deals with probabilistic concepts. Also, one could argue that standard QM is not inherently probabilistic, but that probabilities are an emergent property due to the measurement process and our choice of observable degrees of freedom.

• Planck's constant: It's just a matter of units. You can eliminate this constant by means of the redefinition $t\to \hbar t$. One could even argue that this would be a natural definition from an experimental point of view, if we agree to measure frequencies instead of energies. Conversely, you may introduce this constant in classical mechanics by a similar change of variables (say, $F=\hbar\tilde F$ in the Newton equation). Needless to say, such a change of variables would be unnatural, but naturalness is not a well-defined criterion for classical vs. quantum.

• Realism/determinism: This seems to depend on interpretations. But whether a theory is classical or quantum mechanical should not depend on how we interpret the theory; it should be intrinsic to the formalism.

People are after a quantum theory of gravity. What prevents me from saying that General Relativity is already quantum mechanical? It seems intuitively obvious that it is a classical theory, but I'm not sure how to put that intuition into words. None of the criteria above is conclusive.

• I've removed some comments which didn't seem to be intended to request clarifications or suggest improvements. – David Z Apr 4 '18 at 21:44
• Note that the appropriate answer to this question depends quite heavily on whether you mean "what distinguishes quantum theories from classical theories specifically", or "what distinguishes quantum theories from other theories in general" - for example, the class of what are often referred to as generalized probabilistic theories, which include classical, quantum and many other theories besides.
In this latter class, classical theories are distinguished by many properties, and so the lack of any of these tells us we are dealing with a non-classical theory - but not necessarily a quantum one. – Robin Saunders Apr 5 '18 at 0:19
• @RobinSaunders Hmm, that's actually a very good point, I like the way you put it. If you ever have some free time, please consider making that comment into an answer. Cheers! – AccidentalFourierTransform Apr 8 '18 at 2:11
• I'm interested in why you say TQFT does not fit within the Dirac-von Neumann axioms. It's true that those axioms don't tell you much about the structure of the theory, but it's not really different for any QFT, for which there is a Hilbert space associated to any spatial manifold. I'd say those axioms are insufficiently strong, rather than being too restrictive. – Holographer Apr 8 '18 at 17:07

10 Answers

I think this is a subtle question and I think it depends somewhat on how you choose to represent quantum mechanics. To see one extreme of this, consider the viewpoint put forth by Kibble in [1]. For simplicity I will be thinking of finite-dimensional quantum systems here; there are some subtleties in infinite dimensions but as far as I know the basic picture still holds. In this, he shows that if we describe the theory in terms of physical states (rays in the Hilbert space), then the dynamics of Schrödinger evolution correspond exactly to Hamiltonian evolution via the symplectic form from the Kähler structure on the projective Hilbert space (which is to say, the evolution is that of a classical system). However there are two distinctions which make quantum mechanics different from classical mechanics:

• The phase space must be a projective Hilbert space (as opposed to just a symplectic manifold), and the Hamiltonian is restricted to being a quadratic form in the homogeneous coordinates on projective space. In classical mechanics any (sufficiently smooth) function is admissible as a Hamiltonian.

• Composite systems are described differently. In classical mechanics the phase space of a composite system is the Cartesian product of the phase spaces. In quantum mechanics, it is the Segre embedding (which descends from the tensor product of Hilbert spaces). This is parametrically different; if the phase spaces of the two subsystems have dimensions $2m$ and $2n$, then in classical mechanics the composite system has dimension $2m+2n$, whereas in quantum mechanics it has dimension $2(n+1)(m+1)-2$. The extra states are the entangled states. Virtually all the observable consequences of QM come in here, e.g. Bell inequalities. Of course, if we consider identical particles things get even a bit more complicated.

If you ignore the second point, and focus only on a single quantum system, the surprising conclusion is that every quantum mechanical system is a special case of classical mechanics (with the provision that again I haven't checked the details in infinite dimensions, but it is at least morally true). However, part of the structure of quantum mechanics is how it describes composite systems, so you can't just ignore this second point. A mathematician would say that this gives an injective functor from the category of quantum mechanical theories to the category of classical theories which is not compatible with the symmetric monoidal structures on the two.

I want to point out that this is emphatically not how we typically think of the correspondence principle in quantum mechanics.
That is, it is a mapping from a finite-dimensional quantum mechanical system to a finite-dimensional classical system (of the same dimension). Normally, if we think about e.g. a free particle in one dimension, the Hilbert space for that quantum system is infinite-dimensional, yet it corresponds to a 2-dimensional classical phase space. But the point is that, at least in this question, we can't restrict to the ordinary notion of correspondence, since we don't have a physical interpretation for the system of equations describing the theory.

Additionally, despite the above example, whether a theory is classical or quantum has essentially nothing to do with where the states live. Indeed, if we just want to consider a free particle in one dimension again, we would typically describe its state as a self-adjoint, trace-class, unit-trace operator $\hat \rho$ on the Hilbert space $L^2(\mathbb R)$. In contrast, in classical mechanics we would describe a state as a probability distribution $\rho$ on the phase space $\mathbb R^2$ (note that in the above example we had only pure classical states, i.e. only those described by a $\delta$ function on the phase space, whereas now we have mixed states). However, we could just as easily describe the quantum state by its Wigner function, in which case it lives in exactly the same affine space as the classical distribution. However, the Wigner function satisfies slightly different inequalities than the classical probability distribution; in particular it can be slightly negative and cannot be too positive. The details of this were first worked out in [2]. In this case, it is the dynamics that give away the quantum nature. Specifically, to go from classical to quantum mechanics, we must replace the Poisson bracket by the Moyal bracket (which has $O(\hbar^2)$ corrections), indicating the failure of Liouville's theorem in the phase space formulation of quantum mechanics: (quasi)probability density is not conserved along trajectories of the system.

All of this is to say that it seems difficult (and maybe impossible) to find a single distinguishing feature between classical and quantum mechanics without considering composite systems, so if that is what you want, I'm not sure I have an answer. If you do allow for composite systems, though, it is a pretty unambiguous distinction. Given this, it is perhaps not surprising that all the experimental tests we have which demonstrate that the world is quantum and not classical are based on entanglement.

[1]: Kibble, T. W. B. "Geometrization of quantum mechanics". Comm. Math. Phys. 65 (1979), no. 2, 189-201.
[2]: H.J. Groenewold (1946), "On the Principles of elementary quantum mechanics", Physica 12, pp. 405-460.

As far as I know, the commutator relations make a theory quantum. If all observables commute, the theory is classical. If some observables have non-zero commutators (no matter if they are proportional to $\hbar$ or not), the theory is quantum.

Intuitively, what makes a theory quantum is the fact that observations affect the state of the system. In some sense, this is encoded in the commutator relations: the order of the measurements affects their outcome; the first measurement affects the result of the second one.

• I think this answer is on the right track. In quantum mechanics, the transfer of information is intrinsically tied to the dynamics of the system, whereas in classical physics that is not the case.
– DanielSank Apr 4 '18 at 20:16
• I would agree with this. It was my answer also but I came too late. So, in any situation, what exactly is quantum is best shown in experiments of the Stern-Gerlach type. If you measure in the x direction you get + and -, or spin up or down, but if you measure in y, you get spins in that direction. If you measure first in x, then in y, you get as a result a y direction; but if you measure in x, then again in x, you get only x... – Žarko Tomičić Apr 4 '18 at 20:17
• I would say on the contrary that observations affect the state of a classical system, where everything is physical. – Bill Alsept Apr 4 '18 at 20:55
• In MWI, observations don't affect the state of the system in some mysterious way. Rather, you should consider the composite Hilbert space describing both the system and the measuring device (large-dimensional Hilbert space). A measurement is a time-dependent interaction and in the measurement limit you produce a fully entangled state between the two. If you compute the reduced density matrix for the system of interest, you get a diagonal matrix of the probabilities. The point being that "observations affect the state of the system" is arguably really a statement about composite systems. – Logan M Apr 4 '18 at 21:07
• The commutator is just a way of talking about measuring one observable and then another vs. doing it in the other order. The way to say it is that a classical theory is one where conditional probabilities form a distribution. – Ryan Thorngren Apr 6 '18 at 1:41

Frame challenge: I think the question is based on a misleading premise.

While there are a number of characteristics typical of quantum theories as opposed to classical theories - some you've already listed in the question, and others have been suggested in the existing answers - there's no particular reason to expect there to be a single unambiguous rule that categorizes any arbitrary theory as either quantum or classical. Nor is there any particular need for such a rule.

You give the example of quantum gravity. However, the reason we want a quantum theory of gravity is not because it has the tag "quantum" attached to it, as if it were a handbag that would not be adequately fashionable without the correct label, but because we want it to be able to answer certain questions about reality which we already know General Relativity can't answer.

In short, don't worry about whether the theory is "quantum" or not - worry about whether it answers the questions you want answered or not. Also relevant.

Addendum: the same goes for the existing theories, of course. We don't like the Standard Model because it is quantum. We like it because it works.

• @JerrySchirmer, that's not really what this question asks, though. – Harry Johnston Apr 5 '18 at 19:37
• It asks "what is it about a theory that makes it 'quantum'". And the answer would be "we apply quantization to some classical theory". – Jerry Schirmer Apr 5 '18 at 20:43
• @JerrySchirmer, that's one possible answer, certainly. But I think the OP is asking for criteria that are based directly on the mathematical characteristics of a particular model, rather than on how the model was developed.
(And I think in practice that, if presented with a theory with characteristics similar to other quantum theories, most physicists would call it a quantum theory regardless of whether it was derived from a classical model or not.) – Harry Johnston Apr 5 '18 at 21:20
• ... incidentally, unless I've overlooked something, none of the existing answers mention quantization as a possible criterion, so you might want to post that as an answer @JerrySchirmer – Harry Johnston Apr 5 '18 at 21:23
• All that said, if I had to choose one feature that was the most important characteristic of quantum theories, I'd have to endorse Photon's answer. – Harry Johnston Apr 6 '18 at 23:06

TL;DR: Correlations.

First things first: since the OP asks for a criterion to tell whether a model is quantum mechanical, the answer has to involve observables. After all, if you could rewrite your "quantum" model as a "classical" model, those labels would not be worth much. Furthermore, all quantum theories (that I know of) are probabilistic; therefore this answer focuses on probabilistic observables, i.e. correlation functions.

The fundamental difference between a quantum theory and a classical theory is their correlation structure. That is, quantum theories can show correlations that classical theories cannot. The historically first and simplest example of this is Bell's inequality. By now there are many such inequalities for all kinds of observables, a frequently used one being the CHSH inequality. In general these inequalities set bounds on correlation functions that cannot be violated by a classical probability theory, where the latter can be made precise (see below). Quantum probability theories can violate some of these inequalities, which makes them intrinsically different. (A numerical sketch of the CHSH case is given at the end of this thread.)

Interestingly, there are also theories that have correlations even stronger than in quantum theory. These are known as Popescu-Rohrlich boxes, and they have been shown to allow maximal violation of the so-called Tsirelson bound, another inequality which is, however, fulfilled by quantum theory.

Making these statements precise (on the level of probability distributions on a space of observables) is a whole field. Some references:

1. One can try to uniquely single out quantum theory as a 'special' probability theory by starting from certain information-theoretic postulates: https://arxiv.org/abs/1203.4516
2. So-called 'loophole free' Bell tests have shown that we live in a world that violates classical probability theory (even though some people will argue against that): https://www.nature.com/articles/nature15759
3. A nice presentation about the ideas mentioned above by a guy who (unlike me) actually knows what he is talking about: http://www.math.umd.edu/~diom/RIT/QI-Spring10/ClassvsQuantInfo.pdf

Here is an experimentalist's answer:

A mathematical system, either of algebraic or of differential equations, has axioms and theorems and is self-contained and self-consistent. A physics theory is a subset of a mathematical system that is defined by imposing extra axioms, called laws or postulates, which are necessary by construction in order to pick, from the overall mathematical set, those solutions which fit data, i.e. measurements and observations.
Classical theories are those that use classical laws, such as: Newton's laws for mechanics, the set of laws of electricity and magnetism unified in Maxwell's equations, the thermodynamic laws (and maybe more). Quantum theories are the ones obeying quantum mechanical laws, i.e. the postulates of quantum mechanics, no matter the mathematical formulation. In order to fit the data and observations, the quantum mechanical postulates were necessary, and this is what distinguishes classical from quantum, IMO.

Edit after comments: In your list:

This was the first time I met Topological Quantum Field Theories (TQFT). (Such introductions are one of the reasons I follow this site - to get whiffs of new-to-me physics.) The gauge is whether this set of theories fits data and predicts measurements. In axiomatic mathematical theories, theorems can be set up as axioms, and then the former axioms have to be proven as theorems, for a self-consistent theory. Usually the axioms are chosen as the simplest expression from a set of consistent theorems. Since TQFTs fit data and are predictive of quantum states, it is necessary that from the axiomatic postulates of TQFT one should be able to derive the postulates of quantum mechanics (possibly by a very complicated mathematical method). The Wikipedia article on TQFT seems to indicate this. This is necessary for a theory to be quantum, IMO. I.e. it is the postulates that connect measurements to the mathematical formulas, by construction.

• +1 Thank you for the answer, but I'm not convinced. As I said in the OP, the postulates of QM are too restrictive. There are systems that we deem quantum-mechanical, yet they fail to satisfy these axioms. For example, topological quantum field theories (which have their own set of axioms). – AccidentalFourierTransform Apr 7 '18 at 19:47
• These topological theories, do they fit any data? If they fit the data, then this just means that some of the postulates (linked above) of quantum mechanics can be relaxed/ignored. Otherwise, as when theorems in axiomatic mathematics can be turned into axioms, they become theorems. Or are they just a science-fiction game with mathematics? – anna v Apr 8 '18 at 3:12
• Wow, that's a very condescending comment. Just because you don't find them useful does not make them "science fiction games". Wow, just wow. I really didn't expect that attitude from you... – AccidentalFourierTransform Apr 8 '18 at 3:13
• Sorry, I have edited a bit. This then must mean that the usual postulates are turned into theorems. What I am trying to say is that it is data that is the decisive factor, fits and predictions. And that the mathematics must be consistent. – anna v Apr 8 '18 at 3:18
• +1 for a very good point: "Quantum theories are the ones obeying quantum mechanical laws, i.e. the postulates of quantum mechanics, no matter the mathematical formulation." – AlQuemist Apr 9 '18 at 8:24

I would say that something intrinsically quantum is the way in which probabilities and the function which obeys the partial differential equation are related. As you note, both interference and probabilities are present in classical theories. What's new are probability amplitudes, where interference leads to a suppression of probabilities which is not possible in classical theories.
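A minimal sketch of that suppression, assuming two equal-weight paths recombined at a 50/50 beam splitter (the amplitudes and phases are illustrative choices):

```python
import numpy as np

# Adding amplitudes vs adding probabilities for a two-path experiment.
a1 = np.sqrt(0.5)                                  # amplitude of path 1
for phase in (0.0, np.pi/2, np.pi):
    a2 = np.sqrt(0.5) * np.exp(1j*phase)           # amplitude of path 2
    p_quantum = abs((a1 + a2)/np.sqrt(2))**2       # one output port of a 50/50 splitter
    p_classical = 0.5*(abs(a1)**2 + abs(a2)**2)    # no interference term
    print(f"phase={phase:4.2f}: quantum={p_quantum:4.2f}, classical={p_classical:4.2f}")
# at phase = pi the quantum probability is fully suppressed (0.00),
# while the classical mixture stays at 0.50 for every phase
```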
For the finite-dimensional case, there's also Lucien Hardy's proposal "Quantum Theory From Five Reasonable Axioms" (https://arxiv.org/abs/quant-ph/0101012). There, the distinguishing factor between quantum theory and classical probability theory is that "there exists a continuous reversible transformation on a system between any two pure states of that system." Another reference along similar lines is Chapter 9 of Scott Aaronson's book "Quantum Computing since Democritus".

• Isn't interference of probabilities basically how we express wave-particle duality mathematically? – asmaier Apr 24 '18 at 13:06
• I am not sure what you are getting at. First, there is no interference of probabilities but only of probability amplitudes, and second, sure, the physical phenomenon of wave-particle duality is related to this mathematical mechanism. – Marc Apr 25 '18 at 14:15

tl;dr: Erm... You do.

Say you cook up a model about a physical system... Equations do not exist by themselves; they always have a surrounding. The head is assumptions and the tail usually describes limitations of said mathematical model. So really, it is up to your interpretation of the question at hand OR the data available to you, that can consistently (deterministically?) predict if a theory is "quantum". Conversely, if you do not have a head and tail, you can make a lot of cases about what an equation is talking about but can't say anything concretely.

All the answers here are inspiring, and frankly sexy, but take time to consider my rudimentary examples below.

This way of thinking - "what characteristic of an equation predicts its applicability in <name of physics branch>" - is a misuse of mathematics. Maths is, perhaps, the ultimate, but we must remember that in physics we use it as a tool. My illustration below might seem childish, but please consider the following equations.

Equation 1: $$ x^2 + x - 6 = 0 $$

Equation 2: $$ 2x + 5y = 20 $$

Just looking at these, a mathematician can happily say that

• Equation 1
  • has two solutions, +2 and -3, and
  • the curve is upward facing, with maxima at x = -0.5
• Equation 2
  • has a slope of -0.4
  • has intercepts 4 and 10
  • has infinitely many ordered pairs (x, y) satisfying the equation
  • describes a curve that encloses the origin

And we would all agree with the above points. But the wise physicist stays mum, because s/he knows that these equations aren't just scribblings of some dyslexic Vulcan but are models of something; they represent something or some phenomena. So a physicist agrees with the mathematician but doesn't come to a conclusion. Let us look at the questions which led us to these equations.

Question 1: The product of a quantity and one more than itself is 6. Find the value of this quantity if
a. the quantity is money lent
b. the quantity is time

Question 2: Two times the number of my sons plus five times the number of my daughters always equals two times the number of appendages a normal person has on their hands. How many sons and daughters do I have?

Now, I hope you have an aha! moment. The answer to Q1b is just +2, because time cannot be negative (we've all solved such questions as kids), and the answer to Q2 can be quite surprising - 5 sons and 2 daughters - because physicists are good people and don't make fractional or negative children. Did you see that - one equation, two variables, and we still get a unique answer - constraints.
So the mathematician (the equation) and the physicist (the big picture) are both correct where they stand. But the physicist wins, because

• we are at physics.stackexchange.com
• math in itself is very strong, pure, almost unpalatable; we need both the background information and the constraints to understand what this wonderful tool is trying to tell us through equations.

On a serious note, I'd like to point out that there's probably no (respectable) book on classical physics which teaches F = ma without first explicitly and clearly stating the following:

• the assumptions required, e.g. frictionless surfaces and perfectly rigid bodies
• Newton's three laws of motion (word by word)
• that dF = d(m·v), which can be simplified if mass is (almost) constant
• and most importantly, the fact that the objects we are dealing with are not of super-tiny scale, i.e. larger than 10⁻⁹ m in diameter.

Authors don't do this for pedagogy - most 9th-grade students wouldn't give a damn about rigidity - but in fact they do it because these statements are necessary for the equation/theory to work.

Trying to predict if an equation describes a quantum thingy is a discussion-based question at best, or meta-math. To the OP specifically: if you are an inventor working on something like a GUT (why else would you have an equation whose origin you do not know) and you are curious whether it applies equally well to big and small bodies - apply constraints. I do not have the mathematical foresight, but logically I can say that variations in constraints will define the way the system behaves for quantum and classical bodies.

In Thinking, Fast and Slow there's a chapter which illustrates that we have a tendency to support what is popular/fancy rather than what is correct/plausible. I think the question is primarily opinion-based.

• Apropos of equation 1, a mathematician would perhaps say minimum rather than maxima (sic). – Deepak Apr 9 '18 at 15:50

Physical models are determined by their lattice of events. The set of physical events forms an algebraic lattice with two binary operators that serve as the OR and AND between events. We assume the lattice of events to be sigma-additive and orthomodular. We call this lattice the logic of the model. In this sense events are the elements of the logic. System states are probability measures over this algebra. Physical quantities are mappings between statements on measurements of a quantity (think of Borel sets of the reals) and the logic.

The logic of a classical model is isomorphic to a set algebra, so it is distributive (a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and vice versa) and fully atomic. The logic of a quantum model is isomorphic to the lattice of the subspaces of a Hilbert space and therefore is not distributive, but it is also fully atomic.

The above alone is sufficient to explain many features associated with quantum models, including

• real-valued physical quantities can be represented as self-adjoint operators
• commutation relations
• superposition of states
• the Schrödinger equation

• Can you please add some references? I think the answer could benefit from that. – Kiro Apr 7 '18 at 7:35

TL;DR: Wave-particle duality

I want to answer this question from a historical perspective: according to our current understanding, a quantum theory shows features of both classical mechanics and electrodynamics (e.g. light) at the same time.
The first person to notice such a connection between mechanics and the theory of light was Hamilton. He developed Hamiltonian optics, which described light as a particle (aka corpuscle). Theorists soon recognized that Hamiltonian optics cannot account for light phenomena like interference, diffraction, and polarisation. They realized that Hamiltonian optics is only an approximation, which works well as long as the wavelength of light is much smaller than the measurement apparatus (e.g. for geometrical optics based on light rays and lenses). Nevertheless, the language of Hamiltonian optics worked perfectly to describe classical mechanics, which is now commonly known as Hamiltonian mechanics.

Maxwell's field theory of electrodynamics was a more correct description of light, but then came Planck and Einstein. They showed that to describe black-body radiation and the photoelectric effect it was necessary to assume that light cannot be a field with infinite divisibility (i.e. continuity), as assumed in Maxwell's wave theory of light. Rather, light must consist of countable entities they called "quanta". But this theory was ad hoc and not consistent with special relativity. (Note: the consistent version is Quantum Electrodynamics.) Although immature, the Planck and Einstein explanation of these phenomena was the first quantum theory, because it showed (or better, assumed) wave-particle duality. (Note: quantisation doesn't mean going from a wave theory of light back to a corpuscle theory like Hamiltonian optics. Rather, it combines features of waves and particles.)

The crazy genius of de Broglie and Schrödinger was needed to apply this theory in the opposite direction - to particles. They noticed that if Maxwell's wave theory of light must be extended to contain quanta/particles, then classical theory (which consists only of particles) must be extended to produce the features of waves. They saw that classical theory could be an approximation like Hamiltonian optics, valid only for short wavelengths. Thus, Schrödinger developed wave mechanics not by postulating quanta, but by reversing the approximations necessary to go from Maxwell's theory of light to Hamiltonian optics. In opposition to electrodynamics, classical mechanics needed to be "wavized" to become a complete theory showing wave-particle duality. (Note: here again, quantisation is not going from a particle theory to a complete wave theory of infinite divisibility; rather, it combines features of both worlds.)

So, a theory is "quantum" when it integrates/combines the features of both waves and particles. A classical theory is either only waves/fields or only particles.

Regarding the quantisation of General Relativity, it is instructive to compare this classical field theory with another classical field theory, namely fluid dynamics. What both theories have in common is their high non-linearity. Both can only be quantised if they are linearized first. If one linearizes fluid dynamics, one gets the equation for sound waves. If one linearizes the equations of GR, one gets the equations of gravitational waves. If one quantizes the equation of sound waves, one gets phonons. If one quantizes gravitational waves, one gets gravitons. Again, both gravitons and phonons show wave-particle duality. But in both cases, we need to linearize our theory first to be able to quantize it. (Note: phonons only exist in solids. Gravitons might also only exist in "solid" space-time.)
I'm astonished that nobody appears to mention that a quantum theory describes quantities which have discrete values. All quantities which appear continuous on the macroscopic level can only take on discrete values in a quantum theory. The differences are "communicated" by "particles" (photons etc.). That's the heart of a quantum theory. Describing the states and interacting particles has not been achieved, or has only been tentatively achieved, for gravitation.

• -1 This answer is basically incorrect; in particular, "All quantities which appear continuous on the macroscopic level can only take on discrete values in a quantum theory". – AlQuemist Apr 9 '18 at 14:32
• @PeterA.Schneider No, that's a very simplistic view of classical mechanics (and physics in general): a single system always has an infinite number of different descriptions, some of which are typically more accurate than others. It's turtles all the way down: you can always add more levels of sophistication to a certain model. In this sense, speaking of a "coin" is not meaningful: you have to decide which degrees of freedom you want to study (only heads/tails? or also its final temperature? what about any possible deformation due to the impact?) (1/2) – AccidentalFourierTransform Apr 9 '18 at 16:52
• (2/2) At some point you truncate the problem, and pick a certain finite set of degrees of freedom. Once you do this, you should be able to decide whether the model is classical or quantum-mechanical independently of other "more sophisticated" models. The binary model is consistent in and of itself, independently of more accurate descriptions. It is a valid model, and complete as far as the degrees of freedom we chose to describe are concerned. Whether there is a Newtonian description that is more accurate is completely irrelevant. FWIW, I appreciate your answer anyway, and I upvoted it. – AccidentalFourierTransform Apr 9 '18 at 16:54
• @PeterA.Schneider Take a guitar string or some other resonating system - you get discrete results. – Arvo Apr 10 '18 at 12:03
• As with @Arvo I immediately glanced at classical standing waves. As with quantum systems their discreteness comes from the application of boundary conditions. As with quantum systems they are a steady-state effect, and you can observe results that don't meet the quantization condition in the immediate aftermath of disturbing the system. – dmckee --- ex-moderator kitten Apr 11 '18 at 19:01
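Picking up the forward reference from the correlations answer above, here is a minimal numerical check (the measurement angles are the standard optimal choice, the state is the two-qubit singlet) that the quantum CHSH value reaches Tsirelson's bound $2\sqrt{2}$, beyond the classical limit of 2:

```python
import numpy as np

# CHSH for the singlet state: local hidden variables give |S| <= 2,
# the quantum singlet reaches 2*sqrt(2).
sx = np.array([[0, 1], [1, 0]]); sz = np.array([[1, 0], [0, -1]])
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)     # (|01> - |10>)/sqrt(2)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta)*sz + np.sin(theta)*sx

def E(a, b):
    """Correlator <A(a) B(b)> in the singlet state; equals -cos(a-b)."""
    return np.real(singlet @ np.kron(spin(a), spin(b)) @ singlet)

a0, a1, b0, b1 = 0.0, np.pi/2, np.pi/4, -np.pi/4   # optimal CHSH angles
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S), 2*np.sqrt(2))                        # 2.828... = 2.828...
```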
The attosecond dynamics underlying the photoelectric effect

In 1882, Heinrich Hertz devoted himself to the study of electromagnetism, including the recent and still generally unappreciated work of Maxwell. Two years later he began his famous series of experiments with electromagnetic waves. During the course of this work, Hertz discovered the photoelectric effect, which has had a profound influence on modern physics. The effect, as such, is easy to describe: when a substance (typically, a metal) is exposed to electromagnetic radiation, it liberates electrons. The number of electrons emitted depends on the intensity of the radiation. The kinetic energy of the emitted electrons depends on the frequency of the radiation. And this is it.

But the explanation is far from trivial. Actually, the explanation of the photoelectric effect was the major work cited in the award to Albert Einstein of the Nobel Prize in Physics in 1921. Einstein's explanation, proposed in 1905, played a major role in the development of atomic physics. He based his theory on a daring hypothesis, for few of the experimental details were known in 1905. Moreover, the key point of Einstein's explanation contradicted the classical ideas of the time. He assumed that the energy of light is not distributed evenly over the whole expanding wavefront (as the classical theory assumed). Instead, the light energy is concentrated in separate "lumps." In addition, the amount of energy in each of these regions is not just any amount, but a definite amount of energy that is proportional to the frequency f of the light wave.

Besides its theoretical and historical importance, the applications of the photoelectric effect are everywhere: from night-vision devices to automatic doors or solar panels. And it still has fundamental importance in the study of matter. The analysis of photoemission has become the standard method for the investigation of occupied electronic states, for isolated atoms and molecules and in condensed matter systems, including samples of demanding complexity.

The timing of the photoemission process, i.e., the time elapsing between photon absorption and electron emission, is experimentally accessible since laser sources producing synchronized attosecond extreme-ultraviolet (XUV) pump pulses and few-cycle near-infrared (NIR) waveforms are available, and its investigation considerably enhances our understanding of fundamental light-matter interactions. Experiments yield relative delays between photoelectrons from different initial states, which by comparison with appropriate reference states and theory can be turned into absolute values with high precision. For isolated particles, particularly light atoms, this time interval, which is commonly denoted as the (absolute) delay, can be calculated with attosecond (10⁻¹⁸ s) precision. Yet, for condensed matter systems, theoretical investigations are much more demanding and are mostly limited to one-dimensional model systems, or approximations using semiclassical treatments are used. As a matter of fact, suitable high-precision references cannot always be obtained easily and ambiguities may remain.
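The energy bookkeeping behind Einstein's explanation is simple arithmetic. With illustrative, assumed numbers (a 400 nm photon hitting a metal with a 2.3 eV work function $\phi$):

$$E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV nm}}{400\ \text{nm}} = 3.1\ \text{eV}, \qquad E_{\max} = E_{\text{photon}} - \phi = 3.1 - 2.3 = 0.8\ \text{eV}.$$

The frequency sets the maximum kinetic energy per electron; the intensity only sets how many such photons, and hence electrons, there are.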
Now, in order to resolve these limitations for condensed matter systems, a team of researchers introduces ¹ a holistic study of the dynamical photoemission process by rigorously combining time- and energy-resolved attosecond streaking spectroscopy with conventional high-resolution spectroscopy and numerical calculations based on the solution of the time-dependent Schrödinger equation, over a broad excitation region spanning from the XUV into the soft-x-ray region.

For prototypical solid materials, XUV radiation penetrates deeply, whereas the NIR radiation is reflected or refracted at the surface. Thus the probing of the photoelectrons starts as they traverse the substrate-vacuum boundary, not at their sites of generation. Photoemission delay due to electron transport inside the material dominates at larger kinetic energy, where atomic delays are shown to become small. Inelastic and elastic scattering limit the escape depth to the outermost atomic layers. In general, resonant photoemission, which involves transitions between bulk bands, allows a larger escape depth and therefore larger delays than nonresonant photoemission, whose transitions go into waves that do not propagate in the bulk.

Until now, no study comparing energy, angle and time resolution and covering resonant and nonresonant photoemission from surface and bulk valence states and from inner-shell states had been performed. This new one covers most of the band-structure-induced effects, especially those related to the initial state, and elucidates the opposing actions of surface localization, e.g., of a surface state compared to bulk bands, and of the extended electron escape depth on resonance.

Experimental geometry of attosecond and synchrotron photoemission experiments: XUV radiation and NIR radiation are focused on a Mg(0001) surface.

The researchers report measurements of the temporal dynamics of the valence band photoemission from the magnesium (0001) surface and link them to observations of high-resolution synchrotron photoemission and to numerical calculations of the time-dependent Schrödinger equation using an effective single-electron model potential. They find that both the observed resonant enhancement and the attosecond retardation of the photoemission are directly linked to the spatial structure of the delocalized sp-band states.

By exploiting the synergies between the complementary information obtained from conventional high-resolution photoemission and attosecond streaking, they can circumvent the ambiguities that plague relative photoemission time delays without implementing technically challenging references. The approach widens the time-domain window into conventional photoemission, establishing a connection between the energy-space properties of electronic states in solids and the temporal dynamics of the fundamental electronic excitations underlying the photoelectric effect.

1. Johann Riemensberger, Stefan Neppl, Dionysios Potamianos, Martin Schäffer, Maximilian Schnitzenbaumer, Marcus Ossiander, Christian Schröder, Alexander Guggenmos, Ulf Kleineberg, Dietrich Menzel, Francesco Allegretti, Johannes V. Barth, Reinhard Kienberger, Peter Feulner, Andrei G. Borisov, Pedro M. Echenique, and Andrey K. Kazansky (2019) Attosecond Dynamics of sp-Band Photoexcitation. Phys. Rev. Lett. doi: 10.1103/PhysRevLett.123.176801
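For a feel for the scales involved, here is a back-of-the-envelope estimate of the transport contribution to the delay: the time for a photoelectron to cross the escape depth at its classical speed. The kinetic energy and depth below are illustrative assumptions, not values from the paper.

```python
import math

# Crude transport-time estimate: time for a free electron of kinetic
# energy E_kin to traverse a distance d at its classical speed.
eV = 1.602176634e-19      # J per electronvolt
m_e = 9.1093837015e-31    # electron mass, kg

E_kin = 100 * eV          # illustrative photoelectron kinetic energy
d = 5e-10                 # illustrative escape depth: a few atomic layers (0.5 nm)

v = math.sqrt(2 * E_kin / m_e)
t = d / v

print(f"speed: {v:.3e} m/s, transport time: {t / 1e-18:.0f} attoseconds")
```

The result lands in the tens of attoseconds, which is why attosecond streaking is the right tool for resolving it.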
New Monadology

The first manuscript page of the Monadology

Leibniz surmised that there are indefinitely many substances individually 'programmed' to act in a predetermined way, each substance being coordinated with all the others. This description of reality is elegant to the ear that believes Zeus is simpler than Maxwell's equations of electromagnetism. However, coding Zeus is more difficult than coding Maxwell's equations. Similarly, coding a world in which all substances are individually programmed is more difficult than coding a world in which a single substance is programmed. The single substance is the amplitude distribution of the entire universe.

Another problem is that for a Bayesian rationalist trained on the early 21st century blog LessWrong, the immediately succeeding question after reading Leibniz is "How would the world be otherwise if this were not true?" Unfortunately Leibniz's view is vague enough that it cannot be made to "pay rent." Poetic; tantalizing – yes. But the more complex an explanation is, the more evidence you need just to find it in belief-space.

Take the example of the B-Theory of Time. However counterintuitive it may be from the inside of human self-modeling computations to believe that time is an illusion, eternalism is a physical proposition because it can be denied by observing special relativity fail. If we had seen the absence of time dilation or the absence of length contraction, then special relativity would be wrong and eternalism would be debunked. Unfortunately for those who cherished belief in libertarian free will, this was not the case.

It is more difficult to apply a Popper test to Leibniz's monadology, however. Perhaps Leibniz knew of an observation that could knock down his proposition, but this jugular is not clearly visible. If a proposition believes itself immune ∀ observations, the proposition is not physical. So the sense in which I want to rehabilitate the monadology is not in the physical sense. There is an aesthetic vibe to it, and this aesthetic vibe is similar to the aesthetic vibe caused by the ontological content in my physical proposition belief space.

We have learned much about reality since the time of Leibniz. If we are given a wave function \psi for a single structureless particle in position space, this reduces to saying that the probability density function p(x,y,z) for a measurement of the position at time t_0 will be given by p(x,y,z) = |\psi(x,y,z,t_0)|^2.

[Screenshot: |\psi|^2 written out as \psi times its complex conjugate]

If you are not familiar with complex conjugates, I guess you can just forget about the absolute value squared part. Just look at the picture and try to realize, try to feel, that these are indeed equal. It feels weird, doesn't it?

The measurement problem arises because the quantum state vector, the source of all knowledge concerning quantum systems, evolves according to the Schrödinger equation into a linear superposition of different states, predicting paradoxical situations such as "Schrödinger's cat"; situations never experienced in our classical world. Except that the cat which is both dead and alive does happen in our classical world. It is just not experienced that way. Observers can only find themselves where they are alive because they are nothing more than a physical configuration. The confusion arises from thinking that one can actually find oneself dead. To be more precise, there is no flowing identity in the cat that must be accounted for.
The alive cat is always alive from its own slice in eternity, and the similar but different dead cat in another branch is subjectively inconsequential to the observed reality of the other indexical feline.

In common language it is often explained that: "According to the many worlds interpretation of quantum mechanics, reality is constantly splitting into countless parallel universes, with each possible collision and all other outcomes being realized in a different universe. Even very improbable events must then occur by chance in a small percentage of universes."

Ahh… but if all else which is occurring is directly inconsequential to immediate perception, doesn't this belief in the objectivity of the wavefunction, and therefore many-worlds, also "not pay rent"; is it therefore also poetry; is it therefore also Leibniz's Monadology? Such is not the case. From both a Bayesian rationalist perspective and a Deutsch-style Popperian perspective, many-worlds does pay rent. It may not be obvious that Occam's razor implies many-worlds to those who do not think about multi-particle configurations. But it pays, and we cannot kick it out of our territory through argumentation that values empiricism.

However, we can kick it out as a matter of constraining our anticipation. We still believe that the sizes of infinity matter, and that somehow we exist at the most dense core of the amplitude distribution – that which is most rational. Hence why we don't buy insurance for betrayal branches where we spontaneously murder the people around us. Or even gamble at the lottery, though infinitely many easy trillionaires are physically created through this behavior.

We can reify belief in a solipsistic core or we can say we are all discrete random variables – believing ourselves unable to distinguish where we stand in a sea of independent consciousnesses. Trained in biology, I view that video as a form of imaginative play that is then either valued by reality or not. Everything is natural selection. But say we imagine such self-localization difficulties – then what can we say about ourselves? One choice is to still think of ourselves as separate units, but "ultimately" one. Then we may be committed to say that: if the generator of random variable X is discrete with probability mass function x_1 \mapsto p_1, x_2 \mapsto p_2, \ldots, x_n \mapsto p_n, then

\operatorname{Var}(X) = \sum_{i=1}^{n} p_i \cdot (x_i - \mu)^2,

or equivalently

\operatorname{Var}(X) = \left( \sum_{i=1}^{n} p_i x_i^2 \right) - \mu^2,

where \mu is the average value, i.e. \mu = \sum_{i=1}^{n} p_i x_i.

And if you have the circumstantial privilege to identify as that, then go right ahead.

It is perhaps quite a silly endeavor to argue through physical considerations that "a final discrete element in reality exists; that consciousness itself appears to be in a singular place, at a singular time," to someone who does not care about where the hierarchical discussion of "physical considerations" leads. There has always been a cadre of consciousness realists in the ever-bifurcating philosophical traditions of history that claim consciousness is indivisible, a singularity, a 1, a 0, the only truly discrete object. Their male brains are unable to disengage from the "object level" and notice that feeling independent consciousness is a choice, until it's not, like the colors we learn are a cultural choice, until they're not.
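Incidentally, the two variance formulas quoted above are easy to verify as equal numerically; the probability mass function in this sketch is an arbitrary illustrative choice, not anything from the text.

```python
import numpy as np

# Arbitrary illustrative discrete distribution x_i -> p_i.
x = np.array([1.0, 2.0, 5.0])
p = np.array([0.2, 0.5, 0.3])
assert np.isclose(p.sum(), 1.0)

mu = np.sum(p * x)                      # mean: sum_i p_i x_i
var_def = np.sum(p * (x - mu) ** 2)     # Var(X) = sum_i p_i (x_i - mu)^2
var_alt = np.sum(p * x ** 2) - mu ** 2  # Var(X) = E[X^2] - mu^2

print(mu, var_def, var_alt)             # the two variance forms coincide
```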
If I could mockingly imitate non-mysterian consciousness realists (i.e. my past self), they would sound like this: "Could it be that awareness is a discrete probability distribution that needs to be represented as a generalized probability density function involving Dirac delta functions in order to substantially unify the treatment of the continuous reality surrounding us and the discrete distributions which we are?"

Some consciousness realists take on that flag because they believe that others are denying existence itself. I never believed this. Instead, consciousness realism was the idea that my existence could be carved out with a model that isomorphically mapped to it. Incapable of noticing that the quest of the consciousness realist is just the quest to transfigure his own experience. If someone understood what I was trying to convey in my mock example with Dirac delta functions, a slightly new form of consciousness might be synthesized. The insurmountable problem for those trying to find a homotopy that translates them is that the binding into consciousness is impossible to introspect because you are inside of it. There is the possibility that lasting insight might be accidentally gathered and cached by climbing the aesthetic sense of the consciousness-realism mountain. But like in theoretical physics, no Grand Unified Theory can exist.

One must understand that the helpless sense of conscious self is no different from the helpless understanding of these English words. It was learned, and now it can only be undone by self-locating in regions lacking that ability. In physics-naive terms The Ability might be defined as: synchrony with past events by a complexity gradient.

[Diagram: ovals representing events in the eternal fabric, with lines indicating the binding into an experience]

The ovals are events in the eternal fabric. The fabric and all its events are eternal because otherwise you would contradict special relativity. The lines indicate the binding into an experience. What selects the binding into the perfectly adaptive phenotype of now with all its particular traits (language, body, temporal grain, size of visual field, sensations, conceptual scaffoldings) is unknown. It is ultimately the mystery of, "Why am I this and not something else?"

This is often a worthy mystery in regions of mindspace occupied by depressed or asexual humans. Such is the cortical ruminating fate in the absence of dopamine release in the dorsorostral nucleus accumbens and posterior ventral pallidum. Yet other regions of mindspace also care. I remember this existential question sharply piercing me with auto-teleologic interest when I was ten years old, sitting in a car, when I realized that I was conscious; that I existed; that I was full of particularities that could have been otherwise in theory. Humans then return to the question when they don't believe in that auto-teleologic worthiness provided by their capacity to impress a group perceived to be the adaptive, good tribe. This can include disease-ridden old people, loners, and people with relatively high moral sentiments attempting to climb to desired positions. The "why" doesn't matter as much as the insight that sometimes results from the path. The rational insight at the core of the probability distribution is what absorbs us when we deviate from it and die. In this regard, I believe I have discovered a core insight, which is that it is impossible to really die and we are inside a very particular kind of God.
The Western mind assumes that the linear travelers called Subjects are not culturally constructed, somehow profoundly unlike understanding English, which is culturally constructed. But the self-created Subjects are just incapable of understanding Mandarin in that regard. Remember that your experience is integrated atemporally, as I indicated with the diagram showing events in relativistic light cones. If we replace his arbitrary trinitarian desert god and instead hold Leibniz closely accountable to his word that God represents that with the greatest degree of consciousness, then this would just be equivalent to that which is the absolute max of the binding function in the eternal block. Why wouldn't the functional grain of experience scale up? Like the Namibian Himbas' perception of color, different from mine, closed individualism is an approximate blob of feeling that doesn't generalize to all mindspace.

To illustrate our situation as conscious beings, it is necessary to realize that my particular state of consciousness is created by events in an eternal block. To account for the complexity of the senses and not desecrate the implications of special relativity, we need to be a set of information distributed in tenseless spacetime. Because there is no global time in the relativistic block, it must then be concluded that experience is embedded in a process which already happened.

Notice that when someone takes drugs and the experienced velocity of consciousness is slower, the reason is not that the speed of light changed. The speed of light is the same, and information got to neurons at the same time as always because the distance between neurons didn't increase. The reason time feels slower is the different shape of the computation serving that function.
Debating the Meaning of Quantum Mechanics

By Kate Becker, The Nature of Reality

Why is quantum mechanics like cricket? Because for me, no matter how many times the rules are explained, I can't seem to get my head around what the game is actually about.

Is quantum theory a system of equations? A description of the behavior of invisible particles? A philosophy for the post-post-modern age? And how strange is it that we even have to ask? Unlike other scientific theories, quantum physics is so slippery that its formalism—the equations that add up to a mathematical representation of what we humans call reality—is divorced from its physical interpretation. Sure, we can solve the Schrödinger equation for the case of a particle stuck in a box, but what is that telling us about how the natural world really works?

This isn't a question you'd even think to ask about classical mechanics. Remember Newton's Second Law, the one relating force to mass and acceleration? Its formalism is F = ma, and its interpretation is pretty simple: if you want to know the force an object is exerting, just multiply its mass by its acceleration. That's F = ma. But what about:

iħ ∂Ψ/∂t = ĤΨ

"Quantum mechanics needs an explanation worse than other theories do because others always had a physical picture that guided the formulation of the mathematics," explains John Cramer, a physicist at the University of Washington who also happens to be the author of his own interpretation of quantum mechanics—more on that later. Newton had his (possibly apocryphal) apples, his inclined planes, his cannonballs. Werner Heisenberg, one of the "fathers" of quantum mechanics, by contrast, had some elegant mathematics, a vision more akin to numerology than to a picture of the physical world, in Cramer's view.

"The Copenhagen interpretation is like a religious text," says MIT physicist Max Tegmark. "It leaves a lot open to interpretation."

Yet Heisenberg, like his colleague Niels Bohr, felt that quantum mechanics needed no further interpretation. This view, which is now known as the Copenhagen interpretation, holds that there is no "objective reality" lurking beneath the formalism. If the equations say that I have a 50% chance of measuring a particle in a certain state—say, spin up—and then I go ahead and measure it in that state, what more is there to say? To guess at what the particle was doing before I made the measurement would be worse than speculation; nothing can be said about the particle except in the context of a measurement. "Reality" is no more and no less than what our instruments and senses reveal it to be.

The Copenhagen interpretation may give you a headache, but according to Anton Zeilinger, the University of Vienna physicist most famous for his teleportation experiments, "It works, is useful to understand our experiments, and makes no unnecessary assumptions."

Still, many physicists find this notion unsatisfying. "Quantum mechanics is full of strange things that cry out for an interpretation," says Cramer. There's the problem of "spooky action at a distance," the apparent connection between "entangled" particles that seems to violate the finite speed of light; and there's Einstein's famous discomfort with the idea that no reality exists outside of our own perceptions.
As Einstein put it: "Do you really think the moon isn't there if you aren't looking at it?"

There's also a niggling problem with exactly what defines "looking at it"—or, in quantum-speak, what defines a "measurement." If we truly cannot say anything definite about a particle until after we've measured its state, then the act of measuring it must be pretty special. But why? What happens in that moment? Physicists often talk about it as the "collapse of the wavefunction"—that is, the moment when all of the possible particle states represented in the probability equation called the wavefunction collapse into a single, measured state. The instantaneous collapse of an entity that wasn't physically real to start with is weird in itself. But physicist Steven Weinberg pointed to another weak link in this interpretation in a 2005 article in Physics Today: "The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe."

If not Copenhagen, then what? Let's take a quick tour of a handful of the (many!) competing interpretations of quantum mechanics.

• Copenhagen interpretation: This is the interpretation we've just met, and the one you'll see in most physics books—though even Heisenberg and Bohr didn't always agree on the particulars. To put it in terms of our cricket analogy, let's say that you're following a cricket match on your cell phone. Actually, let's make it a baseball game because, as I've already confessed, I don't understand cricket. So you're using one of those apps that updates the box score every time you press "refresh," but you can't actually see the game in progress. According to the Copenhagen interpretation, there is no game—just the results you get when you ping the server. So it's no use talking about whether the batter is getting into the pitcher's head, or the appearance of the rally squirrel, or even the trajectory the ball takes on its way into the first baseman's glove. The box score is real; the game isn't.

• Consistent histories: The Copenhagen interpretation applies to a situation in which an observer (the baseball fan) makes a measurement (checks the score) on some external system. But what happens when the observer is himself part of the system—say, the shortstop? That's the problem that a special breed of physicists called quantum cosmologists encounter when they attempt to study the entire universe as a single quantum system. The Copenhagen interpretation falls short in this case, but the consistent histories interpretation, developed in the 1980s and early 1990s, does away with external "observers" and "measurements"—they are treated as part of one big system.

• Many worlds: We talked earlier about the problem of the collapsing wavefunction. But what if the wavefunction never actually collapses? What if every possibility it represents really does happen in its own universe? With every measurement, each universe branches off into countless others, each of which in turn branches into ever more universes.
The many worlds interpretation was first proposed in the 1950s by the young physicist Hugh Everett, and though it never gained much traction at the time, its star is now ascending: in the film Parallel Worlds, Parallel Lives, Tegmark called the many worlds interpretation "one of the most important discoveries of all time in science," and he and his colleagues recently posited that Everett's parallel universes might be congruent with the parallel universes proposed by cosmologists. Of course, plenty of physicists can't stomach the idea of a multiplicity of fundamentally unobservable universes. Yet—back to baseball for a moment—there is something appealing about an interpretation that insists upon the existence of a universe in which the baseball rolls squarely into Buckner's glove; an interpretation that guarantees that every heartbreaker in our universe is shadowed by a heroic comeback in another; an interpretation in which the Red Sox and the Yankees win, year after year after year.

• Transactional interpretation: The transactional interpretation might solve some of quantum theory's biggest quandaries, if you can get your head around the idea of a wave with negative energy that travels back in time. The transactional interpretation was first proposed in the 1980s by John Cramer, and suggests that the wavefunction includes not just one but two probability waves—the familiar one that travels forward in time, plus an exotic twin that travels backward. When they meet, they exchange a "handshake" across space-time, says Cramer; at other points, they cancel each other out completely, removing any telltale traces of the journey backward in time.

So, is there any way to know which interpretation is right or wrong? "Unless you can catch an interpretation deviating from the mathematics, you can't rule it out," says Cramer. And though some experiments could maybe, possibly tip the scales in favor of one interpretation or another, there is no consensus that any of the contenders above have been favored or nixed by experiment.

Perhaps, some physicists argue, the pursuit of an interpretation is a flawed endeavor. "There is no logical necessity of a realistic worldview to always be obtainable," wrote Christopher Fuchs and Asher Peres in a Physics Today opinion piece titled, transparently, "Quantum Theory Needs No 'Interpretation'." "If the world is such that we can never identify a reality independent of our experimental activity, then we must be prepared for that, too." Perhaps the interpretation problem isn't a problem of quantum physics at all, but a problem of human beings.
Lindblad Equation

In quantum mechanics, the Gorini-Kossakowski-Sudarshan-Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad), master equation in Lindblad form, quantum Liouvillian, or Lindbladian is the most general type of Markovian and time-homogeneous master equation describing the (in general non-unitary) evolution of the density matrix ρ that preserves the laws of quantum mechanics (i.e., is trace-preserving and completely positive for any initial condition).[1]

The Schrödinger equation is a special case of the more general Lindblad equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation.[2] The Schrödinger equation deals with state vectors, which can only describe pure quantum states and are thus less general than density matrices, which can describe mixed states as well.

In the canonical formulation of quantum mechanics, a system's time evolution is governed by unitary dynamics. This implies that there is no decay and phase coherence is maintained throughout the process, and is a consequence of the fact that all participating degrees of freedom are considered. However, any real physical system is not absolutely isolated, and will interact with its environment. This interaction with degrees of freedom external to the system results in dissipation of energy into the surroundings, causing decay and randomization of phase. These effects are the reasons quantum mechanics is difficult to observe on a macroscopic scale. Moreover, understanding the interaction of a quantum system with its environment is necessary for understanding many commonly observed phenomena, like the spontaneous emission of light from excited atoms, or the performance of many quantum technological devices, like the laser.

Certain mathematical techniques have been introduced to treat the interaction of a quantum system with its environment. One of these is the use of the density matrix and its associated master equation. While in principle this approach to solving quantum dynamics is equivalent to the Schrödinger picture or Heisenberg picture, it allows more easily for the inclusion of incoherent processes, which represent environmental interactions. The density operator has the property that it can represent a classical mixture of quantum states, and is thus vital to accurately describe the dynamics of so-called open quantum systems.

More generally, the Lindblad master equation for an N-dimensional system's density matrix ρ can be written as[1] (for a pedagogical introduction you can refer to[3])

\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_{n,m=1}^{N^2-1} h_{nm}\left( A_n \rho A_m^\dagger - \frac{1}{2}\left\{ A_m^\dagger A_n, \rho \right\} \right)

where H is a (Hermitian) Hamiltonian part and {A_m} is an arbitrary orthonormal basis of the Hilbert-Schmidt operators on the system's Hilbert space, with the restriction that A_{N²} is proportional to the identity operator. Our convention implies that the other A_m are traceless; note that the summation only runs to N² − 1, thus excluding the only basis matrix with a non-zero trace. The coefficient matrix h, together with the Hamiltonian, determines the system dynamics. The matrix h must be positive semidefinite to ensure that the equation is trace-preserving and completely positive.
The anticommutator is defined as {a, b} = ab + ba.

If the h_{nm} are all zero, then this reduces to the quantum Liouville equation for a closed system, \dot\rho = -\frac{i}{\hbar}[H,\rho]. This is also known as the von Neumann equation, and is the quantum analog of the classical Liouville equation.

Since the matrix h is positive semidefinite, it can be diagonalized with a unitary transformation u:

u^\dagger h u = \mathrm{diag}(\gamma_1, \ldots, \gamma_{N^2-1})

where the eigenvalues γ_i are non-negative. If we define another orthonormal operator basis

L_i = \sum_m u_{mi} A_m

we can rewrite the Lindblad equation in diagonal form

\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_{i=1}^{N^2-1} \gamma_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{ L_i^\dagger L_i, \rho \right\} \right).

The new operators L_i are commonly called the Lindblad or jump operators of the system.

Quantum dynamical semigroup

The maps generated by a Lindbladian for various times are collectively referred to as a quantum dynamical semigroup—a family of quantum dynamical maps \phi_t on the space of density matrices indexed by a single time parameter t ≥ 0 that obey the semigroup property

\phi_{t+s} = \phi_t\, \phi_s, \qquad t, s \ge 0.

The Lindblad equation can be obtained by

\mathcal{L}(\rho) = \lim_{\Delta t \to 0^+} \frac{\phi_{\Delta t}(\rho) - \rho}{\Delta t}

which, by the linearity of \phi_t, is a linear superoperator. The semigroup can be recovered as \phi_t = e^{t\mathcal{L}}.

Invariance properties

The Lindblad equation is invariant under any unitary transformation v of Lindblad operators and constants,

\sqrt{\gamma_i}\, L_i \to \sqrt{\gamma_i'}\, L_i' = \sum_j v_{ij} \sqrt{\gamma_j}\, L_j,

and also under the inhomogeneous transformation

L_i \to L_i' = L_i + a_i, \qquad H \to H' = H + \frac{1}{2i} \sum_j \gamma_j \left( a_j^* L_j - a_j L_j^\dagger \right) + b,

where the a_i are complex numbers and b is a real number. However, the first transformation destroys the orthonormality of the operators L_i (unless all the γ_i are equal) and the second transformation destroys the tracelessness. Therefore, up to degeneracies among the γ_i, the L_i of the diagonal form of the Lindblad equation are uniquely determined by the dynamics so long as we require them to be orthonormal and traceless.

Heisenberg picture

The Lindblad-type evolution of the density matrix in the Schrödinger picture can be equivalently described in the Heisenberg picture using the following (diagonalized) equation of motion for each quantum observable X:

\dot{X} = \frac{i}{\hbar}[H, X] + \sum_i \gamma_i \left( L_i^\dagger X L_i - \frac{1}{2}\left\{ L_i^\dagger L_i, X \right\} \right).

A similar equation describes the time evolution of the expectation values of observables, given by the Ehrenfest theorem. Corresponding to the trace-preserving property of the Schrödinger picture Lindblad equation, the Heisenberg picture equation is unital, i.e. it preserves the identity operator.

Physical derivation

The Lindblad master equation describes the evolution of various types of open quantum systems, e.g. a system weakly coupled to a Markovian reservoir.[1] Note that the H appearing in the equation is not necessarily equal to the bare system Hamiltonian, but may also incorporate effective unitary dynamics arising from the system-environment interaction.

A heuristic derivation, e.g., in the notes by Preskill,[4] begins with a more general form of an open quantum system and converts it into Lindblad form by making the Markovian assumption and expanding in small time. A more physically motivated standard treatment[5][6] covers three common types of derivations of the Lindbladian starting from a Hamiltonian acting on both the system and environment: the weak coupling limit (described in detail below), the low density approximation, and the singular coupling limit. Each of these relies on specific physical assumptions regarding, e.g., correlation functions of the environment. For example, in the weak coupling limit derivation, one typically assumes that (a) correlations of the system with the environment develop slowly, (b) excitations of the environment caused by the system decay quickly, and (c) terms which are fast-oscillating when compared to the system timescale of interest can be neglected.
These three approximations are called Born, Markov, and rotating wave, respectively.[7]

The weak-coupling limit derivation assumes a quantum system with a finite number of degrees of freedom coupled to a bath containing an infinite number of degrees of freedom. The system and bath each possess a Hamiltonian written in terms of operators acting only on the respective subspace of the total Hilbert space. These Hamiltonians govern the internal dynamics of the uncoupled system and bath. There is a third Hamiltonian that contains products of system and bath operators, thus coupling the system and bath. The most general form of this Hamiltonian is

H = H_S + H_B + H_{BS}, \qquad H_{BS} = \sum_\alpha A_\alpha \otimes B_\alpha,

with the A_α acting on the system and the B_α on the bath.

The dynamics of the entire system can be described by the Liouville equation of motion, \dot\chi = -\frac{i}{\hbar}[H, \chi], where χ is the density matrix of system plus bath. This equation, containing an infinite number of degrees of freedom, is impossible to solve analytically except in very particular cases. What's more, under certain approximations, the bath degrees of freedom need not be considered, and an effective master equation can be derived in terms of the system density matrix, ρ = Tr_B χ.

The problem can be analyzed more easily by moving into the interaction picture, defined by the unitary transformation \tilde{M} = U_0^\dagger M U_0, where M is an arbitrary operator and U_0 = e^{-i(H_S + H_B)t/\hbar}. It is straightforward to confirm that the Liouville equation becomes

\dot{\tilde\chi}(t) = -\frac{i}{\hbar}\left[ \tilde{H}_{BS}(t), \tilde\chi(t) \right]

where the Hamiltonian \tilde{H}_{BS}(t) = U_0^\dagger H_{BS} U_0 is explicitly time dependent. Also, according to the interaction picture, \tilde\chi(t) = U_0^\dagger \chi(t) U_0. This equation can be integrated directly to give

\tilde\chi(t) = \tilde\chi(0) - \frac{i}{\hbar} \int_0^t dt' \left[ \tilde{H}_{BS}(t'), \tilde\chi(t') \right].

This implicit equation for \tilde\chi can be substituted back into the Liouville equation to obtain an exact integro-differential equation

\dot{\tilde\chi}(t) = -\frac{i}{\hbar}\left[ \tilde{H}_{BS}(t), \tilde\chi(0) \right] - \frac{1}{\hbar^2} \int_0^t dt' \left[ \tilde{H}_{BS}(t), \left[ \tilde{H}_{BS}(t'), \tilde\chi(t') \right] \right].

We proceed with the derivation by assuming the interaction is initiated at t = 0, and at that time there are no correlations between the system and the bath. This implies that the initial condition is factorable as χ(0) = ρ(0) ⊗ ρ_B, where ρ_B is the density operator of the bath initially.

Tracing over the bath degrees of freedom, \tilde\rho(t) = \mathrm{Tr}_B\, \tilde\chi(t), of the aforementioned integro-differential equation yields

\dot{\tilde\rho}(t) = -\frac{1}{\hbar^2} \int_0^t dt'\, \mathrm{Tr}_B \left[ \tilde{H}_{BS}(t), \left[ \tilde{H}_{BS}(t'), \tilde\chi(t') \right] \right]

(the first-order term vanishes for a bath satisfying \mathrm{Tr}_B[\tilde{H}_{BS}(t)\, \rho_B] = 0). This equation is exact for the time dynamics of the system density matrix but requires full knowledge of the dynamics of the bath degrees of freedom.

A simplifying assumption called the Born approximation rests on the largeness of the bath and the relative weakness of the coupling, which is to say the coupling of the system to the bath should not significantly alter the bath eigenstates. In this case the full density matrix is factorable for all times as \tilde\chi(t) ≈ \tilde\rho(t) ⊗ ρ_B. The master equation becomes

\dot{\tilde\rho}(t) = -\frac{1}{\hbar^2} \int_0^t dt'\, \mathrm{Tr}_B \left[ \tilde{H}_{BS}(t), \left[ \tilde{H}_{BS}(t'), \tilde\rho(t') \otimes \rho_B \right] \right].

The equation is now explicit in the system degrees of freedom, but is very difficult to solve. A final assumption is the Born-Markov approximation that the time derivative of the density matrix depends only on its current state, and not on its past. This assumption is valid under fast bath dynamics, wherein correlations within the bath are lost extremely quickly, and amounts to replacing \tilde\rho(t') by \tilde\rho(t) on the right hand side of the equation.

If the interaction Hamiltonian is assumed to have the form H_{BS} = \sum_i \alpha_i \Gamma_i for system operators α_i and bath operators Γ_i, the master equation becomes a sum of double commutators whose coefficients are bath correlation functions of the form \langle \tilde\Gamma_i(t)\, \tilde\Gamma_j(t') \rangle; the expectation values are with respect to the bath degrees of freedom. By assuming rapid decay of these correlations (ideally \langle \tilde\Gamma_i(t)\, \tilde\Gamma_j(t') \rangle \propto \delta(t - t')), the above form of the Lindblad superoperator L is achieved.

For one jump operator L and no unitary evolution, the Lindblad superoperator, acting on the density matrix ρ, is

L(\rho) = L \rho L^\dagger - \frac{1}{2}\left( L^\dagger L \rho + \rho L^\dagger L \right).

Such a term is found regularly in the Lindblad equation as used in quantum optics, where it can express absorption or emission of photons from a reservoir.
If one wants to have both absorption and emission, one would need a jump operator for each. This leads to the most common Lindblad equation describing the damping of a quantum harmonic oscillator (representing e.g. a Fabry-Perot cavity) coupled to a thermal bath, with jump operators

L_1 = \sqrt{\gamma (\bar n + 1)}\, a, \qquad L_2 = \sqrt{\gamma \bar n}\, a^\dagger.

Here \bar n is the mean number of excitations in the reservoir damping the oscillator and γ is the decay rate. If we also add additional unitary evolution generated by the quantum harmonic oscillator Hamiltonian with frequency ω, we obtain

\dot\rho = -i \omega [a^\dagger a, \rho] + \gamma (\bar n + 1) \left( a \rho a^\dagger - \frac{1}{2}\left\{ a^\dagger a, \rho \right\} \right) + \gamma \bar n \left( a^\dagger \rho a - \frac{1}{2}\left\{ a a^\dagger, \rho \right\} \right).

Additional Lindblad operators can be included to model various forms of dephasing and vibrational relaxation. These methods have been incorporated into grid-based density matrix propagation methods.

References

1. Breuer, Heinz-Peter; Petruccione, F. (2002). The Theory of Open Quantum Systems. Oxford University Press. ISBN 978-0-1985-2063-4.
2. Weinberg, Steven (2014). "Quantum Mechanics Without State Vectors". Phys. Rev. A 90: 042102. arXiv:1405.3483. doi:10.1103/PhysRevA.90.042102.
3. Manzano, Daniel (2020). "A short introduction to the Lindblad master equation". AIP Advances 10: 025106. arXiv:1906.04478. doi:10.1063/1.5115323.
4. Preskill, John. Lecture notes on Quantum Computation, Ph219/CS219.
5. Alicki, Robert; Lendi, Karl (2007). Quantum Dynamical Semigroups and Applications. Springer. doi:10.1007/b11976790.
6. Carmichael, Howard (1991). An Open Systems Approach to Quantum Optics. Springer Verlag.
7. This paragraph was adapted from: Albert, Victor V. "Lindbladians with multiple steady states: theory and applications". arXiv:1802.00010.
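As a concrete illustration of the diagonal-form equation above, here is a minimal numerical sketch (not part of the original article): forward-Euler integration of a single-qubit Lindbladian with one jump operator, the amplitude-damping case, checking that the trace stays at one and that the excited-state population decays as e^(-γt).

```python
import numpy as np

# Minimal Lindblad evolution for a two-level system with one jump
# operator L = sqrt(gamma) * sigma_minus (amplitude damping), hbar = 1.
gamma = 1.0
H = np.diag([0.0, 1.0]).astype(complex)   # qubit Hamiltonian; |1> has energy 1
L = np.sqrt(gamma) * np.array([[0, 1],    # sigma_minus: maps |1> to |0>
                               [0, 0]], dtype=complex)

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state |1>
dt, steps = 1e-4, 20000                          # integrate up to t = 2
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)           # forward Euler step

t = dt * steps
print("trace:", rho.trace().real)                # stays ~1 (trace preservation)
print("P(excited):", rho[1, 1].real, "vs exp(-gamma*t):", np.exp(-gamma * t))
```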
I realize that path integral techniques can be applied to quantitative finance such as option value calculation. But I don't quite understand how this is done. Is it possible to explain this to me in qualitative and quantitative terms?

closed as too broad by ACuriousMind, Kyle Kanos, Brandon Enright, John Rennie, JamalS Feb 9 at 7:35

• Although some physicists are interested in quantitative finance, this question is off-topic here. Unfortunately I can't point you to the appropriate forum off the top of my head, but I'm sure quantitative finance forums exist if you poke around. – Mark Eichenlaub Dec 13 '10 at 9:30
• @Mark, I do not think so; I think econophysics questions should be allowed here, much like mathematical physics, or physics questions with an engineering bent are allowed here. – Graviton Dec 13 '10 at 9:36
• I will have to agree with @Ngu on this one. Path integrals are pretty much a modern physicist's bread and butter. So if someone asked "can you apply path integrals to understanding how to butter bread" I'd say that was a question for physicists :-) And a lot more physicists are entering this field now. One prominent example is Lee Smolin (reference) who is also one of the BIG names in quantum gravity. – user346 Dec 13 '10 at 9:51
• I voted to reopen. (I couldn't figure out how to redact my vote to close.) I don't have a strong personal stake in this question - my initial vote to close was just my immediate reaction because I hadn't seen such questions here before and I didn't know about any significant ties to physics. It now seems that a majority of users think the question is appropriate, and I'm happy to go along with the majority. – Mark Eichenlaub Dec 14 '10 at 4:51
• @marek I don't know about string theory but the complete works of Shakespeare should be mandatory reading for all experimentalists :-) Theorists are just born with it! – user346 Dec 14 '10 at 20:23

1 Answer (accepted)

The fundamental equation which serves as the basis for the path-integral formulation of finance and many physical problems is the Chapman-Kolmogorov equation. $$p(X_f|X_i)=\int p(X_f|X_k)p(X_k|X_i) dX_k$$ This is analogous to the following equation for amplitudes in quantum mechanics $$\langle X_f|X_i \rangle=\int \langle X_f|X_k\rangle\langle X_k|X_i\rangle dX_k$$ That's right, it's the same form, but the interpretation of the basic entities changes. In the former, they are probability densities and thus real and positive; in the latter, they are probability amplitudes and thus complex. The class of physical problems that can be tackled with the first type of equation are called Markov processes; their characteristic is that the state of the system depends only on its previous state. Despite its seeming limitedness, this comprises many phenomena, since any process with a long but finite memory can be mapped onto a Markov process provided the state space is enlarged appropriately. On the other hand, the second equation is pretty natural and general in quantum mechanics. It is basically stating that the unity operator can always be decomposed into a, possibly overcomplete, sum of pure states $$\mathbb{I}=\int |X_k\rangle\langle X_k| dX_k \; .$$ Now, constructing a path integral is done by slashing up the path from $X_i$ to $X_f$ into ever smaller components.
Let's suppose that the endpoints are fixed; then we might assume that to go from one endpoint to another, the system has to go through paths $(X_i,X_1(t_1),X_2(t_2),\ldots,X_n(t_n),X_f)$. This leads to the following integral $$p(X_f|X_i)=\int\cdots\int \prod_{k=0}^n p(X_{k+1}(t_{k+1})|X_k(t_k)) \prod_{k=1}^n dX_k(t_k)$$ where I put $X_0(t_0)=X_i$ and $X_{n+1}(t_{n+1})=X_f$. The tricky part is now to see if the limit can be defined meaningfully. This can be very problematic, especially in the quantum case. Ironically, the cases that are used for finance and statistical mechanics are often much better behaved. This is again related to one integral being over complex numbers and the other over real numbers, but it's not the only reason. Up till now, I have not been specific about the kind of system I want to study; this will play an important role as well.

So, let's take an option, which is a financial security whose price depends on the price of the underlying stock and on time. So we can write $O(X,t)$ for the price of the option, and we'll assume the underlying stock follows a geometric Brownian motion: $$\frac{dX}{X}=\mu dt + \sigma dW$$ where $W$ represents a Wiener process with increments $dW$ having mean zero and variance $dt$. Also assume that the pay-off of the option at the expiration time $T$ is $$O(X_T,T)=F(X_T)$$ with $F$ a given function of the terminal stock price. Then, Fischer Black and Myron Scholes have shown that the option, under the 'no arbitrage' assumption, satisfies the following PDE $$\frac{\partial O}{\partial t} + \frac{1}{2}\sigma^2X^2\frac{\partial^2 O}{\partial X^2} + r X \frac{\partial O}{\partial X} - rO = 0$$ in which $r$ is the risk-free interest rate. If instead of the geometric Brownian motion variable $X$, I reformulate this in terms of $x=\ln X$, which is an arithmetic Brownian motion variable, I can rewrite the equation as: $$\frac{\partial O}{\partial t} + \frac{1}{2}\sigma^2\frac{\partial^2 O}{\partial x^2} + \left(r-\frac{\sigma^2}{2}\right) \frac{\partial O}{\partial x} - rO = 0$$ This is nothing but a special case of the PDEs that can be solved by using the Feynman-Kac formula, a class which also includes the Fokker-Planck equation and the Smoluchowski equation, both related to the description of diffusion processes in physics. In the diffusion problem, O is to be interpreted as a distribution of velocities of the particle (Fokker-Planck) or of the positions of the particle (Smoluchowski). That's how we relate to what I introduced above. Also note that the Schrödinger equation in quantum mechanics is very similar in form, except you'll get complex coefficients.

The Feynman-Kac formula tells us that the solution to the PDE is: $$O(X,t) = e^{-r(T-t)}\mathbb{E}\left[ F(X_T)|X(t)=X \right]$$ It is this expectation value that will now be represented as a path integral: $$O(X,t) = e^{-r(T-t)}\int_{-\infty}^{+\infty}\left(\int_{x(t)=x}^{x(T)=x_T} F(e^{x_T}) e^{A_{BS}(x(t'))} \mathcal{D}x(t')\right) dx_T$$ where $$A_{BS}(x(t'))=-\int_t^{T} \frac{1}{2\sigma^2}\left(\frac{dx(t')}{dt'}-\mu\right)^2 dt'$$ is the action functional, with the risk-neutral drift $\mu = r-\sigma^2/2$ in the log-price variable. The reason this path integral can be built is the same as explained before: it is possible to split the conditional expectation into ever smaller intervals, exactly as in the Chapman-Kolmogorov construction above, each of the conditional probabilities satisfying the PDE for the arithmetic Brownian motion, as noticed above. I'll stop here for now, but I refer to the following article for further details.
• This doesn't talk about finance at all hardly. – Noldorin Dec 13 '10 at 20:52
• It doesn't really talk about physics, either. – Sklivvz Dec 13 '10 at 22:13
• Excellent answer @raskolnikov! I see the peanut gallery is pretty crowded today. – user346 Dec 13 '10 at 23:29
• Seems like the villagers have taken over. I vote to reopen. – user346 Dec 14 '10 at 0:47
• I voted to reopen. – Robert Smith Dec 14 '10 at 2:18
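To connect the formalism in the answer to numbers, here is a small Monte Carlo sketch of the Feynman-Kac representation for a European call; all parameter values are illustrative. Sampling the risk-neutral log-price at expiry and averaging the discounted payoff reproduces the closed-form Black-Scholes price. For a European payoff only the terminal distribution matters, so one draw per path suffices; the full time-slicing of the path integral only becomes necessary for path-dependent payoffs.

```python
import numpy as np
from math import log, sqrt, exp, erf

# European call under Black-Scholes; illustrative parameters.
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0

# Monte Carlo evaluation of the Feynman-Kac expectation: the terminal
# log-price is x_T = x_0 + (r - sigma^2/2) T + sigma sqrt(T) Z, Z ~ N(0,1).
rng = np.random.default_rng(0)
Z = rng.standard_normal(500_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes price for comparison.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(f"Monte Carlo: {mc_price:.3f}, Black-Scholes: {bs_price:.3f}")
```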
History of classical mechanics: Quiz

Question 1: Newton and most of his contemporaries, with the notable exception of ________, hoped that classical mechanics would be able to explain all entities, including (in the form of geometric optics) light.
Christiaan Huygens | Blaise Pascal | Isaac Newton | Gottfried Leibniz

Question 2: From ________'s heliocentric hypothesis Galileo believed the Earth was just the same as any other planet.
Heliocentrism | Polish–Lithuanian Commonwealth | Johannes Kepler | Nicolaus Copernicus

Question 3: It wasn't until ________'s development of the telescope and his observations that it became clear that the heavens were not made from a perfect, unchanging substance.
Isaac Newton | Scientific revolution | Scientific method | Galileo Galilei

Question 4: Early yet incomplete theories pertaining to mechanics were also discovered by several other Muslim physicists during the ________.
Early Middle Ages | High Middle Ages | Middle Ages | Late Middle Ages

Question 5: Newton also developed the ________ which is necessary to perform the mathematical calculations involved in classical mechanics.
Calculus | Derivative | Integral | Differential calculus

Question 6: When combined with classical thermodynamics, classical mechanics leads to the ________ in which entropy is not a well-defined quantity.
Identical particles | Statistical mechanics | Ideal gas | Gibbs paradox

Question 7: Similarly, the different behaviour of classical ________ and classical mechanics under velocity transformations led to the theory of relativity.
Electromagnetism | Magnetic field | Classical electromagnetism | Maxwell's equations

Question 8: ________ extended Newton's laws of motion from particles to rigid bodies with two additional laws.
Isaac Newton | Pierre-Simon Laplace | Leonhard Euler | Joseph Louis Lagrange

Question 9: The effort at resolving these problems led to the development of ________.
Quantum mechanics | Introduction to quantum mechanics | Wave–particle duality | Schrödinger equation

Question 10: He led to the conclusion that in a ________ there is no reason for a body to naturally move to one point rather than any other, and so a body in a vacuum will either stay at rest or move indefinitely if put in motion.
Universe | Vacuum | Vacuum pump | Outer space
Regge theory

In quantum physics, Regge theory is the study of the analytic properties of scattering as a function of angular momentum, where the angular momentum is not restricted to be an integer but is allowed to take any complex value. The nonrelativistic theory was developed by Tullio Regge in 1959.

History and implications

The main result of the theory is that the scattering amplitude for potential scattering grows as a function of the cosine z of the scattering angle as a power that changes as the scattering energy changes:

A(z) \propto z^{l(E^2)}

where l(E^2) is the noninteger value of the angular momentum of a would-be bound state with energy E. It is determined by solving the radial Schrödinger equation, and it smoothly interpolates the energy of wavefunctions with different angular momentum but with the same radial excitation number. In the relativistic generalization, the trajectory is taken as a function of s = E^2. The expression l(s) is known as the Regge trajectory function, and when it is an integer, the particles form an actual bound state with this angular momentum. The asymptotic form applies when z is much greater than one, which is not a physical limit in nonrelativistic scattering.

Shortly afterwards, Stanley Mandelstam noted that in relativity the purely formal limit of large z is near to a physical limit — the limit of large t. Large t means large energy in the crossed channel, where one of the incoming particles has an energy momentum that makes it an energetic outgoing antiparticle. This observation turned Regge theory from a mathematical curiosity into a physical theory: it demands that the function that determines the falloff rate of the scattering amplitude for particle-particle scattering at large energies is the same as the function that determines the bound state energies for a particle-antiparticle system as a function of angular momentum.[1]

The switch required swapping the Mandelstam variable s, which is the square of the energy, for t, which is the squared momentum transfer, which for elastic soft collisions of identical particles is s times one minus the cosine of the scattering angle. The relation in the crossed channel becomes

A(z) \propto s^{l(t)}

which says that the amplitude has a different power law falloff as a function of energy at different corresponding angles, where corresponding angles are those with the same value of t. It predicts that the function that determines the power law is the same function that interpolates the energies where the resonances appear. The range of angles where scattering can be productively described by Regge theory shrinks into a narrow cone around the beam-line at large energies.

In 1960 Geoffrey Chew and Steven Frautschi conjectured from limited data that the strongly interacting particles had a very simple dependence of the squared mass on the angular momentum: the particles fall into families where the Regge trajectory functions are straight lines:

l(s) = k s

with the same constant k for all the trajectories. The straight-line Regge trajectories were later understood as arising from massless endpoints on rotating relativistic strings. Since a Regge description implied that the particles were bound states, Chew and Frautschi concluded that none of the strongly interacting particles were elementary.
Experimentally, the near-beam behavior of scattering did fall off with angle as explained by Regge theory, leading many to accept that the particles in the strong interactions were composite. Much of the scattering was diffractive, meaning that the particles hardly scatter at all, staying close to the beam line after the collision. Vladimir Gribov noted that the Froissart bound combined with the assumption of maximum possible scattering implied there was a Regge trajectory that would lead to logarithmically rising cross sections, a trajectory nowadays known as the Pomeron. He went on to formulate a quantitative perturbation theory for near-beam-line scattering dominated by multi-Pomeron exchange.

From the fundamental observation that hadrons are composite, there grew two points of view. Some correctly advocated that there were elementary particles, nowadays called quarks and gluons, which made a quantum field theory in which the hadrons were bound states. Others also correctly believed that it was possible to formulate a theory without elementary particles, where all the particles were bound states lying on Regge trajectories and scatter self-consistently. This was called S-matrix theory.

The most successful S-matrix approach centered on the narrow-resonance approximation, the idea that there is a consistent expansion starting from stable particles on straight-line Regge trajectories. After many false starts, Dolen, Horn, and Schmidt understood a crucial property that led Gabriele Veneziano to formulate a self-consistent scattering amplitude, the first string theory. Mandelstam noted that the limit where the Regge trajectories are straight is also the limit where the lifetime of the states is long.

As a fundamental theory of strong interactions at high energies, Regge theory enjoyed a period of interest in the 1960s, but it was largely succeeded by quantum chromodynamics. As a phenomenological theory, it is still an indispensable tool for understanding near-beam-line scattering and scattering at very large energies. Modern research focuses both on the connection to perturbation theory and to string theory.

Unsolved problem in physics: How does Regge theory emerge from quantum chromodynamics at long distances?

References

1. Gribov, V. (2003). The Theory of Complex Angular Momentum. Cambridge University Press. ISBN 0-521-81834-6.
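To see what a straight-line trajectory implies, here is a small illustrative script, not taken from the article. It uses the generalized linear form l(s) = l(0) + k·s; the intercept and slope are rough, commonly quoted values for the rho meson trajectory (roughly l(0) ≈ 0.5 and k ≈ 0.9 GeV⁻²), used purely as assumptions for the sketch. Physical resonances are expected where the trajectory passes through a non-negative integer spin.

```python
# Chew-Frautschi logic: on a straight-line Regge trajectory
# l(s) = l0 + k * s, physical resonances sit where l(s) is a
# non-negative integer (s in GeV^2, so k carries units of GeV^-2).
l0 = 0.5   # illustrative intercept (roughly the rho trajectory)
k = 0.9    # illustrative slope in GeV^-2

for spin in range(1, 6):
    s = (spin - l0) / k          # squared mass where l(s) = spin
    print(f"spin {spin}: m^2 = {s:.2f} GeV^2, m = {s ** 0.5:.2f} GeV")
```

Plotting spin against squared mass for observed resonances, the famous Chew-Frautschi plot, is exactly the inverse of this computation: a straight line through the data points is the experimental signature of a Regge trajectory.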
Back in college I remember coming across a few books in the physics library by Mendel Sachs. Examples are:

- General Relativity and Matter
- Quantum Mechanics and Gravity
- Quantum Mechanics from General Relativity

Here is something on the arXiv involving some of his work. In these books (which I note are also strangely available in most physics department libraries) he describes a program involving re-casting GR using quaternions. He does things that seem remarkable, like deriving QM as a low-energy limit of GR. I don't have the GR background to unequivocally verify or reject his work, but this guy has been around for decades, and I have never found any paper or article that seriously "debunks" any of his work. It just seems like he is ignored. Are there glaring holes in his work? Is he just a complete crackpot? What is the deal?

closed as not constructive by dmckee Jun 1 '13 at 16:49

• There are many questions also about the related Geometric Algebra. This type of thing is not physics, but formalism, and I have seen the claims about "QM from GR": they derive a quantization rule similar to Bohr-Sommerfeld from a GR-looking thing, and this is total rubbish from the point of view of physics. This part is crackpot, but the part about quaternions is probably empty formalism rather than wrong (although I didn't review it). – Ron Maimon Jan 17 '12 at 5:00
• "He does things that seem remarkable like deriving QM as a low-energy limit of GR"... This sentence looks suspicious; I thought it is the other way round: GR is derivable as the classical low-energy limit from a high-energy quantum mechanical theory of gravity (or quantum gravity for short). – Dilaton May 28 '13 at 14:25
• In addition, see here for arguments why quantum mechanics has to use complex variables instead of anything else, so a quantum gravity can not be based on quaternions either. Physicists know this, and that is probably among other things why they ignore such approaches to quantum gravity. – Dilaton May 28 '13 at 14:34
• @Dilaton, I didn't type the sentence wrong. That is what he does in his books: QM as the low-energy limit of GR. I'm an experimentalist, so I probably don't have the background to dig into it enough, but I've just never been able to find anything wrong in his books and found it strange I never found any refutation or critical reviews of his works. His logic appears OK by my eye and he seems to have been an actual physicist at a real college, and his books seem to be in all the physics libraries... it's just odd. – user1247 May 28 '13 at 17:56
• Just for the notes, I did neither say nor mean that it is user1247 who typed the sentence wrong, but the sentence IS wrong from a physics point of view. – Dilaton May 30 '13 at 17:02

5 Answers
It is possible that he related some formulation of GR to "first quantised" local equations such as the Dirac equation. Notice that in the modern view the Dirac Equation is regarded as classical even though it includes spin half variables and the Planck constant. The distinction between classical and quantum is not as clean as some people like to believe. I have not studied his work but I will hazard a guess that his work was not really ignored or debunked. It was just incorporated into other approaches with different interpretations that may have made it non-obvious that some of his ideas were included. One day when we know the final theory of physics there will be lots of science historians who dig through old papers and work out who really had the important ideas first, then perhaps MS will get more credit (if his ideas are part of the final answer and he thought of them first). Until then there is just a big melting pot of ideas that often get reinvented and the shear quantity of papers means that if you spend your time reading everything that anyone else has done you will never make any progress yourself. share|improve this answer Mendel Sachs may have been blacklisted, which would certainly be wrong. But his theory has a fatal error. His derivation depends on the assumption that certain 2x2 complex matrices, standing for quaternions, approach the Pauli spin matrices in the limit of zero curvature. This is impossible; the Pauli matrices are not quaternions and the argument collapses. share|improve this answer Could you elaborate: the Pauli matrices are indeed isomorphic to the unit quaternions. I'm sure you're right about the details, but given the isomorphism this answer is bound to look a bit odd to a nonspecialist like me. –  WetSavannaAnimal aka Rod Vance Aug 2 at 11:46 First of all, if Mendel Sachs does things like deriving QM as a low-energy limit of GR, he has things completely upside down. The fundamental laws of physics are quantum, so quantum mechanics can not be derived from something else. It is rather the case that general relativity is derievable as the classical low energy limit from a high energy quantum mechanics theory of gravity (or quantum gravity for short). This works for example for string theory. In addition, the only reasonable number system to describe quantum mechanics in are complex numbers. Some arguments why quantum mechanics has to use complex variables (instead of real variables) are given here. Complex numbers are needed for the Schrödinger equation to work, to conserve total probabilities, to describe commutators between non commuting operators (observables), to have plane wave momentum eigenstates, etc ... Generally, important physical operations in quantum mechanics demand that probability amplitudes obey the rules for addition and multiplication for complex numbers, they themself have to be complex numbers. In this article describing why quantum mechanics can not be different from the way it is, some explanations are given why using larger number systems than complex numbers to describe quantum mechanics are no good either. Using quaternions, the quaternionic wave function can be reduced to complex building blocks for example, so going from a complex number description of quantum mechanics to octanions introduces nothing new from a physics point of view. Using octanions would be really bad, since octanions have the lethal bug that they are not associative. 
First of all, if Mendel Sachs does things like deriving QM as a low-energy limit of GR, he has things completely upside down. The fundamental laws of physics are quantum, so quantum mechanics cannot be derived from something else. It is rather the case that general relativity is derivable as the classical low-energy limit of a high-energy quantum-mechanical theory of gravity (or quantum gravity for short). This works, for example, for string theory.

In addition, the only reasonable number system in which to describe quantum mechanics is the complex numbers. Some arguments why quantum mechanics has to use complex variables (instead of real variables) are given here. Complex numbers are needed for the Schrödinger equation to work, to conserve total probabilities, to describe commutators between non-commuting operators (observables), to have plane-wave momentum eigenstates, etc. Generally, important physical operations in quantum mechanics demand that probability amplitudes obey the rules for addition and multiplication of complex numbers; they themselves have to be complex numbers. In this article describing why quantum mechanics cannot be different from the way it is, some explanations are given for why using number systems larger than the complex numbers to describe quantum mechanics is no good either. Using quaternions, the quaternionic wave function can be reduced to complex building blocks, for example, so going from a complex-number description of quantum mechanics to quaternions introduces nothing new from a physics point of view. Using octonions would be really bad, since octonions have the lethal bug that they are not associative.

So in summary, my reasons for being suspicious, or more honestly even dismissive, of Mendel Sachs's work as described here are that he seems to fundamentally misunderstand the relationship between quantum theories and their classical limits. In addition, the only reasonable number system to describe quantum mechanics is the complex numbers, so I agree with Ron Maimon that introducing quaternions would at best be empty formalism.

I disagree that it is obvious that QM is more fundamental than GR. I think you are tautologically assuming as an axiom that QM cannot be some emergent property of GR. While it is de rigueur to quantize classical theories, it is a mistake to assume that classical theories cannot have QM as an emergent property (on the other hand, modern extensions of Bell's inequalities are increasingly constraining these lines of thought). The parenthetical constraints excepted, I see no a priori reason why QM cannot emerge from GR. In fact the converse has maybe more obvious fundamental problems. – user1247 May 29 '13 at 20:57

About complex numbers vs quaternions, I agree with akhmeteli. I remember when I learned QM from Bohm's book he re-wrote QM without complex numbers. Maybe not as compact a formalism, but certainly allowed. You seem to arrive at this understanding in the latter part of your answer, when you agree the quaternion stuff may just be empty formalism. On the other hand it is not obvious to me that the formalism must be completely empty. Perhaps the quaternion formalism allows a bit more freedom, being homomorphic rather than isomorphic to the complex one, leading to some additional structure. – user1247 May 29 '13 at 21:07

@user1247 you and Mendel Sachs have it completely wrong. The real-world community of active professional physicists knows that the fundamental laws of nature are quantum and that the classical theories are derived from them as a limit. The question of how QM can be represented mathematically, and what does not work, can be objectively and rigorously evaluated too. Too bad that the voting pattern on this thread converges to represent the opinions and prejudices of people who do not know the subject well enough, instead of representing the knowledge of the real-world active physicist community... – Dilaton May 30 '13 at 16:41

funnily enough, I can't stand many of Lubos' answers. Surely he is competent, I won't dispute that. But he definitely represents one very hard-line perspective about certain things that is not shared by everyone of his caliber. I don't dislike his answers just because he is so arrogant, but more because he doesn't attempt to understand where the questioner is coming from, often seemingly almost purposefully obtuse. It would please him more to insult rather than inform. – user1247 May 30 '13 at 22:06

In any case, despite your appeal to one person's opinion (Lubos), I don't think there is any a priori reason QM is fundamental. It is de rigueur to use that language, and I would use it myself when teaching a class. But that doesn't mean everyone literally thinks it must be fundamental. In fact, that is kind of stupid. Is there some logical proof it must be fundamental? Of course not. And as I pointed out, there are people with Nobels (does Lubos have one?) who work on this stuff (there are also many others in the mainstream in QM foundations). – user1247 May 30 '13 at 22:09

I don't know much about general relativity, so I have little or nothing to say about M. Sachs' work.
However, I'd like to make some remarks on some answers here where Sachs is criticized, and this is how the following is relevant to the question.

For example, I don't quite understand @R S Chakravarti's critique: "the Pauli matrices are not quaternions". It is well known that the Pauli matrices are closely related to quaternions (http://en.wikipedia.org/wiki/Pauli_matrices#Quaternions ), so maybe this critique needs some expansion/explanation.

I also respectfully disagree with some of @Dilaton's statements/arguments, e.g., "the only reasonable number system to describe quantum mechanics in are complex numbers". Dilaton refers to L. Motl's arguments; however, the latter can be less than watertight - please see my answer at QM without complex numbers. Maybe eventually we cannot do without complex numbers in quantum theory, but it looks like one needs more sophisticated arguments to prove that.

EDIT (05/31/2013): Dilaton requested that I elaborate on why I question the arguments that seem to prove that one cannot do without complex numbers in quantum theory. Let me describe the constructive results that show that quantum theory can indeed be described using real numbers only, at least in some very general and important cases. I'd like to strongly emphasize that I don't have in mind using pairs of real numbers instead of complex numbers - such use would be trivial.

Schrödinger (Nature (London) 169, 538 (1952)) noted that you can start with a solution of the Klein-Gordon equation for a charged scalar field in an electromagnetic field (the charged scalar field is described by a complex function) and get a physically equivalent solution with a real scalar field using a gauge transform (of course, the four-potential of the electromagnetic field will also be modified compared to the initial four-potential). This is pretty obvious, if you think about it. Schrödinger made the following comment: "That the wave function ... can be made real by a change of gauge is but a truism, though it contradicts the widespread belief about 'charged' fields requiring complex representation."

So it looks like either at least some arguments Dilaton mentioned (referred to) in his answer and comment are not quite watertight, or Schrödinger screwed up somewhere in his one- or two-page-long paper :-) I would appreciate it if someone could enlighten me as to where exactly he failed :-)

L. Motl offers some arguments related to spin. Furthermore, Schrödinger's approach has no obvious generalization to equations describing a particle with spin, such as the Pauli equation or the Dirac equation, as, in general, one cannot simultaneously make two or more components of a spinor wavefunction real using a gauge transform. Apparently, Schrödinger looked for such a generalization, as he wrote in the same short article: "One is interested in what happens when [the Klein-Gordon equation] is replaced by Dirac's wave equation of 1927, or other first-order equations. This ... will be discussed more fully elsewhere." As far as I know, Schrödinger did not publish any sequel to his note in Nature, but, surprisingly, his conclusions can indeed be generalized to the case of the Dirac equation in an electromagnetic field - please see my article http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf or http://arxiv.org/abs/1008.4828 (published in the Journal of Mathematical Physics).
I show there that, in a general case, 3 out of 4 components of the Dirac spinor can be algebraically eliminated from the Dirac equation, and the remaining component (which satisfies a 4th-order PDE) can be made real by a gauge transform. Therefore, a 4th-order PDE for one real wavefunction is generally equivalent to the Dirac equation and describes the same physics. Therefore, we don't necessarily need complex numbers in quantum theory, at least not in some very important and general cases.

I believe the above constructive examples show that the arguments to the contrary just cannot be watertight. I don't have time right now to consider each of these arguments separately.

Hi akhmeteli, can you elaborate a bit more exactly, rather than just saying it is not "watertight" generally: which of the arguments I explained do you disagree with, and why, from a physics (or mathematical) point of view? To me, the reasoning in the articles I linked to looks perfectly clear and right; I see no error therein. – Dilaton May 30 '13 at 9:40

@Dilaton: Please see the edit to my answer. – akhmeteli May 31 '13 at 6:42

Good question! (I have wondered the same.) I hold Mendel Sachs (deceased 05/05/12) to have been the most astute theoretical physicist since Einstein. His quaternion formalism was, no doubt, exactly what Einstein sought over his last thirty years, to complete GR. And its spinor basis induces me to suspect that Sachs' interpretation of QM, via Einstein's Mach principle, as a covariant field theory of inertia, is also right on the mark. Considering Sachs' volume of output, after much mulling, I finally had to conclude that he was "blacklisted," the establishment not permitting any discussion if they can have anything to do with it! I can see no other way that that quantity -- much less, quality -- of work could have been ignored.

Quantity of work is a poor measure of the work's value. – Guy Gur-Ari Aug 10 '12 at 1:46

For the users who downvote SJRubenstein's opinion, have the elegance to motivate your vote. He has honestly answered user1247's question and I see no reason to downvote him but to confirm his view that there are some fanatics out there willing to censor anyone who is not mainstream. – Shaktyai Sep 9 '12 at 6:20

To evade downvotes you should probably base your answer on physics arguments instead of sociological fuss and personal prejudices. Terms like "establishment" etc. are often used on the internet by crackpots and trolls advertising their own physically inconsistent personal pet theories, to attack professional physicists who know exactly what they are doing. – Dilaton May 28 '13 at 14:45

To compare this guy, who comes across as having a relatively weak understanding of actual physics, to Einstein is comical to me... – Killercam May 31 '13 at 7:36
In quantum physics, the Schrödinger technique, which involves wave mechanics, uses wave functions, mostly in the position basis, to reduce questions in quantum physics to a differential equation. Werner Heisenberg developed the matrix-oriented view of quantum physics, sometimes called matrix mechanics. The matrix representation is fine for many problems, but sometimes you have to go past it, as you're about to see.

One of the central problems of quantum mechanics is to calculate the energy levels of a system. The energy operator is called the Hamiltonian, H, and finding the energy levels of a system breaks down to finding the eigenvalues of the problem:

$$H|\psi\rangle = E|\psi\rangle$$

Here, E is an eigenvalue of the H operator. Here's the same equation in matrix terms:

$$\begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \end{pmatrix} = E\begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \end{pmatrix}$$

The allowable energy levels of the physical system are the eigenvalues E, which satisfy this equation. These can be found by solving the characteristic polynomial, which derives from setting the determinant of the above matrix to zero, like so:

$$\det(H - EI) = 0$$

That's fine if you have a discrete basis of eigenvectors — if the number of energy states is finite. But what if the number of energy states is infinite? In that case, you can no longer use a discrete basis for your operators and bras and kets — you use a continuous basis.

Representing quantum mechanics in a continuous basis is an invention of the physicist Erwin Schrödinger. In the continuous basis, summations become integrals. For example, take the following relation, where I is the identity matrix:

$$\sum_n |\phi_n\rangle\langle\phi_n| = I$$

It becomes the following:

$$\int |r\rangle\langle r|\, d^3r = I$$

And every ket can be expanded in a basis of other kets, like this:

$$|\psi\rangle = \sum_n |\phi_n\rangle\langle\phi_n|\psi\rangle$$

Take a look at the position operator, R, in a continuous basis. Applying this operator gives you r, the position vector:

$$R|r\rangle = r|r\rangle$$

In this equation, applying the position operator to a state vector returns the locations, r, at which a particle may be found. You can expand any ket in the position basis like this:

$$|\psi\rangle = \int |r\rangle\langle r|\psi\rangle\, d^3r$$

And this becomes

$$|\psi\rangle = \int \psi(r)\,|r\rangle\, d^3r$$

Here's a very important thing to understand:

$$\psi(r) = \langle r|\psi\rangle$$

is the wave function for the state vector — it's the ket's representation in the position basis. Or in common terms, it's just a function where the quantity $|\psi(r)|^2\,d^3r$ represents the probability that the particle will be found in the region $d^3r$ centered at r.

The wave function is the foundation of what's called wave mechanics, as opposed to matrix mechanics. What's important to realize is that when you talk about representing physical systems in wave mechanics, you don't use the basis-less bras and kets of matrix mechanics; rather, you usually use the wave function — that is, bras and kets in the position basis. Therefore, you go from talking about $|\psi\rangle$ to talking about $\langle r|\psi\rangle$. This wave function is just a ket in the position basis. So in wave mechanics, $H|\psi\rangle = E|\psi\rangle$ becomes the following:

$$\langle r|H|\psi\rangle = E\langle r|\psi\rangle$$

You can write this as the following:

$$H\psi(r) = E\,\psi(r)$$

But what is $H\psi(r)$? The Hamiltonian operator, H, is the total energy of the system, kinetic ($p^2/2m$) plus potential ($V(r)$), so you get the following equation:

$$\left(\frac{p^2}{2m} + V(r)\right)\psi(r) = E\,\psi(r)$$

But the momentum operator is

$$p = -i\hbar\nabla$$

Therefore, substituting the momentum operator for p gives you this:

$$\frac{1}{2m}\left(-i\hbar\nabla\right)\cdot\left(-i\hbar\nabla\right)\psi(r) + V(r)\,\psi(r) = E\,\psi(r)$$

Using the Laplacian operator, $\nabla^2$, you get this equation:

$$-\frac{\hbar^2}{2m}\nabla^2\psi(r) + V(r)\,\psi(r) = E\,\psi(r)$$

You can rewrite this equation as the following (called the Schrödinger equation):

$$\nabla^2\psi(r) + \frac{2m}{\hbar^2}\left(E - V(r)\right)\psi(r) = 0$$

So in the wave mechanics view of quantum physics, you're now working with a differential equation instead of multiple matrices of elements. This all came from working in the position basis, $\psi(r) = \langle r|\psi\rangle$. When you solve the Schrödinger equation for $\psi(r)$, you can find the allowed energy states for a physical system, as well as the probability that the system will be in a certain position state.
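The two pictures connect directly on a computer: discretizing the 1D Schrödinger equation on a grid turns H back into a finite matrix, and the allowed energies really are its eigenvalues. A minimal numerical sketch (illustrative units with ħ = m = ω = 1 and a harmonic potential chosen as an arbitrary example; the lowest eigenvalues should approach n + 1/2):

```python
import numpy as np

# 1D grid (hbar = m = omega = 1, illustrative units)
N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy: T = -(1/2) d^2/dx^2 by central differences -> tridiagonal
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Potential energy: harmonic oscillator V(x) = x^2 / 2 on the diagonal
V = np.diag(0.5 * x**2)

H = T + V                      # the Hamiltonian as a plain matrix
E = np.linalg.eigvalsh(H)      # "energy levels = eigenvalues"
print(E[:4])                   # ~ [0.5, 1.5, 2.5, 3.5]
```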
Note that, besides wave functions in the position basis, you can also give a wave function in the momentum basis, or in any number of other bases.
In physics, in the past, complex numbers were used only to remember or simplify formulas and computations. But after the birth of quantum physics, it was found that a thing as real as "matter" itself had to be described by complex wave functions, and there is no way to describe it using only real numbers.

In mathematics, in real analysis, there are examples like the function $f(x)=\frac{1}{1+x^2}$: why does this function not have the "smoothness" of the exponential function, polynomials, and the sine and cosine functions — why does its power series have a radius of convergence equal to 1 despite the fact that the function is infinitely differentiable? You can't see the reality of this function until you see it through the field of complex analysis, where you can observe that $f$ is not that smooth because it has 2 singularities in the complex plane.

I am just asking for examples like this, such that when you see it in the narrow "window" of real analysis, you can't see the "reality" until you view it from the window of complex analysis. I am just starting to self-learn complex analysis and I find it more natural than real analysis; it tells you the "truth" behind a lot of things.

The simplest answer probably is because $\mathbb{C}$ is the algebraic closure of $\mathbb{R}$. – Arkamis Jun 24 '14 at 21:55

@MikeMiller Yes I know that; I am talking about the smoothness of the exponential function with an infinite radius of convergence. – user144542 Jun 24 '14 at 21:56

@user144542 You are certainly not the only one who feels complex analysis is more natural. Here is a quote from Hadamard: "The shortest path between two truths in the real domain passes through the complex domain." – EuYu Jun 24 '14 at 22:04

This question gives the impression that classical physics has no questions in which the natural language is complex analysis. I am no expert in either field, but this seems almost definitely false. (i.e. the cotangent bundle of Euclidean space has a canonical complex structure) – PVAL Jun 24 '14 at 22:53

This is absolutely irrational. It is only natural that everything in quantum mechanics is integral. So why on earth is it so complex, to the point that some people would call it outright imaginary??? Oh well, it's a good thing that the name of a mathematical object has little to do with the natural meaning of the word. – Asaf Karagila Jun 24 '14 at 23:52

9 Answers

Accepted answer:

Not all real degree-$n$ polynomials have $n$ roots (counting multiplicity), because some of the roots are complex. In the real domain a matrix can have no eigenvalues, e.g. a 2-dimensional rotation matrix, but any real matrix has a complex eigenvalue. These are manifestations of $\mathbb{C}$, unlike $\mathbb{R}$, being algebraically closed, i.e. of every polynomial equation having a solution.

In the real domain $\sqrt{x}$ and $\ln{x}$ are only defined for positive $x$, because for negative $x$ the value is a complex number, and it is not unique.

In the real domain exponents and trigonometric functions are completely different functions, but in the complex domain they are related by the simple Euler formulas. The same goes for logarithms and inverse trigonometric functions. This is the main reason why identities for hyperbolic functions are almost the same as the familiar trigonometric identities.

Many definite integrals of functions that do not have elementary antiderivatives can be computed in elementary terms by extending the path of integration to the complex plane and using residues, e.g.
$\int_0^\infty\frac{\ln x}{(1+x^2)^2}\,dx=-\frac{\pi}{4}$. More generally, integral and series representations of many real functions can be converted into each other because these functions extend into the complex plane and contour integrals there reduce to sums over residues. The Riemann zeta function is a typical beneficiary.

These manifest another advantage of complex analysis over the real one. Many commonly used real functions extend to holomorphic functions in the complex plane, and for holomorphic functions the calculus tools are much stronger than for smooth ones, which is what real analysis mostly treats them as.

In the real domain ellipses and hyperbolas are different types of curves, but in the complex plane they are related by a rotation of axes, i.e. they are the 'same' (more precisely, we are looking at two different projections of the same complex curve). In a similar way spherical and hyperbolic geometries are related by a complex rotation. The Schrödinger equation of quantum mechanics and the heat equation of classical physics are also related by a complex rotation called the Wick rotation. The path-integral interpretation of quantum mechanical solutions can be made precise using this relation and the Feynman–Kac formula.

Heaviside developed operational calculus for solving ordinary differential equations with constant coefficients by treating the time derivative as a 'variable' $p$ and writing solutions in terms of symbolic 'functions' of it. It turned out that the magic worked because $p$ is in fact a complex variable, and Heaviside's symbolic solutions can be converted into real ones by taking the inverse Laplace transform, which is a contour integral in the complex plane.

Harmonic functions, solutions to the Laplace equation, have many nice analytic properties, like being sums of convergent power series, attaining extrema at the boundary of their domains, being equal at any point to the average of their values on any circle centered at it, etc. The underlying reason is that harmonic functions are exactly the real and imaginary parts of holomorphic functions. If the potential of a vector field is a harmonic function $\varphi$, then its flow lines are level curves of another harmonic function $\psi$, exactly the one that makes $\varphi+i\psi$ holomorphic. Solution formulas for the Dirichlet boundary problem for the Laplace equation in some special domains are reflections of the Cauchy integral formula for holomorphic functions, which works in 'any' domain.
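The first contour-integral example above is easy to sanity-check numerically (a sketch only; the exact value $-\pi/4$ is what the residue calculation delivers):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.log(x) / (1 + x**2)**2

# split at x = 1 to help the quadrature near the integrable
# logarithmic singularity at 0
val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
print(val, -np.pi / 4)   # both ~ -0.7853981...
```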
A notable example of how complex analysis reveals something deep about physics is found in spectral theory. There is a type of conservation law where all of the singularities in the finite plane are related to properties of the singularity at $\infty$. For example, if you have a function that is holomorphic with a finite number of poles in the complex plane, then the sum of the residues can be computed by looking at the residue at $\infty$. Extensions of these ideas allow one to prove the completeness of eigenfunctions for an operator $A$ by replacing integrals around the singularities of $(\lambda -A)^{-1}$ with one residue at $\infty$ computed as $\lim_{\lambda\rightarrow\infty}\lambda(\lambda-A)^{-1}=I$.

For example, if $A=\frac{1}{i}\frac{d}{dt}$ on $L^{2}[0,2\pi]$, on the domain $\mathcal{D}$ consisting of periodic absolutely continuous functions $f$, then $(\lambda -A)^{-1}$ is defined everywhere except at $\lambda = 0,\pm 1,\pm 2,\pm 3,\cdots$, where it has simple poles with residues $$ R_{n}f = \frac{1}{2\pi}\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\, e^{in\theta}. $$ These residues are projections onto the eigenspaces associated with the values of $\lambda$ where $(\lambda -A)^{-1}$ is singular. Though the argument is very delicate, one can replace the sum of all these residues with one residue at $\infty$ evaluated as $\lim_{v\rightarrow\pm\infty}(u+iv)(u+iv-A)^{-1}f=f$, which leads to a proof of the $L^{2}$ convergence of the Fourier series for an arbitrary $f \in L^{2}[0,2\pi]$: $$ f = \sum_{n=-\infty}^{\infty}\frac{1}{2\pi}\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\, e^{in\theta}. $$ This type of argument can be used to prove the convergence of Fourier-type transforms mixed with discrete eigenfunctions, too. And these arguments, where singularities of the resolvent in the finite plane are traded for a single residue at $\infty$, can give pointwise convergence results that go well beyond functional analysis and $L^{2}$ theory. Such methods tie in beautifully with how Dirac viewed an observable as some kind of composite number with lots of possible values determined from the singularities of $\frac{1}{\lambda -A}$, which he then used to generalize the Cauchy integral formula $$ p(A)=\frac{1}{2\pi i}\oint_{C} p(\lambda)\frac{1}{\lambda -A}\,d\lambda. $$ In this strange setting of complex analysis, the results of quantum eigenvector expansion are compelling and almost natural, even though the proofs are not so simple.

If a function is analytic on the upper half plane and goes to zero fast enough at infinity, then the Kramers-Kronig relations hold. So if $x\rightarrow f(x)=f_1(x)+if_2(x)$ is our function, then $$f_1(x)=\frac{1}{\pi}\mathcal{P}\int_{-\infty}^\infty\frac{f_2(x')}{x'-x}dx'$$ and $$f_2(x)=-\frac{1}{\pi}\mathcal{P}\int_{-\infty}^\infty\frac{f_1(x')}{x'-x}dx'$$ so you can compute the real part of $f$ from its imaginary part and vice versa. In physics, the impulse response of a physical system frequently satisfies the preconditions for the relations to hold. For example, in optics the real part of the impulse response is related to the refractive index and the imaginary part is related to attenuation. This makes it possible to compute the refractive index at varying frequencies just from knowing the attenuation at different frequencies. It's almost magic that these things are connected so straightforwardly in this way, but it's hard to see if you don't go complex. Other examples include the way audio engineers can read off the phase delay of a component (modulo certain preconditions) from its Bode plot, and similar phenomena in the study of oscillating mechanical systems and in particle physics.

One example is two-dimensional electrostatics. The potential in a domain $D$ without electric charges satisfies the Laplace equation $\varphi_{xx}+\varphi_{yy}=0$. From the 'real' point of view, its solutions for

• two infinite uniformly charged planes
• two infinite coaxial uniformly charged cylinders

have nothing to do with each other. However, from the 'complex' point of view one is obtained from the other by the conformal transformation $w(z)=e^z$, which maps the strip $a<\Re z<b$ onto the annulus $e^a<|z|<e^b$.
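The Kramers–Kronig pair quoted above can likewise be checked numerically for a concrete causal response. A sketch, assuming the Lorentzian $f(x)=1/(x_0-x-i\gamma)$ as the test function (its pole sits in the lower half-plane, so the analyticity hypothesis holds); scipy's `weight='cauchy'` mode computes exactly the principal-value integrals that appear:

```python
import numpy as np
from scipy.integrate import quad

x0, gamma = 1.0, 0.3          # illustrative pole position and width

def f1(x):  # real part of 1/(x0 - x - i*gamma)
    return (x0 - x) / ((x0 - x)**2 + gamma**2)

def f2(x):  # imaginary part
    return gamma / ((x0 - x)**2 + gamma**2)

# f1(x) = (1/pi) P.V. integral of f2(x')/(x' - x) dx';
# quad with weight='cauchy' returns the principal value of g(t)/(t - wvar)
def kk_real(x, cut=200.0):
    pv = quad(f2, -cut, cut, weight='cauchy', wvar=x, limit=400)[0]
    return pv / np.pi

for x in (0.0, 0.5, 1.0, 2.0):
    print(x, kk_real(x), f1(x))   # the two columns should agree
```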
There are some very nice theorems that are relatively easy to prove and that only hold in complex analysis. For example:

• Complex differentiability: If a function is complex differentiable once and the derivative is continuous, then it is infinitely differentiable. Clearly not true in real analysis.
• Power series: Not only are they infinitely differentiable, but one can expand them locally into a convergent power series, i.e. they are analytic. This fails completely in real analysis; there are infinitely differentiable functions that are nowhere analytic.
• Cauchy integral formula: The values of a function holomorphic on a domain and continuous up to its boundary only depend on its values on the boundary, and can be recovered explicitly by a single integration. This can be used to solve boundary value problems for the Laplace equation.
• Liouville's theorem: If a function is differentiable and bounded on $\mathbb C$, then it is constant. A very strong property, in my opinion, but an easy corollary of the Cauchy integral formula. And a useful tool for proving other theorems; it gives arguably the simplest proof of the fundamental theorem of algebra.
• Maximum modulus principle: The absolute value of a function holomorphic on a domain attains its maximum on the boundary. This gives us an easy way to find a priori bounds for holomorphic functions or to prove that they are constant, among other things.

A function holomorphic on $\mathbb C$ does not have to be constant; you forgot to add the "bounded" part of Liouville's theorem. – Patrick Da Silva Jun 24 '14 at 23:54

"If a function has a derivative, it is not continuous." What...? You mean the derivative is not necessarily continuous, I take it? – Pedro Tamaroff Jun 25 '14 at 0:12

I don't see why a Liouville-like result should be intuitively expected. Philosophically though it makes sense, since (entire) holomorphic functions are in some way the natural generalization of polynomials, and in this light the result holds. However if you don't look at it through this rather narrow lens, it isn't a result we should expect. – Cameron Williams Jun 25 '14 at 0:56

I also came to the same impression as you over time, about complex analysis being "more real." Analytic number theory uses $L$-functions for many arithmetic formulas and theorems. For instance:

• Prime number theorem: Based on numerical data, Legendre conjectured $\pi(x)\approx\frac{x}{A\log x+B}$, and it wasn't until half a century later that Chebyshev and Riemann considered the Riemann zeta function and contour integration, which, when extended by Hadamard and de la Vallée-Poussin, led to the first proof of $\pi(x)\sim\frac{x}{\log x}$. Selberg and Erdős were able to come up with an "elementary" proof about a century later. While there are many proofs, nothing is as special or insightful as the connection between prime counting and the zeros of the Riemann zeta function.
• Dirichlet's theorem: The effective version of this states that residues of primes are equidistributed among the units modulo any number. While Selberg gave an elementary proof (I am not sure if this was for the effective or existential version) and there is a digestible proof for $p\equiv1$ mod $n$ (using the splitting theory of primes in cyclotomic fields), the textbook proof uses Dirichlet $L$-functions.
(In my opinion, there is no reason to narrow the discussion down to Dirichlet's theorem when we can talk about the more general and beautiful Chebotarev density theorem instead - which also uses $L$-functions in the same fashion for its proof - other than that it is not as well-known or accessible to readers.)

I think I've read before, or at least got the impression reading, that originally people were suspicious of non-elementary proofs in number theory, or at least preferred elementary proofs for being more "direct," until it became clear that complex analysis actually got to the heart of things very well. Or maybe it wasn't that people originally thought that, but some students believe this when they first start to learn number theory without first learning, and thus appreciating, complex analysis. At any rate, my memory is more definitive that I've had to reassure a very talented but at the time inexperienced user here on MSE that Dirichlet characters were indeed natural for Dirichlet's theorem. Also, Andrew Schlafly hilariously disputes the superiority of complex analysis proofs (but nobody takes Conservapedia seriously).

Based on my experience, I believe the two main reasons complex analysis is "more real" are the tools of contour integration and the orthogonality relations à la harmonic analysis.

An elementary proof of Dirichlet's theorem (in the equidistribution sense, not just infinitude of primes in a congruence class) was given independently at the same time by Selberg and Harold N. Shapiro. See Canad. J. Math. 2 (1950) 66-78 for Selberg and Ann. of Math. 52 (1950) 217-230 and 231-243 for Shapiro. Selberg had earlier given a proof of Dirichlet's theorem in its existence form, without equidistribution, in Ann. of Math. 50 (1949) 297-304. – KCd Jun 25 '14 at 4:40

You shouldn't mention that nonsense article, because complex analysis does not rest on an assumption of the existence of $i$. Rather, we could explicitly construct $\mathbb{R}[t]/\langle t^2+1 \rangle$ and prove that it is a field, and then prove results in it. Readers unfamiliar with field extensions may more easily believe the article's nonsense than the counter-statements. – user21820 Nov 22 '14 at 7:04

@user21820 It's not my responsibility to protect potential gullible readers from mere exposure to nonsense. – blue Jan 7 at 1:54

Enumerative combinatorics is a good example. If you solve some problem using generating functions, then the solution is given as the coefficient of $x^n$ of some complicated function. The asymptotic behavior of this can usually be extracted easily by writing it as a contour integral and then either considering the dominant contribution from the residues inside (or outside) the contour, or doing a steepest-descent calculation.

Seen from an algebraic perspective, the complex numbers are at the peak. Going 'downwards' you lose: multiplicative inverses -> additive inverses. Going 'upwards' you lose: commutativity -> associativity.

Some things about complex analysis seem special when in fact those results are better viewed through the lens of vector calculus. For instance, any holomorphic function can be viewed instead as a vector field with vanishing divergence and curl.
Consider the Cauchy integral theorem and Stokes' theorem: $$\begin{align*}\text{Cauchy integral theorem: }& \oint_C f(z) \, dz = 0, \quad \left(\frac{\partial f}{\partial \bar z} = 0 \right)\\ \text{Stokes' theorem: } & \oint_C F \cdot d\ell = 0, \quad (\nabla \times F = 0) \\ \text{Stokes' theorem corollary: }& \oint_C F \times d\ell = 0, \quad (\nabla \cdot F = 0) \end{align*}$$ Now, you may be tempted to say that the complex analysis version of this result is more elegant because it's one line instead of two. You can package the vector calculus result into one line with the proper tools (Clifford algebra) as well, though, and more importantly, when you properly identify the derivatives as exterior and interior derivatives, the result generalizes to arbitrarily high-dimensional real vector spaces.

Similarly, meromorphic functions correspond to vector fields with point sources, but that's usually where complex analysis stops, while vector calculus has the tools to treat vector fields with arbitrary sources.

That quantum mechanics is written in terms of complex numbers is, in my opinion, a pedagogical mistake. It would be clearer to do quantum-style probability in terms of vectors in $\mathbb R^2$; of course, the math will ultimately come out the same, but I think the picture of things would be considerably cleaner.

Complex analysis also puts a lot of emphasis on complex differentiability, but as you can see from the above argument about the CIT and Stokes' theorem, it's actually differentiation with respect to $\bar z$ that relates very directly to vector calculus. Complex differentiation is utterly insignificant when considering more general applications.
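The holomorphic-to-divergence-free/curl-free correspondence invoked in this last answer can be checked symbolically: if $f=u+iv$ satisfies the Cauchy–Riemann equations, the associated Pólya-type field $F=(u,-v)$ has zero divergence and zero (scalar) curl. A sympy sketch, with $f(z)=z^2$ chosen as an arbitrary example:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

f = z**2                                  # any holomorphic example works
u = sp.re(sp.expand(f))
v = sp.im(sp.expand(f))

# Cauchy-Riemann: u_x = v_y, u_y = -v_x
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
assert sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0

# Polya-type field F = (u, -v): zero divergence and zero scalar curl
div_F = sp.diff(u, x) + sp.diff(-v, y)
curl_F = sp.diff(-v, x) - sp.diff(u, y)
print(sp.simplify(div_F), sp.simplify(curl_F))   # 0 0
```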
Consciousness causes collapse

Consciousness causes collapse is the claim that observation by a conscious observer is responsible for the wavefunction collapse in quantum mechanics. It is an attempt to solve the Wigner's friend paradox by asserting that collapse occurs at the first "conscious" observer. Supporters assert this is not a revival of substance dualism, since (in a ramification of this view) consciousness and objects are entangled and cannot be considered as separate. Opponents assert that it is unfalsifiable, that it does not simplify our physical understanding of the universe, and that it is therefore scientifically uninteresting.

It has been claimed that the theory meshes well with ancient Eastern mysticism and philosophy, including that of Hinduism, Taoism, and Buddhism, which includes a belief in the transitory, interconnected nature of all things and the illusion of separation of thought and existence. This is one of the major themes of the book The Dancing Wu Li Masters. It also meshes well with the views of the New Thought movement.

Esse est percipi ("to be is to be perceived"): the idea of consciousness somehow being related to the creation of reality was first proposed by Bishop Berkeley. With the publication of Mathematische Grundlagen der Quantenmechanik (Mathematical Foundations of Quantum Mechanics), however, it was von Neumann who became the first person to hint that quantum theory may imply an active role for consciousness in the process of reality creation. Others, such as Walter Heitler, Fritz London, Edmond Bauer, and Eugene Wigner, further carried von Neumann's argument to a claimed logical conclusion: that consciousness-created reality is the inevitable outcome of von Neumann's picture of quantum theory.

There are many variations and differences of opinion on how the collapse becomes involved with functions of the brain and our perception of reality. But among the names that have expressed the belief that a deep connection exists between mind and quantum physics are Henry Stapp, Freeman Dyson, John Carew Eccles, Brian Josephson,[1] David Bohm,[2] Bernard d'Espagnat, and Roger Penrose. Wigner concluded from his own arguments about symmetry in physics that the action of matter upon mind must give rise to, as he put it, a "direct action of mind upon matter".[3]

Among the more recent followers one can find Evan Harris Walker, Fred Alan Wolf, William A. Tiller, John Hagelin, Stuart Hameroff, Bernard Baars, David Chalmers, Amit Goswami, Russell Targ, Nick Herbert, Jeffrey M. Schwartz, Menas Kafatos, and the Princeton Engineering Anomalies Research Lab in New Jersey. The British philosopher-theologian Keith Ward is also a major proponent of this idea.
To say the least, numerous celebrities of science have at some point or another hinted at a belief in the existence of some form of connection between contemporary physics and metaphysical concepts related to consciousness, mind, our role as the observer of reality, or a deeper meaning of reality itself:

"The distress which the reorientation of Quantum Mechanics caused continues to the present day. Basically, physicists have suffered a severe loss: their hold on reality." – Bryce DeWitt

"Some physicists would prefer to come back to the idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist independently of whether we observe them. This however is impossible." – Werner Heisenberg

This sentiment was similarly echoed by the French physicist Bernard d'Espagnat: "The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with Quantum Mechanics and with facts established by experiment."[4]

Criteria for consciousness

Here, the process of "measurement" in quantum mechanics is attributed (directly, indirectly, or even partially) to consciousness itself. However, it is not explained by this theory which animals, living creatures, or objects have consciousness, and thus the ability to cause the collapse of the wave function ("Can undergraduates collapse the wavefunction?"). It is also not clear whether measuring devices might be considered conscious, though generally measuring devices are considered simply a "chain of observations" that only ends at a conscious entity. Some even suggest that some beings have a "higher consciousness" and therefore more capability to collapse the wavefunction, whereas others believe all conscious entities have an equal capability. Others believe that "higher consciousness" is inherent in all, but some have tapped into it more fully.[citation needed]

However, regarding the relevance of the theory to animals, some tests have been performed. A study performed by Chester Wildey at the University of Texas at Arlington sought to investigate the Hameroff/Penrose "quantum mind" hypothesis in non-human animals by testing earthworms in 231 trials for presentiment, using skin conductance as a measure. Correlations were reported.[5] Similarly, in 2005, physicist Johann Summhammer of Vienna University of Technology proposed that because quantum entanglement is everywhere in nature, it is conceivable that evolution has taken advantage of it.[6]

Most physicists regard this theory as a non-scientific concept, claiming that it is experimentally unfalsifiable and that it introduces unnecessary elements into physics rather than simplifying.

Notes and references

1. Josephson B.D., Pallikari-Viras F. "Biological Utilisation of Quantum Nonlocality", Foundations of Physics, 21, 197-207.
2. In the Bohmian interpretation the wave function does not collapse at all. However, to see David Bohm's ideas on how quantum theory suggests the existence of a deeper reality than the one presented by our senses, using holographic metaphors and entanglement, see: David Bohm. 1980. Wholeness and the Implicate Order. London. Routledge Classics.
3. Wigner E.P. 1967. Symmetries and Reflections. Cambridge MA. MIT Press. pp. 171-184.
4. Bernard d'Espagnat, "The Quantum Theory and Reality", Scientific American, Nov. 1979, 158-181.
5. C. Wildey. "Impulse Response of Biological Systems." Department of Electrical Engineering.
UT-Arlington. Thesis 2001. Also mentioned in: Dean Radin. Entangled Minds. 2006. ISBN-13 978-1-4165-1677-4, pp. 170-171.
6. Johann Summhammer, "Quantum Cooperation of Insects". Vienna University of Technology, Atominstitut, Stadionallee 2, A-1020 Wien, Austria.
I'm still in high school, and while I can't complain about the quality of my teachers (all of them have done at least a bachelor's, some a master's), I usually am cautious about believing what they say straight away. Since I'm interested quite a bit in physics, I know more about it than other subjects and I spot things I disagree with more often, and this is the most recent thing: while discussing photons, my teacher made a couple of statements which might be true but sound foreign to me:

• He said that under certain conditions, photons have mass. I didn't think this was true at all. I think he said this to avoid confusion regarding $E=mc^2$; however, in my opinion it only adds to the confusion, since objects with mass can't travel at the speed of light, and light does have a tendency to travel at the speed of light. I myself understand how photons can have a momentum while having no mass because I lurk this site, but my classmates don't.

• He said photons don't actually exist, but are handy to envision. This dazzled my mind. Even more so since he followed this statement by explaining the photoelectric effect, which to me seems like a proof of the existence of photons as the quantum of light. He might have done this to avoid confusion regarding the wave-particle duality.

This all seems very odd to me and I hope some of you can clarify.

This question may be useful physics.stackexchange.com/q/34067 even though it is closed. – Jorge Feb 7 '13 at 19:28

Photons do have a mass inside a superconductor. Which is why, inside a superconductor, the electromagnetic force becomes short-range. Perhaps that's what your teacher meant. – Dmitry Brant Feb 7 '13 at 20:07

@DmitryBrant I know this is to some extent just a matter of semantics, but I personally feel it's somewhat misleading to call the effective mass of a photon inside of a superconductor its mass. – joshphysics Feb 7 '13 at 21:10

Also Ylyk, you might be interested to read the section in en.wikipedia.org/wiki/Photon#Experimental_checks_on_photon_mass that talks about experimental checks on photon mass. – joshphysics Feb 7 '13 at 21:12

I would definitely ignore anything that your teacher or classmates have to say about physics that you cannot find in a good textbook. Until you reach topics of quantum gravity and quantum information theory, these things are well understood within the physics community (although not by all its members). I would also ignore most media releases until you can sift the good from the bad. A good list of freely available books can be found at http://physics.stackexchange.com/questions/6157/list-of-freely-available-physics-books – Hal Swyers Feb 8 '13 at 11:45

5 Answers

Accepted answer:

1. Photons are massless. This should not cause confusion with $E=mc^2$, because the expression for the relativistic energy of the photon is $E = h \nu$, where $\nu$ is the photon's frequency and $h$ is Planck's constant. You can also understand the relativistic energy of the photon by noting that $E = pc$, where $p=\hbar k$ is the magnitude of its momentum, and photons possess momentum, as you point out. $k$ is the photon's wavevector and $\hbar = h/2\pi$ is the reduced Planck constant. To tie this all together, we have the formula $E=\sqrt{m^{2}c^{4}+p^{2}c^{2}}$.

2. Photons exist! As you point out, the existence of photons has physical consequences that can be measured.
Perhaps, as you also mention, your teacher is trying to insist that you resist thinking of photons purely as particles (which is probably not such a bad idea), but the statement "photons don't exist," in my opinion, should be considered just as fallacious as "the keyboard on which I'm typing doesn't exist."

You're being prudent by taking everything your teacher says with a large grain of salt. In my experience in physics (heck, in life as a whole), it's good to take everything that anyone ever says with a grain of salt (including my response, for that matter).

It depends on the definition of mass, obviously. Photons have zero rest mass. But physicists working in relativity also use "mass" as a synonym of "energy". – Bzazz Feb 7 '13 at 22:48

@Bzazz Nuclear and particle physicists use the word "mass" to mean only the rest mass. Every time. While there is no difficulty in defining the "relativistic mass", the concept obscures and confuses rather than clarifying. – dmckee Feb 7 '13 at 22:57

Yes, that's why I specified physicists working on relativity. In all the courses of GR I had, you put c=1 and talk of mass and energy equivalently. Of course, one seldom talks about the mass of a photon, because we are more familiar with speaking of photon energy. But if we want to look for "the mass of the photon" as in the question, this is what comes to my mind first. Remember that relativistic mass is effectively mass, in the sense that it also experiences gravity. – Bzazz Feb 7 '13 at 23:05

@Bzazz I think this sort of statement is the type of corruption of physics that we should be concerned about. Although any object with energy can create curvature, photons are bosonic and do not directly interact with the Higgs boson, and are therefore massless. See some good answers at physics.stackexchange.com/questions/23161/… – Hal Swyers Feb 8 '13 at 10:57

I definitely agree with @HalSwyers on this one. – joshphysics Feb 8 '13 at 16:22

He is doing a good job trying to communicate the weirdness of photons, but a poor one at being consistent. It's difficult to communicate how strange they are without using the relationships you described. First of all, photons have no mass. At rest. The concept of describing a photon at rest is a bit weird, even, as there is no such thing. Once you're talking about photons in motion, it is tempting to say they have mass, but most physicists these days don't speak of it this way. Rather, it's best to say a photon has ENERGY. While people like to say things can have relativistic mass, this is incorrect; mass is a constant: the mass a thing has never changes no matter how fast you travel; rather, the energy needed to accelerate it increases (inertia). So a photon has no mass, but it does have momentum. This is the fascinating thing about light, and it leads to the fact that momentum is not always related to mass, as you mentioned. Examine $E=mc^2$ in more detail, and the correct equation actually is $E^2 = m^2 c^4 + p^2 c^2$. As photons have momentum related to their energy/frequency, this is fine.

The best way to think of light is as photons. Light is a photon, that is the fact, BUT light waves are only a model. The appearance of wave phenomena is because, quantum mechanically, light interacts probabilistically, and this nature allows it to display wave-like properties. The science of quantum electrodynamics examines this strange behavior. For now, think of light as a photon and a wave; it allows everyday behavior to be modeled well and is perfectly fine.
But in reality light is a photon particle (this is why the photoelectric effect works) that, when described using Schrödinger's equations (a description of probabilities), can be transformed into a description of a wave of electromagnetism.

There are two reasons why a photon can't be described by the Schrödinger equation: a single-particle theory for a photon is nonsense, and Schrödinger's equation is non-relativistic. – Jorge Feb 7 '13 at 20:08

For modeling interactions between photons to show that they display wave characteristics, Schrödinger's equation works as a good approximation to demonstrate that his equations and Maxwell's are equivalent. Nonetheless, like I mentioned, QED manages it better: see Feynman's books for good explanations without equations. – Eric_ Feb 7 '13 at 20:41

Eric, Schrödinger's equation is explicitly built on the Newtonian relationship between kinetic energy, momentum and mass (the rest mass, for those that insist). It really, really doesn't do to use it with photons. If you must do non-field-theoretical QM with light, use the Klein-Gordon equation. However, you've got the right relationship in your post: $E^2 = m^2c^4 + p^2c^2$ is the correct answer. Don't detract from it by using the wrong wave equation. – dmckee Feb 7 '13 at 22:45

In any case, welcome to Physics.SE. We have the MathJax rendering engine active on the site, which lets you write LaTeX-like math inside pairs of $'s (for inline) or $$ (for block typesetting). I've done this post for you. – dmckee Feb 7 '13 at 22:47

I have sat in a college class and watched step by step as a Schrödinger equation is transformed into a wave equation with a form identical to Maxwell's. The point is not that it explains light, but that probabilistic quantum mechanics explains how a wave nature can appear for ANY and ALL particle phenomena. There is wave-particle duality in everything, not just light. – Eric_ Feb 8 '13 at 0:00

Regarding your two points:

He said that under certain conditions, photons have mass.

Massless particles move at the speed of light $c$ in vacuum. By his statement your teacher may have been alluding to the fact that photons travel at speeds slower than $c$ when they travel through media, like glass for example. However, I would rather phrase this as something like "in the transmission of a photon through a medium, the photon's transit time through that medium is such that it travels as if it had a mass." The travel of photons through media is a rather complex affair, which I don't fully understand, involving interactions with charged particles in the medium and quasiparticle states, and I'm not sure to what extent the incident photon even retains its identity whilst in the medium (maybe that's another question).

There is no question that photons exist. Particle physicists deal with "hard" (i.e. high-energy) photons, which behave like particles - they scatter off other charged particles, etc. Good evidence that "soft" (low-energy) photons exist comes from antibunching experiments.

1) I would refer your high school teacher to some of the good answers found in this Physics Stack Exchange question. The answer with the highest votes is really good and refers to the electroweak theory, which governs the electroweak sector of the Standard Model. General relativity respects special relativity, and therefore respects that the invariant mass of the photon is zero, since the photon is described by a null-like vector in its rest frame.
In particle physics, as discussed in the links above, the current understanding is that mass is a measure of the relative interaction strength of a particle with the Higgs field as mediated by the Higgs boson. The photon does not directly interact with the Higgs boson, and therefore has no mass.

2) As far as visualizing the photon, I would venture the easiest way is to think in terms of classical EM theory (which is a gauge theory, by the way), where we consider the orthogonal oscillating electric and magnetic fields as representing the photon.

$E = mc^2$ is a popular formula but is only valid in the special case when "the total additive momentum is zero for the system under consideration".$^1$ The more general "energy-momentum relation" is $E^2 = (mc^2)^2 + (pc)^2$. (Also, here's a neat little video for your classmates ;) -> http://www.youtube.com/watch?v=NnMIhxWRGNw )

You can read more about this and the correct four-vector notation under: $^1$ http://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence

Even today, in the time of quantum optics, there are still quite a lot of papers that try to grasp what a "photon" is, e.g.:

1. What is a photon - David Finkelstein
2. Light reconsidered - Arthur Zajonc
3. The concept of the photon - revisited - Ashok Muthukrishan, Marlan O. Scully, M. Suhail Zubairy

and many more, all together in this nice review: http://www.sheffield.ac.uk/polopoly_fs/1.14183!/file/photon.pdf

Or in the words of Roy Glauber: "A photon is what a photodetector detects"
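Putting numbers into the relations quoted in these answers may help (a sketch; the 500 nm wavelength is just an arbitrary example of visible light). With $m=0$, the general relation $E^2=(mc^2)^2+(pc)^2$ collapses to $E=pc=h\nu$:

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m / s

lam = 500e-9            # example: a 500 nm (green) photon
nu = c / lam            # frequency
E = h * nu              # photon energy E = h * nu
p = h / lam             # photon momentum p = h / lam = E / c

# general relation E^2 = (m c^2)^2 + (p c)^2, with m = 0 for the photon
m = 0.0
E_general = np.sqrt((m * c**2)**2 + (p * c)**2)

print(E, E_general)             # both ~ 3.97e-19 J
print(E / 1.602176634e-19)      # ~ 2.48 eV
```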
Hamilton–Jacobi–Einstein equation

Further information: ADM formalism

In general relativity, the Hamilton–Jacobi–Einstein equation (HJEE) or Einstein–Hamilton–Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s by A. Peres[1] in 1962 and others. It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics. It is named for Albert Einstein, Carl Gustav Jacob Jacobi, and William Rowan Hamilton. The EHJE contains as much information as all ten Einstein field equations (EFEs).[2] It is a modification of the Hamilton–Jacobi equation (HJE) from classical mechanics, and can be derived from the Einstein–Hilbert action using the principle of least action in the ADM formalism.

Background and motivation

Correspondence between classical and quantum physics

In classical analytical mechanics, the dynamics of the system is summarized by the action S. In quantum theory, namely non-relativistic quantum mechanics (QM), relativistic quantum mechanics (RQM), as well as quantum field theory (QFT), with varying interpretations and mathematical formalisms in these theories, the behavior of a system is completely contained in a complex-valued probability amplitude Ψ (more formally, as a quantum state ket - an element of a Hilbert space). In the semiclassical eikonal approximation

$$\Psi = \sqrt{\rho}\,e^{iS/\hbar}$$

the phase of Ψ is interpreted as the action, and the squared modulus ρ = Ψ*Ψ = |Ψ|² is interpreted, according to the Copenhagen interpretation, as the probability density function. The reduced Planck constant ħ is the quantum of action. Substitution of this into the quantum general Schrödinger equation (SE)

$$i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi\,,$$

and taking the limit ħ → 0, yields the classical HJE

$$-\frac{\partial S}{\partial t} = H\,,$$

which is one aspect of the correspondence principle.

Shortcomings of four-dimensional spacetime

On the other hand, the transition between quantum theory and general relativity (GR) is difficult to make; one reason is the treatment of space and time in these theories. In non-relativistic QM, space and time are not on equal footing; time is a parameter while position is an operator. In RQM and QFT, position returns to the usual spatial coordinates alongside the time coordinate, although these theories are consistent only with SR in four-dimensional flat Minkowski space, and not with curved space nor GR. It is possible to formulate quantum field theory in curved spacetime, yet even this still cannot incorporate GR because gravity is not renormalizable in QFT.[3] Additionally, in GR particles move through curved spacetime with a deterministically known position and momentum at every instant, while in quantum theory the position and momentum of a particle cannot be exactly known simultaneously; space x and momentum p, and energy E and time t, are pairwise subject to the uncertainty principles

$$\Delta x \,\Delta p \geq \frac{\hbar}{2}, \quad \Delta E \,\Delta t \geq \frac{\hbar}{2}\,,$$

which imply that small intervals in space and time mean large fluctuations in energy and momentum are possible.
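The ħ → 0 limit invoked above can be made explicit. A brief worked version of the standard computation (a sketch, assuming the ordinary single-particle Hamiltonian $\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V$; the general statement does not depend on this particular choice). Substituting $\Psi = \sqrt{\rho}\,e^{iS/\hbar}$ into the SE and dividing out the common factor $e^{iS/\hbar}$ gives

$$i\hbar\,\partial_t\sqrt{\rho} - \sqrt{\rho}\,\partial_t S = \frac{(\nabla S)^2}{2m}\sqrt{\rho} + V\sqrt{\rho} - \frac{i\hbar}{2m}\left(2\nabla\sqrt{\rho}\cdot\nabla S + \sqrt{\rho}\,\nabla^2 S\right) - \frac{\hbar^2}{2m}\nabla^2\sqrt{\rho}\,.$$

Collecting the terms of order $\hbar^0$ and dividing by $\sqrt{\rho}$,

$$-\frac{\partial S}{\partial t} = \frac{(\nabla S)^2}{2m} + V = H\,,$$

which is the classical HJE quoted above. The imaginary part, of order $\hbar$, reproduces the continuity equation $\partial_t\rho + \nabla\cdot(\rho\,\nabla S/m) = 0$ for the probability density, and the remaining term of order $\hbar^2$ is the quantum correction dropped in the semiclassical limit.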
Since in GR mass–energy and momentum–energy are the source of spacetime curvature, large fluctuations in energy and momentum mean the spacetime "fabric" could potentially become so distorted that it breaks up at sufficiently small scales.[4] There is theoretical and experimental evidence from QFT that the vacuum does have energy, since the motion of electrons in atoms fluctuates; this is related to the Lamb shift.[5] For these reasons and others, at increasingly small scales, space and time are thought to be dynamical up to the Planck length and Planck time scales.[4]

In any case, a four-dimensional curved spacetime continuum is a well-defined and central feature of general relativity, but not of quantum mechanics.

One attempt to find an equation governing the dynamics of a system, in as close a way as possible to QM and GR, is to reformulate the HJE in three-dimensional curved space, understood to be "dynamic" (changing with time), and not in four-dimensional spacetime dynamic in all four dimensions, as the EFEs are. The space has a metric (see metric space for details). The metric tensor in general relativity is an essential object, since proper time, arc length, geodesic motion in curved spacetime, and other things all depend on the metric. The HJE above is modified to include the metric, although it is only a function of the 3d spatial coordinates r (for example, r = (x, y, z) in Cartesian coordinates), without the coordinate time t:

$$g_{ij} = g_{ij}(\mathbf{r})\,.$$

In this context $g_{ij}$ is referred to as the "metric field" or simply "field".

General equation (free curved space)

For a free particle in curved "empty space" or "free space", i.e. in the absence of matter other than the particle itself, the equation can be written:[6][7][8]

$$\frac{1}{\sqrt{g}}\left(\frac{1}{2}g_{pq}g_{rs}-g_{pr}g_{qs}\right)\frac{\delta S}{\delta g_{pq}}\frac{\delta S}{\delta g_{rs}} + \sqrt{g}\,R=0$$

where g is the determinant of the metric tensor and R the Ricci scalar curvature of the 3d geometry (not including time), and the "δ" instead of "d" denotes the variational derivative rather than the ordinary derivative. These derivatives correspond to the field momenta "conjugate to the metric field",

$$\pi^{ij}(\mathbf{r})=\pi^{ij}=\frac{\delta S}{\delta g_{ij}}\,,$$

the rate of change of action with respect to the field coordinates $g_{ij}(\mathbf{r})$. The g and π here are analogous to q and p = ∂S/∂q, respectively, in classical Hamiltonian mechanics. See canonical coordinates for more background.

The equation describes how wavefronts of constant action propagate in superspace - as the dynamics of the matter waves of a free particle unfolds in curved space. Additional source terms are needed to account for the presence of extra influences on the particle, which include the presence of other particles or distributions of matter (which contribute to space curvature), and sources of electromagnetic fields affecting particles with electric charge or spin. Like the Einstein field equations, it is non-linear in the metric because of the products of the metric components, and like the HJE it is non-linear in the action due to the product of variational derivatives of the action. The quantum mechanical concept that action is the phase of the wavefunction can be interpreted from this equation as follows.
The phase has to satisfy the principle of least action: it must be stationary under a small change in the configuration of the system. In other words, under a slight change in the position of the particle, which corresponds to a slight change in the metric components,
$$g_{ij} \rightarrow g_{ij} + \delta g_{ij},$$
the slight change in phase is zero:
$$\delta S = \int \frac{\delta S}{\delta g_{ij}(\mathbf{r})}\,\delta g_{ij}(\mathbf{r})\,\mathrm{d}^3\mathbf{r} = 0$$
(where $\mathrm{d}^3\mathbf{r}$ is the volume element of the volume integral). So the constructive interference of the matter waves is a maximum. This can be expressed by the superposition principle, applied to many non-localized wavefunctions spread throughout the curved space, which form a localized wavefunction:
$$\Psi = \sum_n c_n\psi_n$$
for some coefficients $c_n$; additionally, the action (phase) $S_n$ for each $\psi_n$ must satisfy
$$\delta S = S_{n+1} - S_n = 0$$
for all $n$, or equivalently,
$$S_1 = S_2 = \cdots = S_n = \cdots.$$
Regions where $\Psi$ is maximal or minimal occur at points where there is a probability of finding the particle there and where the change in the action (phase) is zero. So in the EHJE above, each wavefront of constant action is where the particle could be found. This equation still does not "unify" quantum mechanics and general relativity, because the semiclassical eikonal approximation has been applied in the context of quantum theory and general relativity to provide a transition between these theories.

References

1. A. Peres (1962). "On Cauchy's problem in general relativity – II". Nuovo Cimento 26 (1): 53–62.
2. U.H. Gerlach (1968). "Derivation of the Ten Einstein Field Equations from the Semiclassical Approximation to Quantum Geometrodynamics". Physical Review 177 (5): 1929–1941. doi:10.1103/PhysRev.177.1929.
3. A. Shomer (2007). "A pedagogical explanation for the non-renormalizability of gravity". arXiv:0709.3555v2.
4. R.G. Lerner, G.L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 1285. ISBN 0-89573-752-3.
5. C.W. Misner, K.S. Thorne, J.A. Wheeler (1973). Gravitation. W.H. Freeman & Co. p. 1190. ISBN 0-7167-0344-0.
6. C.W. Misner, K.S. Thorne, J.A. Wheeler (1973). Gravitation. W.H. Freeman & Co. p. 1188. ISBN 0-7167-0344-0.
7. J. Mehra (1973). The Physicist's Conception of Nature. Springer. p. 224. ISBN 9-02770-3450.
8. J.J. Halliwell, J. Pérez-Mercader, W.H. Zurek (1996). Physical Origins of Time Asymmetry. Cambridge University Press. p. 429. ISBN 0-52156-8374.
Journal of Fluid Mechanics

The modulational instability in deep water under the action of wind and dissipation

C. Kharif (1), R. A. Kraenkel (2), M. A. Manna (3) and R. Thomas (1)

(1) Institut de Recherche sur les Phénomènes Hors Équilibre, 49, rue F. Joliot-Curie, BP 146, 13384 Marseille CEDEX 13, France
(2) Instituto de Física Teórica, UNESP, R. Pamplona 145, 01405-900 São Paulo, Brazil
(3) Laboratoire de Physique Théorique et Astroparticules, CNRS-UMR 5207, Université Montpellier II, Place Eugène Bataillon, 34095 Montpellier CEDEX 05, France

The modulational instability of gravity wave trains on the surface of water acted upon by wind and under the influence of viscosity is considered. The wind regime is that of validity of Miles' theory and the viscosity is small. Using a perturbed nonlinear Schrödinger equation describing the evolution of a narrow-banded wavepacket under the action of wind and dissipation, the modulational instability of the wave group is shown to depend on both the frequency (or wavenumber) of the carrier wave and the strength of the friction velocity (or the wind speed). For fixed values of the water-surface roughness, the marginal curves separating stable from unstable states are given. It is found in the low-frequency regime that stronger wind velocities are needed to sustain the modulational instability than for high-frequency water waves. In other words, the critical frequency decreases as the carrier wave age increases. Furthermore, it is shown for a given carrier frequency that a larger friction velocity is needed to sustain modulational instability when the roughness length is increased.

(Received February 05 2010; revised August 11 2010; accepted August 11 2010; online publication November 01 2010)

Key words: air/sea interactions; surface gravity waves; wind–wave interactions
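For orientation only, models of this family are commonly written as a deep-water nonlinear Schrödinger equation with a linear growth/decay term on the right-hand side. The coefficients below are the standard deep-water ones, up to sign and normalization conventions; they are an illustrative assumption, not the paper's actual equation:
$$i\left(\frac{\partial A}{\partial t} + c_g\frac{\partial A}{\partial x}\right) - \frac{\omega_0}{8k_0^2}\frac{\partial^2 A}{\partial x^2} - \frac{\omega_0 k_0^2}{2}|A|^2A = \frac{i}{2}\left(\gamma_w - \gamma_d\right)A,$$
where $A$ is the complex envelope of a carrier wave of frequency $\omega_0$ and wavenumber $k_0$, $c_g = \omega_0/2k_0$ is the group velocity, $\gamma_w$ is a Miles-type wind-growth rate that increases with the friction velocity, and $\gamma_d$ is a viscous decay rate of order $\nu k_0^2$. On such a model, the marginal curves described in the abstract reflect the competition between $\gamma_w$ and $\gamma_d$.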
Section 7.6: Exploring Energy Eigenfunctions and Potential Energy (Physlet Quantum Physics)

A particle is confined to a box with hard walls at x = −3 and x = 3 and one of four unknown potential energy functions within the box. Change the energy slider and examine the solutions to the time-independent Schrödinger equation for this system. In the animation, ħ = 2m = 1. Examine each energy eigenfunction that satisfies the boundary conditions, and sketch a potential energy function that is consistent with your observations.
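For readers without the animation, the same exploration can be done numerically. The sketch below is my own illustration, not the Physlet's code: it diagonalizes a finite-difference Hamiltonian on the box in the exercise's units, and the step potential is an arbitrary stand-in for the four unknown potentials.

import numpy as np

# Units as in the exercise: hbar = 2m = 1, so H = -d^2/dx^2 + V(x).
N = 600                                 # number of interior grid points
x = np.linspace(-3, 3, N + 2)[1:-1]     # hard walls at x = -3 and x = 3
dx = x[1] - x[0]

V = np.where(x > 0, 5.0, 0.0)           # sample potential: a step (illustrative only)

# Finite-difference Hamiltonian; psi = 0 at the walls is built in.
H = (np.diag(2.0/dx**2 + V)
     + np.diag(-1.0/dx**2 * np.ones(N - 1), 1)
     + np.diag(-1.0/dx**2 * np.ones(N - 1), -1))

E, psi = np.linalg.eigh(H)              # eigenenergies (sorted) and eigenfunctions
print("lowest energies:", np.round(E[:4], 3))

# Plotting psi[:, n] for several n shows the local wavelength stretching and the
# amplitude growing where E - V(x) is small -- exactly the features the exercise
# asks you to read off an eigenfunction in order to reconstruct V(x).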
Trinity College Dublin, The University of Dublin

Physics - PY1P20/PY1N20/PY1T20

Hilary Term – lectures, practical laboratory, online & small group tutorials – 10 credits (G. Cross, J. Pethica, P. Gallagher)

Learning Outcomes
• Describe observational insights into the structure and evolution of the Universe
• Solve steady-state and time-varying electric current and electric potential problems
• Solve electrostatic problems using Gaussian surfaces
• Describe how the physics of matter and radiation is underpinned by quantum physics

Assessment
• Examination (written paper) - 60%
• Laboratory practical work - 30%
• Online tutorials - 10%

Electricity and Magnetism I
Lecturer: Professor G. Cross
Duration: Hilary Term, 20 lectures
Description: Electrostatics: electric charge, Coulomb's law, electric field, electric dipoles, Gauss's law, electric potential energy, voltage, electric polarization, capacitance, dielectrics. Electric current, resistance, Ohm's law, electromotive force, power in electric circuits, Kirchhoff's laws, RC circuits. Magnetism: magnetic field lines and flux; Lorentz force on a moving charge; energy of and torque on a current loop in a magnetic field; Biot–Savart law illustrated by the magnetic fields of a straight wire and a circular loop; forces between current-carrying straight wires; Ampère's law in integral form.

Quantum Physics
Lecturer: Professor J. Pethica
Duration: Hilary Term, 18 lectures
Description: Origins of quantum physics. Photoelectric effect. Compton effect. De Broglie's postulate. The uncertainty principle. Black-body radiation and specific heat. Atomic spectra. Bohr model of the atom. Correspondence principle. Steady-state Schrödinger equation. Particle in a 1-D box. Finite potential well. Simple harmonic oscillator. Particle at a potential step. Tunnelling through a barrier. Angular momentum and spin. Quantum theory of the hydrogen atom. The periodic table. Formation of chemical bonds. Quantum information.

Astronomy and Astrophysics
Lecturer: Professor P. Gallagher
Duration: Hilary Term, 10 lectures
Description: Survey of the night sky; planets, exoplanets and extra-terrestrial life; properties of stars and galaxies; structure of the Universe and the Big Bang.
Saturday, December 31, 2016

Necessary Existence

Josh Rasmussen's and my Necessary Existence book is now complete. We just sent the final manuscript to Oxford. We're both quite happy with the book.

Freezing a hard drive

I had a hard drive that's around 15 years old fail to start a couple of months ago; I tried many times with no luck. Most but not all of the stuff was backed up (though what wasn't backed up wasn't very important). So yesterday I stuck the drive in a freezer, in two freezer bags without much air. Today I plugged it into an IDE-USB adapter. It didn't start up at first, but after a few minutes of warming it up, it started and I got all the data off without any difficulty. This is the second time in my life that I've rescued data from an old hard drive using a freezer. (Of course, there is always the chance that this time it would have worked without the freezer. I didn't actually check yesterday if the drive was still not working.)

Friday, December 30, 2016

Use 3D printer as a plotter/cutter

My 3D printer is fun, but I like to extend functionality, so I designed some additional snap-on parts that let me also use it as a pen plotter and cutter. For instance, I had it draw a butterfly coloring sheet on a blank T-shirt for our four-year-old to color with fabric markers. Here are instructions.

Tuesday, December 27, 2016

Some weird languages

Platonism would allow one to reduce the number of predicates to a single multigrade predicate Instantiates(x1, ..., xn, p), by introducing a name p for every property. The resulting language could have one fundamental quantifier ∃, one fundamental predicate Instantiates(x1, ..., xn, p), and lots of names. One could then introduce a "for a, which exists" existential quantifier ∃a in place of every name a, and get a language with one fundamental multigrade predicate, Instantiates(x1, ..., xn, p), and lots of fundamental quantifiers. In this language, we could say that Jim is tall as follows: ∃Jimx Instantiates(x, tallness).

On the other hand, once we allow for a large plurality of quantifiers, we could reduce the number of predicates to one in a different way by introducing a new n-ary existential quantifier ∃F(x1, …, xn) (with the corresponding ∀F defined by De Morgan duality) in place of each n-ary predicate F other than identity. The remaining fundamental predicate is identity. Then instead of saying F(a), one would say ∃Fx(x = a). One could then remove names from the language by introducing quantifiers for them as before. The resulting language would have many fundamental quantifiers, but only one fundamental binary predicate, identity. In this language we would say that Jim is tall as follows: ∃Jimx∃Tally(x = y).

We have two languages, in each of which there is one fundamental predicate and many quantifiers. In the Platonic language, the fundamental predicate is multigrade but the quantifiers are all unary. In the identity language, the fundamental predicate is binary but the quantifiers have many arities. And of course we have standard First Order Logic: one fundamental quantifier (say, ∃), many predicates and many names. We can then get rid of names by introducing an IsX(x) unary predicate for each name X. The resulting language has one quantifier and many predicates. So in our search for fundamental parsimony in our language we have a choice:

• one quantifier and many predicates
• one predicate and many quantifiers.

Are these more parsimonious than many quantifiers and many predicates?
I think so: for if there is only one quantifier or only one predicate, then we can collapse levels—to be a (fundamental) quantifier just is to be ∃ and to be a (fundamental) predicate just is to be Instantiates or identity. I wonder what metaphysical case one could make for some of these weird fundamental language proposals. Life science and physical science I've been thinking that in a nutshell one could put much of the distinctiveness of Aristotelian philosophy as follows: life science is at least as fundamental as physical science. Friday, December 23, 2016 Double Effect in daily life The Principle of Double Effect is often introduced in terms of weighty cases of killing, like bombing military installations or redirecting trolleys. But the importance of the distinction between intended and unintended but foreseen harms can be seen even more clearly in everyday cases. Yesterday, my wife went grocery shopping, while I was home with some of the kids. My son asked to be taken for a bike ride. The thought flashed into my head: “If I go, I probably won’t be home when my wife comes back with the groceries, and hence I won’t be able to help with unloading them.” There are three possible attitudes I could have with respect to this observation: 1. I shouldn’t take my son for a bike ride now. 2. Not being able to help my wife is an unfortunate side-effect of taking my son for a bike ride. 3. Being able to get out of helping my wife is a reason to take my son for a bike ride. In cases (2) and (3), the foreseen effects are the same. There are no deontic issues (I didn’t promise my wife to be home). But clearly if I take attitude (3), and hence intend not to be there when my wife comes back, I am being a bad husband, while if I go for (1) or (2), my behavior is defensible. (In fact, I never got around to taking my son for the bike ride.) Wednesday, December 21, 2016 What is this? Consider the black item to the right here on your screen. Is it a token of the Latin alphabet letter pee, the Greek letter rho or the Cyrillic letter er? The question cannot be settled by asking which font, and where in the font, the glyph is taken from, because I drew the drawing in Inkscape rather than using any font, precisely to block such an answer. Nor will my intentions answer the question, since I drew the thing precisely to pose such a philosophical question rather than to express any one of the three options. There are two interesting questions here. The first is an ontological one. Is a token on screen something different from the pattern of light? If it's the same as the pattern of light, then there is at most one token, there being at most one relevant pattern of light (perhaps none, if our ontology doesn't include patterns of light), though this token is a token of pee, and a token of rho and a token of er. If a token is not identical with a pattern of light, then we might as well keep on multiplying entities, and say that there is a pattern of light and three tokens, of pee, rho and er, respectively, with the first entity constituting the latter three. The second one is a philosophy of language one. What determines whether or not the pattern of light is or constitutes a token of, say, rho? Is it my intentions? If so, then indeed we have tokens of pee, rho and er, as making these was my intention, but we do not have a token of the Coptic letter ro or a token of the letter qof in 15th century Italian Hebrew cursive, since I didn't think of these when I was doing the drawing. Is it the linguistic context? 
But then it's not a token of any letter, since a displayed png file in an analytic philosophy post is not the kind of linguistic context that determines a token. Or is it that the pattern of light is or constitutes tokens of all the letters it geometrically matches, whether or not it was intended as such? If so, then we also have a letter dee (just turn your screen). But now suppose a new alphabet is created, and it contains a letter that looks just like the drawing. It would be odd to say that if a new language were created on another planet this instantly would multiply the entities on earth (at the speed of light? faster?). So it seems that on this view, we should say that the pattern of light is or constitutes tokens of all the letters in all the alphabets that will ever exist. But future actions shouldn't affect how many things there now are. So on this view, we should be even more pluralistic: the pattern of light is or constitutes tokens of all the letters in all possible alphabets.

We thus have two questions: one about ontology and one about what is being tokened. Both questions have parsimonious and profligate answers. The parsimonious answer to the ontology question is that there is one thing, which can be a token of multiple things. The profligate one is that we have many tokens. The parsimonious answers to the language question are that intentions and/or context determine what's been tokened. The profligate answer has an infinite amount of tokening.

We probably shouldn't combine the two profligate answers. For then on your screen there are infinitely many physical things, all co-located (and some perhaps even with the same modal profile). That's too much. That still leaves three combinations. I think there is reason to reject the combination of ontological profligacy with parsimony on the philosophy of language side. The reason is that tokens get repurposed. Consider a Russian who has a Scrabble set and loses an er tile. She then buys a replacement pee tile, as it looks pretty much the same (I looked at online pictures: both have value 1 and look the same). Then it seems that a new entity, a token of er, comes into existence if we have ontological profligacy and linguistic parsimony. Does a mere intention to use the tile for an er magically create a new physical object, a token? That seems not very plausible. That leaves two combinations:

• ontological and linguistic parsimony
• ontological parsimony and linguistic profligacy.

Tuesday, December 20, 2016

Bestowing harms and benefits

A virtuous person happily confers justified benefits and unhappily bestows even justified harms. Moreover, it is not just that the virtuous person is happy about someone being benefitted and unhappy about someone being harmed, though she does have those attitudes. Rather, the virtuous person is happy to be the conferrer of justified benefits and unhappy to be the bestower even of justified harms. These attitudes on the part of the virtuous person are evidence that it is non-instrumentally good for one to confer justified benefits and non-instrumentally bad for one to bestow even justified harms. Of course, the bestowal of justified harms can be virtuous, and virtuous action is non-instrumentally good for one. But an action can be good for one qua virtuous and bad for one in another way; cases of self-sacrifice are like that. Virtuously bestowing justified harms is a case of self-sacrifice on the part of the virtuous agent.
When multiple agents are necessary and voluntary causes of a single harm, the total bad of being a bestower of harm is not significantly diluted between the agents. Each agent non-instrumentally suffers from the total bad of bestowing harm, though the contingent psychological effects may, but need not, be diluted. (A thought experiment: One person hits a criminal in an instance of morally justified and legally sentenced corporal punishment while the other holds down the punishee. Both agents are equally responsible. It makes no difference to the badness of being the imposer of corporal punishment if, instead of the other holding down the punishee, the punishee is simply tied down. Interestingly, one may have a different intuition on the other side: it might seem worse to hold down the punishee to be hit by a robot than by a person. But that's a mistake.)

If this is right, then we have a non-instrumental reason to reduce the number of people involved in the justified imposition of a harm, though in particular cases there may also be reasons, instrumental and otherwise, to increase the number of people involved (e.g., a larger number of people involved in punishing may better convey societal disapproval). This in turn gives a non-instrumental reason to develop autonomous fighting robots for the military, since the use of such robots decreases the number of people who are non-instrumentally (as well as psychologically) harmed by killing. Of course, there are obvious serious practical problems there.

Monday, December 19, 2016

Intending material conditionals and dispositions, with an excursus on lethally-armed robots

Alice has tools in a shed and sees a clearly unarmed thief approaching the shed. She knows she is in no danger of her life or limb (she can easily move away from the thief) but points a gun at the thief and shouts: "Stop or I'll shoot to kill." The thief doesn't stop. Alice fulfills the threat and kills the thief.

Bob has a farm of man-eating crocodiles and some tools he wants to store safely. He places the tools in a shed in the middle of the crocodile farm, in order to dissuade thieves. The farm is correctly marked all around "Man-eating crocodiles", and the crocodiles are quite visible to all and sundry. An unarmed thief breaks into Bob's property attempting to get to his tool shed, but a crocodile eats him on the way.

Regardless of what local laws may say, Alice is a murderer. In fulfilling the threat, by definition she intended to kill the thief, who posed no danger to life or limb. (The case might be different if the tools were needed for Alice to survive, but even then I think she shouldn't intend death.) What about Bob? Well, there we don't know what the intentions are. Here are two possible intentions:

1. Prospective thieves are dissuaded by the presence of the man-eating crocodiles, but as a backup any that are not dissuaded are eaten.
2. Prospective thieves are dissuaded by the presence of the man-eating crocodiles.

If Bob's intention is (1), then I think he's no different from Alice. But Bob's intention could simply be (2), whereas Alice's intention couldn't simply be to dissuade the thief, since if that were simply her intention, she wouldn't have fired. (Note: the promise to shoot to kill is not morally binding.) Rather, when offering the threat, Alice intended to dissuade and shoot to kill as a backup, and then when she shot in fulfillment of the threat, she intended to kill.
If Bob's intention is simply (2), then Bob may be guilty of some variety of endangerment, but he's not a murderer. I am inclined to think this can be true even if Bob trained the crocodiles to be man-eaters (in which case it becomes much clearer that he's guilty of a variety of endangerment). But let's think a bit more about (2). The means to dissuading thieves is to put the shed in a place where there are crocodiles with a disposition to eat intruders. So Bob is also intending something like this:

3. There be a dispositional state of affairs where any thieves (and maybe other intruders) tend to die.

However, in intending this dispositional state of affairs, Bob need not be intending the disposition's actuation. He can simply intend the dispositional state of affairs to function not by actuation but by dissuasion. Moreover, if the thief dies, that's not an accomplishment of Bob's. On the other hand, if Bob intended the universal conditional

4. All thieves die

or even

5. Most thieves die

then he would be accomplishing the deaths of thieves if any were eaten. Thus there is a difference between the logically complex intention that (4) or (5) be true, and the intention that there be a dispositional state of affairs to the effect of (4) or (5). This would seem to be the case even if the dispositional state of affairs entailed (4) or (5). Here's why there is such a difference. If many thieves come and none die, then that constitutes or grounds the falsity of (4) and (5). But it does not constitute or ground the falsity of (3), and that would be true even if it entailed the falsity of (3).

This line of thought, though, has a curious consequence. Automated lethally-armed guard robots are in principle preferable to human lethally-armed guards. For the human guard either has a policy of killing if the threat doesn't stop the intruder or has a policy of deceiving the intruder that she has such a policy. Deception is morally problematic and a policy of intending to kill is morally problematic. On the other hand, with the robotic lethally-armed guards, nobody needs to deceive and nobody needs to have a policy of killing under any circumstances. All that's needed is the intending of a dispositional state of affairs. This seems preferable even in circumstances (say, wartime) where intentional killing is permissible, since it is surely better to avoid intentional killing.

But isn't it paradoxical to think there is a moral difference between setting up a human guard and a robotic guard? Yet a lethally-armed robotic guard doesn't seem significantly different from locating the guarded location on a deadly crocodile farm. So if we think there is no moral difference here, then we have to say that there is no difference between Alice's policy of shooting intruders dead and Bob's setup. I think the moral difference between the human guard and the robotic guard can be defended. Think about it this way. In the case of the robotic guard, we can say that the death of the intruder is simply up to the intruder, whereas the human guard would still have to make a decision to go with the lethal policy in response to the intruder's decision not to comply with the threat. The human guard could say "It's on the intruder's head" or "I had no choice; I had a policy", but these are simply false: both she and the intruder had a choice. None of this should be construed as a defence in practice of autonomous lethal robots.
There are obvious practical worries about false positives, malfunctions, misuse and lowering the bar to a country's initiating lethal hostilities.

Friday, December 16, 2016

The sharpness of the Platonic realm

I feel an intellectual pull to a view that also repels me. The view is that all contingent vague truths are grounded in contingent definite truths and necessary vague truths. For instance, that Jim is bald might be grounded in a contingent definite truth about the areal density of hair on his scalp and a necessary vague truth that anyone with that areal density of hair is bald. On this view, any vague differences between possible worlds are grounded in definite differences between possible worlds.

But the view also repels me. I have the Platonic intuition that the realm of necessary truth should be clean, unchanging, sharp and definite. Plato would be very surprised to think that fuzziness in the physical world is grounded in fuzziness in the Platonic realm.

Epistemicism, of course, nicely reconciles the Platonic intuition about necessary truths with the intellectual pull of the grounding claim. For it is no surprise that there be things in the Platonic realm that are not accessible to us. If vagueness is merely epistemic, then there is no difficulty about vagueness in the Platonic realm.

Wednesday, December 14, 2016

Knowledge of vague truths

Suppose that we know in lottery cases, i.e., that if a lottery has enough tickets and one winner, then we know ahead of time that we won't win. I know it's fashionable to deny such knowledge, but such denial leads either to scepticism or to having to say things like "I agree that I have better evidence for p than for q, but I know q and I don't know p" (after all, if a lottery has enough tickets, I can have better evidence that I won't win than that I have two hands). Suppose also that classical logic holds even in vagueness cases. This is now a mainstream assumption in the vagueness literature, I understand. Finally, suppose that once the number of tickets in a lottery reaches about a thousand, I know I won't win. (The example can be modified if a larger number is needed.)

Now for each positive natural number n, let Tn be the proposition that a person whose height is n microns is tall but a person whose height is n−1 microns is not tall. At most one of the Tn propositions is true, since anybody taller than a tall person is tall, and anybody shorter than a non-tall person is not tall. Moreover, since there is a non-tall person and there is a tall person, classical logic requires that at least one of the Tn is true. Hence, exactly one of the Tn is true.

Now, some of the Tn are definitely false. For instance, T1000000 is definitely false (since someone a meter tall is definitely not tall) and T2000000 is definitely false (since someone a micron short of two meters tall is definitely tall). But if anything is vague, it will be vague where exactly the cut-off between non-tall and tall lies. And if that is vague, then in the vague area between non-tall and tall, it will be vague whether Tn is true. That vague area is at least a millimeter long (in fact, it's probably at least five centimeters long), and since there are a thousand microns to the millimeter, there will be at least a thousand values of n such that Tn is vague. Moreover, these thousand Tn are pretty much epistemically on par. Let n be any number within that vague range, and suppose that in fact Tn is false. Then this is a lottery case with at least a thousand tickets.
So, if in the lottery case I know I didn't win, in this case I know that Tn is false. Hence, some vague truths can be known, assuming that we know in lottery cases and that classical logic holds. Of course, as usual, some philosophers will want to reverse the argument, and take this to be another argument that we don't know in lottery cases, or that classical logic doesn't hold, or that there is no vagueness.

Doing things fast

I was thinking about deadlines (papers and exams to grade) and realized that doing things fast is a similar kind of challenge to making things small. Companies try to fit phone electronics into as spatially thin a region of spacetime as possible, while runners try to fit a run of a particular distance into as temporally thin a region of spacetime as possible. (And while small spatial and temporal size sometimes has "utilitarian value", as in the case of getting my grades in, in the phone and running cases the reasons are mainly of the aesthetic variety.)

Tuesday, December 13, 2016

Vague propositions

Suppose Jim says, in English, "2+2=4". Then:

1. What Jim said is such that it is contingent that it is true, because it is contingent that "4" means four rather than five.
2. What Jim said is a necessary truth, because it cannot but be true that 2+2=4.

Here the apparent contradiction is resolved by disambiguating "what Jim said" between the uttered sounds and the expressed meaning. But when talking about vagueness, this straightforward point can be a bit less clear. Suppose that it's vaguely true that "4" in Jim's dialect means four, rather than five, and Jim says "2+2=4" (and suppose that all the other relevant stuff is definite). Then:

3. What Jim said is vaguely true, because it's vaguely true that "4" means four.
4. What Jim said is not vaguely true, because what Jim said is definitely true or definitely false, depending on what "4" means.

Again, make the same move as in (1)-(2): in (3), "what Jim says" is the uttered sounds or words and in (4) it's the proposition.

This line of thought suggests one of two possibilities. Either propositions are never vague, or there are two interestingly different kinds of vagueness. If propositions are never vague, then in the proposition sense of "what was said" it is never correct to say that what was said is vague. That's a bit counterintuitive, but some counterintuitive things are true. But if some propositions are vague, then it seems that we have two interestingly different kinds of vagueness an utterance could suffer from. It could be vague which non-vague proposition an utterance expresses, or it could be definite which vague proposition an utterance expresses, or one could have combinations, as when it's vague which vague proposition is expressed.

In the case above, I claimed that it was vaguely the case that Jim expressed the non-vague proposition that 2+2=4. But presumably if there are vague propositions, there will be one that has the kind of vagueness that makes the non-vague propositions that 2+2=4 and that 2+2=5 be its admissible precisifications. So now we would have this interesting question: What determines whether Jim's case was a case of vaguely expressing a non-vague proposition or non-vaguely expressing a vague proposition or some combination? Maybe there is a good answer to this question, but I have some doubts. In light of these doubts, I think that what the proponent of vague and non-vague propositions should say is something like this.
There are at least three senses of "what was said": the sounds or words (and that makes for two, but I won't be interested in this distinction in this post), the non-vague proposition and the vague proposition. What Jim said is vaguely true in the first and third senses, but not in the second. This is sufficiently complicated that one might prefer to go back to the less intuitive option, that in the proposition sense "what was said" is never vague. I am dreadfully confused.

Monday, December 12, 2016

Actions that are gravely wrong for qualitative reasons

Some types of wrongdoing vary in degree of seriousness from minor to grave. Stealing a dollar from a billionaire is trivially wrong while stealing a thousand dollars from someone poor is gravely wrong. A poke in the back with a finger and breaking someone's leg with a carefully executed kick can both be instances of battery, but the former is likely to be a minor wrong while the latter is apt to be grave. On the other hand, there are types of wrongdoing that are always grave. An uninteresting (for my purposes) case is where the gravity is guaranteed because the description of the wrongdoing includes a grave-making quantitative feature, as in the case of "grand theft" or "grievous bodily harm". The more interesting case is where for qualitative reasons the wrongdoing is always grave. For instance, murder and rape. There are no trivial murders or minor rapes. Of course, even if a type of act is always seriously wrong, the degree of culpability might be slight, say due to lack of freedom or invincible ignorance. Think of someone brainwashed into murder, but who still has a slight sense of moral discomfort: although her action is gravely wrong, she may be only slightly culpable. My interest right now, however, is in the degree of wrongness rather than of culpability.

We can now distinguish types of wrongdoing that are always grave for qualitative reasons from those that are always grave merely for quantitative reasons. Here is a fairly precise characterization: if W is a type of wrongdoing that is always grave for qualitative reasons, then there is no sequence of acts, starting with a case of W, and with merely quantitative differences between the acts, such that the sequence ends with an act that isn't grave. Grand theft and grievous bodily harm are examples of types of wrongdoing that are always grave merely for quantitative reasons. On the other hand, it is intuitively plausible that murder and rape are not gravely wrong for merely quantitative reasons.

If this intuition is correct, then we get some very interesting substantive consequences. In the case of rape, I've explored some relevant issues in a past post, so I want to focus on murder here. The first consequence of taking murder to be always gravely wrong for qualitative reasons is that there is no continuous scale of mental abilities (whether of first or second potentiality) that takes us from people to lower animals. An unjustified killing of a lower animal is only a minor wrong (take this to constrain what "lower" means). If there were a continuous scale of mental abilities from people to lower animals, then murder would be gravely wrong only for quantitative reasons: because the victim's mental abilities lie at such-and-such a position on the scale. So once we admit that murder is gravely wrong for qualitative reasons, we have to suppose a qualitative gap in the spectrum of mental abilities. This probably requires the rejection of naturalism.
A second consequence is that if killing a consenting adult in normal health is murder (which it is), then euthanasia is gravely wrong. For variation in health and comfort is merely quantitative, and one cannot go from a case of murder to something that isn't gravely wrong by merely quantitative variation, since murder is always gravely wrong for qualitative reasons.

I suspect there are a number of other very interesting consequences of taking murder to be gravely wrong for qualitative reasons. I think these consequences will motivate some people to give up on the claim that murder is gravely wrong for qualitative reasons. But I think we should hold on to that claim and accept the consequences.

Friday, December 9, 2016

Love and reasons

Humans are fundamentally loving beings. This is more fundamental than their being rational, because the nature of reasons, and hence of rationality, is to be accounted for in terms of the nature of love. A sketchy approximation to a love-based account of external reasons is this:

• A fact F is an external reason for ϕing if and only if F partially grounds ϕing being in some respect loving towards something or someone, or not ϕing being in some respect unloving towards something or someone.
• A plurality of facts is a conclusive external reason for ϕing if and only if the plurality grounds its being unloving not to ϕ.

If I am right that love has the three fundamental aspects of benevolence, appreciation and union, these probably also provide the three basic kinds of reasons. There are reasons to do good and to prevent bad: these come from the benevolence aspect. There are reasons to, e.g., admire and be grateful that come from appreciation. Interestingly, I think appreciation also provides reasons for things like criticism and punishment. In criticism and punishment we appreciate someone or something qua someone or something that ought to do better: we appreciate nature over actual activity. And finally there is union, which needs to be appropriate to the love (I develop this at greater length in One Body).

Internal reasons are occurrent beliefs that are in some sense about what there is external reason to do and that enter in the right way into choice. These beliefs come in a broad variety, and are not always explicitly about reasons as such.

Tuesday, December 6, 2016

3D-printable cookie cutters with Inkscape and OpenSCAD

We thought that our 4-year-old would enjoy a Pikachu cookie cutter for Christmas, but I didn't like the existing designs on Thingiverse. So I wrote a Python script, eventually packaged into an Inkscape extension, that generates a 3D-printable OpenSCAD file from a color-coded SVG path file. Instructions are here.

Monday, December 5, 2016

A Trinitarian structure in love

In One Body, I identified three crucial aspects in every form of love: benevolence, appreciation and unity. But I did not have an argument that there are no further equally central aspects. I still don't. But I now have some suggestive evidence: there is a Trinitarian structure to these three aspects. The Father eternally confers his divine nature (the nature of being the Good Itself) on the Son and, through the Son, on the Holy Spirit. The Son in turn eternally and gratefully contemplates the Father. And the Holy Spirit joins Father with Son. This makes for benevolence, appreciation and unity, respectively, all perichoretically interconnected. That there are only three Persons in the most blessed Trinity is thus evidence that these three aspects are what love is at base.
Wednesday, November 30, 2016

No-collapse interpretations without a dynamically evolving wavefunction in reality

Bohm's interpretation of quantum mechanics has two ontological components: it has the guiding wave, the wavefunction, which dynamically evolves according to the Schrödinger equation, and it has the corpuscles whose movements are guided by that wavefunction. Brown and Wallace criticize Bohm for this duality, on the grounds that there is no reason to take our macroscopic reality to be connected with the corpuscles rather than the wavefunction. I want to explore a variant of Bohm on which there is no evolving wavefunction, and then generalize the point to a number of other no-collapse interpretations.

So, on Bohm's quantum mechanics, reality at a time t is represented by two things: (a) a wavefunction vector |ψ(t)⟩ in the Hilbert space, and (b) an assignment of values to hidden variables (e.g., corpuscle positions). The first item evolves according to the Schrödinger equation. Given an initial vector |ψ(0)⟩, the vector at time t can be mathematically given as |ψ(t)⟩ = Ut|ψ(0)⟩, where Ut is a mathematical time-evolution operator (dependent on the Hamiltonian). And then by a law of nature, the hidden variables evolve according to a differential equation, the guiding equation, that involves |ψ(t)⟩.

But now suppose we change the ontology. We keep the assignment of values to hidden variables at times. But instead of supposing that reality has something corresponding to the wavefunction vector at every time, we merely suppose that reality has something corresponding to an initial wavefunction vector |ψ0⟩. There is nothing in physical reality corresponding to the wavefunction at t if t > 0. But nonetheless it makes mathematical sense to talk of the vector Ut|ψ0⟩, and then the guiding equation governing the evolution of the hidden variables can be formulated in terms of Ut|ψ0⟩ instead of |ψ(t)⟩. If we want an ontology to go with this, we could say that the reality corresponding to the initial vector |ψ0⟩ affects the evolution of the hidden variables for all subsequent times.

We now have only one aspect of reality, the hidden variables of the corpuscles, evolving dynamically instead of two. We don't have Schrödinger's equation in the laws of nature, except as a useful mathematical property of the Ut operator applied to the initial vector. We can talk of the wavefunction Ut|ψ0⟩ at a time t, but that's just a mathematical artifact, just as m1m2 is a part of the equation expressing Newton's law of gravitation rather than a direct representation of physical reality. Of course, just as m1m2 is determined by physical things (the two masses), so too the wavefunction Ut|ψ0⟩ is determined by physical reality (the initial vector, the time, and the Hamiltonian). This seems to me to weaken the force of the Brown and Wallace point, since there no longer is a reality corresponding to the wavefunction at non-initial times, except highly derivatively.

Interestingly, the exact same move can be made for a number of other no-collapse interpretations, such as Bell's indeterministic variant of Bohm, other modal interpretations, the many-minds interpretation, the traveling minds interpretation and the Aristotelian traveling forms interpretation. There need be no time-evolving wavefunction in reality, but just an initial vector which transtemporally affects the evolution of the other aspects of reality (such as where the minds go). Or one could suppose a static background vector.
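Here is a minimal numerical toy (my own illustration, with an arbitrary grid, wavepacket and stepping scheme; it is not from the post or from Bohm) that makes the proposed ontology vivid: the only dynamically updated variable is the particle position, and the wavefunction at t is never stored as a state but recomputed as Ut|ψ0⟩ from the fixed initial vector whenever the guiding equation needs it.

import numpy as np

# Toy 1D Bohmian dynamics, hbar = m = 1.
# Stored "ontology": the fixed initial vector psi0 and the particle position xp.
N, L = 400, 40.0
dx = L / N
x = -L/2 + dx * np.arange(N)

# Free-particle Hamiltonian with periodic finite differences; diagonalized once
# so that U_t = V exp(-iEt) V^dagger can be applied to psi0 on demand.
H = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*dx**2)
H[0, -1] = H[-1, 0] = -1.0 / (2*dx**2)
E, V = np.linalg.eigh(H)

psi0 = np.exp(-(x + 5.0)**2 + 1j*x)              # Gaussian packet moving right
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def psi_at(t):
    # U_t |psi0>: computed from the initial vector; no wavefunction state evolves.
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

def guiding_velocity(psi, xp):
    # Guiding equation: v = Im(psi'/psi) at the particle's position (hbar = m = 1).
    i = int(round((xp + L/2) / dx)) % N
    dpsi = (psi[(i+1) % N] - psi[(i-1) % N]) / (2*dx)
    return float(np.imag(dpsi / psi[i]))

xp, dt = -5.0, 0.05                              # particle starts at the packet center
for step in range(200):                          # Euler-step only the hidden variable
    xp += guiding_velocity(psi_at(step * dt), xp) * dt
print("particle position at t = 10:", round(xp, 2))

The design mirrors the post's point: Schrödinger evolution appears only as a mathematical property of Ut (here, of the diagonalized Hamiltonian), while the sole thing that evolves dynamically is the hidden variable xp.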
It's interesting to ask what happens if one plugs this into the Everett interpretation. There I think we get something rather implausible: for then all time-evolution will disappear, since all reality will be reduced to the physical correlate of the initial vector. So my move above is only plausible for those no-collapse interpretations on which there is something more beyond the wavefunction. There is also a connection between this approach and the Heisenberg picture. How close the connection is is not yet clear to me.

Material conditionals and quantifiers

From

1. Every G is H

it seems we should be able to infer, for any x:

2. If x is G, then x is H.

This pretty much forces one to read "If p, then q" as a material conditional, i.e., as q or not p. For the objection to reading the indicative conditional as a material conditional is that this leads to the paradoxes of material implication, such as that if it's not snowing in Fairbanks, Alaska today, then it's correct to say:

3. If it's snowing in Fairbanks today, then it's snowing in Mexico City today

even if it's not snowing in Mexico City, which just sounds wrong. But if we grant the inference from (1) to (2), we can pretty much recover the paradoxes of material implication. For instance, suppose it's snowing neither in Fairbanks nor in Mexico City today. Then:

4. Every truth value of the proposition that it's snowing in Fairbanks today is a truth value of the proposition that it's snowing in Mexico City today.

So, by the (1)→(2) inference:

5. If a truth value of the proposition that it's snowing today in Fairbanks is true, then a truth value of the proposition that it's snowing today in Mexico City is true.

Or, a little more smoothly:

6. If it's true that it's snowing in Fairbanks today, then it's true that it's snowing in Mexico City today.

It would be very hard to accept (6) without accepting (3). With a bit of work, we can tell similar stories about the other standard paradoxes. The above truth-value-quantification technique works equally well for both the true⊃true and the false⊃false paradoxes. The remaining family of paradoxes are the false⊃true ones. For instance, it's paradoxical to say:

7. If it's warm in the Antarctic today, it's a cool day in Waco today

even though the antecedent is false and the consequent is true, so the corresponding material conditional is true. But now:

8. Every day that's other than today or on which it's warm in the Antarctic is a day that's other than today or on which it's cool in Waco.

So by (1)→(2):

9. If today is other than today or it's warm in the Antarctic today, then today is other than today or today it's cool in Waco.

And it would be hard to accept (9) without accepting (7). (I made the example a bit more complicated than it might technically need to be in order not to have a case of (1) where there are no Gs. One might think for Aristotelian logic reasons that that case stands apart.)

This suggests that if we object to the "material conditional" reading of "If… then…", we should object to the "material quantification" reading of "Every G is H". But many object to the first who do not object to the second.

Monday, November 28, 2016

Are we all seriously impaired?

When I taught calculus, the average grade on the final exam was around 55%. One could make the case that this means that our grading system is off: that everybody's grades should be way higher. But I suspect that's mistaken.
The average grasp of calculus in my students probably really wasn't good enough for one to be able to say with a straight face that they "knew calculus". Now, I think I was a pretty rotten calculus teacher. But such grades are not at all unusual in calculus classes. And if one didn't have the pre-selection that colleges have, but simply taught calculus to everybody, the grades would be even lower. Yet much of calculus is pretty straightforward. Differential calculus is just a matter of ploughing through and following simple rules. Integral calculus is definitely harder, and excelling at it requires real creativity, but one can presumably do decently just by internalizing a number of heuristics and using trial and error.

I find myself with the feeling that a normal adult human being should be able to do calculus, understand basic Newtonian physics, write a well-argued essay, deal well with emotions, avoid basic formal and informal fallacies, sing decently, have a good marriage, etc. But I doubt that the average adult human being can learn all these things even with excellent teachers. Certainly the time investment would be prohibitive.

There are two things one can say about this feeling. The first is that the feeling is simply mistaken. We're all apes. A 55% grade in calculus from an ape is incredible. The kind of logical reasoning that an average person can demonstrate in an essay is super-impressive for an ape. There is little wrong with average people intellectually. Maybe the average human can't practically learn calculus, but if so that's no more problematic than the facts that the average human can't practically learn to climb a 5.14 or run a four-minute mile. These things are benchmarks of human excellence rather than of human normalcy.

That may in fact be the right thing to say. But I want to explore another possibility: the possibility that the feeling is right. If it is right, then all of us fall seriously short of what normal human beings should be able to do. We are all seriously impaired. How could that be? We are, after all, descendants of apes, and the average human being is, as far as we can tell, an order of magnitude intellectually ahead of the best non-human apes we know. Should the standards be another order of magnitude ahead of that? I don't think there is a plausible naturalistic story that would do justice to the feeling that the average human falls that far short of where humans should be at.

But the Christian doctrine of the Fall allows for a story to be told here. Perhaps God miraculously intervened just before the first humans were conceived, and ensured that these creatures would be significantly genetically different from their non-human parents: they would have capacities enabling them to do calculus, understand Newtonian physics, write a well-argued essay, deal well with emotions, avoid fallacies, sing decently, have a good marriage, etc. (At least once calculus, physics and writing are invented.) But then the first humans misused their new genetic gifts, and many of them were taken away, so that now only statistically exceptional humans have many of these capacities, and none have them all. And so we have more genetically in common with our ape forebears than would have been the case if the first humans had acted better. However, in addition to genetics, on this story, there is the human nature, which is a metaphysical component of human beings defining what is and what is not normal for humans.
And this human nature specifies that the capacities in question are in fact a part of human normalcy, so that we are all objectively seriously impaired. Of course, this isn’t the only way to read the Fall. Another way—which one can connect in the text of Genesis with the Tree of Life—is that the first humans had special gifts, but these gifts were due to miracles beyond human nature. This may in fact be the better reading of the story of the Fall, but I want to continue exploring the first reading. If this is right, then we have an interesting choice-point for philosophy of disability. One option will be to hold that everyone is disabled. If we take this option then for policy reasons (e.g., disability accommodation) we will need a more gerrymandered concept than disability, say disability*, such that only a minority (or at least not an overwhelming majority) is disabled*. This concept will no doubt have a lot of social construction going into it, and objective impairment will be at best a necessary condition for disability*. The second option is to say only a minority (or not an overwhelming majority) is disabled, which requires disability to differ significantly from impairment. Again, I suspect that the concept will have a lot of social construction in it. So, either way, if we accept the story that we are all seriously impaired, for policy reasons we will need a disability-related concept with a lot more social construction in it. Should we accept the story that we are all seriously impaired? I think there really is an intuition that we should do many things that we can’t, and that intuition is evidence for the story. But far from conclusive. Still, maybe we are all seriously impaired, in multiple intellectual dimensions. We may even be all physically impaired. Monday, November 21, 2016 The identity of countries and persons Suppose Canada is dissolved, and a country is created, with the same people, in the same place, with the same name, symbols, and political system. Moreover, the new country isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then very little has been lost, even if it turns out that on the correct metaphysics of countries the new country is a mere replica of Canada. On the other hand, suppose Jean Vanier is dissolved, and a new person is created, with the same matter and shape, in the same place, with the same name, apparent memories and character. Moreover, the new person isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then if on the correct metaphysics of persons the new person is a mere replica of Jean Vanier, much has been lost, even if Vanier’s loving contributions continue through the new person. This suggests an interesting asymmetry between social entities and persons. For social entities, the causal connections and qualitative and material similarities across time matter much more than identity itself. For persons, the identity itself matters at least as much as these connections and similarities. Perhaps the explanation of this fact is that for social entities there is nothing more to the entity than the persons and relationships caught up in them, while for persons there is something more than temporal parts and their relationships. Friday, November 18, 2016 An Aristotelian picture of set theory There are some sets we need just because of the fundamental axioms of set theory, whatever these are (ZF? ZFC?). 
Probably, we could satisfy the fundamental axioms of set theory with a collection of sets that in some sense is countable. But then we need to add some sets because the world is arranged thus and so. For instance, we may need to add a real number representing the exact distance between my thumbs in Planck units. (If the world is describable as a vector in a separable Hilbert space, all we need to add can be encoded as a single real number.) This is a very Aristotelian picture: the sets are an abstraction from the concrete reality of the world. On this Aristotelian picture, what sets exist might well have been different had I wiggled my thumb. Perhaps, then, some of the non-fundamental axioms of set theory are contingent.

Thursday, November 17, 2016

Against isotropy

We think of Euclidean space as isotropic: any two points in space are exactly alike both intrinsically and relationally, and if we rotated or translated space, the only changes would be to the bare numerical identities of the points; qualitatively everything would stay the same, both at the level of individual points and of larger structures. But our standard mathematical models of Euclidean space are not like that. For instance, we model Euclidean space on the set of triples (x, y, z) of real numbers. But that model is far from isotropy. For instance, some points, like (2, 2, 2), have the property that all three of their coordinates are the same, while others, like (2, 3, 2), have the property that exactly two of their coordinates are the same, and yet others, like (3, 1, 2), have the property that their coordinates are all different.

Even in one dimension, say that of time, when we represent the dimension by real numbers we do not have isotropy. For instance, if we start with the standard set-theoretic construction of the natural numbers as 0 = ⌀, 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ... and ensure that the natural numbers are a subset of the reals, then 0 will be qualitatively very different from, say, 3. For instance, 0 has no members, while 3 has three members. (Perhaps, though, we do not embed the set-theoretic natural numbers into the reals, but make all reals, including those that are natural, into Dedekind cuts. But we will still have qualitative differences, just buried more deeply.)

The way we handle this in practice is that we ignore the mathematical structure that is incompatible with isotropy. We treat the Cartesian coordinate structure of Euclidean space as a mere aid to computation, while the set-theoretic construction of the natural numbers is ignored completely. Imagine the look of incomprehension we'd get from a scientist if one said something like: "At a time t2, the system behaved thus-and-so, because at a time t1 that is a proper subset of t2, it was arranged thus-and-so." Times, even when represented mathematically as real numbers, just don't seem the sort of thing to stand in subset relations. But on the Dedekind-cut construction of real numbers, an earlier time is indeed a proper subset of a later time.

But perhaps there is something to learn from the fact that our best mathematical models of isotropic space and time themselves lack true isotropy. Perhaps true isotropy cannot be achieved. And if so, that might be relevant to solving some problems.

First, probabilities. If a particle is on a line and I have no further information about it except that the line is truly isotropic, then my probabilities for the particle's position should be isotropic too.
But that cannot be coherently modeled with classical (countably additive and normalized) probabilities. This is just one of many, many puzzles involving isotropy. Well, perhaps there is no isotropy. Perhaps points differ qualitatively. These differences may not be important to the laws of nature, but they may be important to the initial conditions. Perhaps, for instance, nature prefers the particles to start out at coordinates that are natural numbers.

Second, the Principle of Sufficient Reason. Leibniz argued against the substantiality of space on the grounds that there could be no explanation of why things are where they are rather than being shifted or rotated by some distance. But that assumed real isotropy. If there is deep anisotropy, there could well be reasons why things are where they are. Perhaps, for instance, there is a God who likes to put particles at coordinates whose binary digits encode his favorite poems. Of course, one can get out of Leibniz's own problem by supposing with him that space is relational. But if the relation that constitutes space is metric, then the problem of shifts and rotations can be replaced by a problem of dilation: why aren't objects all 2.7 times as far apart as they are? Again, that problem assumes that there isn't a deep qualitative structure underneath the numbers.

Wednesday, November 16, 2016

Universal countable numerosity: A hypothesis worth taking seriously?

Here's a curious tale about sets and possible worlds: What sets there are varies between metaphysically possible worlds, and for any possible world w1 the sets at w1 satisfy the full ZFC axioms, and there is also a possible world w2 at which there exists a set S such that:

1. At w2, there is a bijection of S onto the natural numbers (i.e., a function that is one-to-one and whose range is all of the natural numbers).
2. The members of S are precisely the sets that exist at w1.

Suppose that this tale is true. Then assume S5 and this further principle:

3. If two sets A and B are such that possibly there is a bijection between them, then they have the same numerosity.

(Here I distinguish between "numerosity" and "cardinality": to have the same cardinality, they need to actually have a bijection.) Then:

4. Necessarily, all infinite sets have the same numerosity, and in particular necessarily all infinite sets have the same numerosity as the set of natural numbers.

For if A and B are infinite sets in w1, then at w2 they are subsets of the countable-at-w2 set S, and hence at w2 they have a bijection with the naturals, and so by (3) they have the same numerosity. Given the tale, there is then an intuitive sense in which all infinite sets are the same size. But it gets more fun than that. Add this principle:

5. If two pluralities are such that possibly there is a bijection between them, then the two pluralities have the same numerosity.

(Here, a bijection between the xs and the ys is a binary relation R such that each of the xs stands in R to a unique one of the ys, and vice versa.) Then:

6. Necessarily, the plurality of sets has the same numerosity as the plurality of natural numbers.

For if the xs are the plurality of sets of w1, then there will be a world w2 and a countable-at-w2 set S such that the xs are all and only the members of S. Hence, there will be a bijection between the xs and the natural numbers at w2, and hence at w1 they will have the same numerosity by (5).
So if my curious tale is true, not only does each infinite set have the same numerosity as every other, but the plurality of sets has the same numerosity as each of these infinite sets. We can now say that a set or plurality has countable numerosity provided that it is either finite or has the same numerosity as the naturals. Then the conclusion of the tale is that each set (finite and infinite), as well as the plurality of sets, has countable numerosity. I.e., universal countable numerosity.

But hasn't Cantor proved this is all false? Not at all. Cantor proved that this is false if we put "cardinality" in place of "numerosity", where cardinality is defined in terms of actual bijections while numerosity is defined in terms of possible bijections. And I think that possible bijections are a better way to get at the intuitive concept of the count of members.

Still, is my curious tale mathematically consistent? I think nobody knows. Will Brian, a colleague in the Mathematics Department, sent me a nice proof which, assuming my interpretation of its claims is correct, shows that if ZFC + "there is an inaccessible cardinal" is consistent, then so is my tale. And we have no reason to doubt that ZFC + "there is an inaccessible cardinal" is consistent. So we have no reason to doubt the consistency of the tale. As for its truth, that's a different matter. One philosophically deep question is whether there could in fact be so much variation as to what the sets are in different metaphysically possible worlds.

Monday, November 14, 2016

From a principle about looking down on people to some controversial consequences

It's wrong to look down on people simply for having physical or intellectual disabilities. But it doesn't seem wrong to look down on, say, someone who has devoted her life to the pursuit of money above all else. Where is the line to be drawn? Whom is it permissible for people to look down on?

Before answering that question, I need to qualify it. I think that a plausible case can be made that it is not permissible for us to look down on anyone. The reason for that is that (a) we have all failed morally in many ways, (b) we would very likely have failed in many more had we been in certain other circumstances that we are lucky not to have been in, and (c) we are not epistemically in a position to judge that a specific other person's failures are morally worse than our own would likely be in circumstances that it is only our luck (or divine providence) not to be in, especially when we take into account the fact that we know much less about other people's responsibility than about our own. So I want to talk, instead, about when it is intrinsically permissible to look down on people—when it would be permissible if we were in a position to throw the first mental stone.

Let's go back to the person who has devoted her life to the pursuit of money above all else. Suppose that it turns out that she suffered from a serious intellectual disability that rendered her incapable of grasping values. But her parents, with enormous but misguided rehabilitative effort, have managed to instill in her the grasp of one value: that of money. Given this backstory, it's clear that looking down on her for pursuing money above all else is not relevantly different from looking down on her for having a disability.
On the other hand, it still doesn't seem wrong to look down on a person of normal intellectual capacities in normal circumstances who has devoted her life to the pursuit of money through making greedy choice after greedy choice. This suggests a plausible principle:

1. It is only permissible to look down on someone if one is looking down on her for morally wrong choices she is responsible for and conditions that are caused by these choices in a relevant way.

If so, then it is wrong to look down on people for reasoning badly, unless this bad reasoning is a function of morally wrong choices they are responsible for. This has some interesting implications. It sure seems typically intrinsically permissible to look down on someone who reasons badly because she is trying to avoid finding out that she's wrong about something. If this is right, then typically trying to avoid finding out that one is wrong is itself morally wrong. This suggests:

2. We typically have a moral duty (an imperfect one, to be sure) to strive to avoid error.

Moreover, I think it is implausible to think that this moral duty holds simply in virtue of the practical consequences of error. Suppose that Sally has an esoteric astronomical theory that she isn't going to share with anybody but you, and you tell her that the latest issue of Nature has an article refuting the theory. Sally, however, refuses to look at the data. This seems like the kind of avoidance of finding out that one is wrong that it is intrinsically permissible to look down on, even though it has no negative practical consequences for Sally or anybody else. Thus:

3. We typically have a moral duty (an imperfect one) to strive for its own sake to avoid error.

But the intrinsic bad in being wrong is primarily to oneself (there might be some derivative bad to the community, but this does not seem strong enough to ground the duty in question). Hence:

4. We have duties to self.

Thus, the principle (1), together with some plausible considerations, leads to a controversial conclusion about the morals of the intellectual life, namely (3), and to the controversial conclusion (4) that we have duties to self.

Friday, November 11, 2016

Cambridge events and objects

Suppose Joe Shmoe died on February 17, 1982, sadly leaving no relatives or friends behind. Every year, on February 17, the anniversary of Shmoe's death occurs. No one marks it in any way. But it occurs, every year, invariably. It is what one might call a Cambridge event, whose occurrence does not mark any real change in the world.

Similarly, there seem to be Cambridge objects. Just as the anniversary is defined by a certain temporal distance, we can define an object by a certain spatial distance. For instance, let me introduce an object: my visual focus. My visual focus is a moving object a certain distance in front of my eyes--sometimes moving very fast (in principle, a visual focus could move faster than light!). My visual focus is a persisting object, unless I close my eyes (I am not sure whether it persists when I blink or just blinks out of existence). Curiously, my visual focus, while typically having a spatiotemporal location, could also exist outside of spacetime. Imagine that I am focused a meter ahead of my nose, and space has an edge. I walk towards that edge, unblinking and never refocusing, rapt in thought about ontology. Before my nose touches the edge of space, my visual focus will have moved beyond it!
We can say that the visual focus is "a meter ahead of my face", but that isn't an actual place. So we cannot identify the visual focus with a whole made up of spacetime locations.

My brief remarks have taught you, I think, a little bit about how to talk about visual foci. You now know roughly when my, or your, visual focus exists. You know something about its persistence conditions. You know a little bit about what predicates apply to it. And there is a vast range of stuff that's as yet underdetermined, and could be determined in more than one way. For instance, how wide is the visual focus? Does it shift very quickly with saccades?

But of course it's also clear that there has to be a sense in which there really are no visual foci. Objects that can leave our spacetime so easily, that can move faster than light, and that are entirely outside us but are entirely grounded in our state just aren't really there. They are Cambridge objects instead of real ones, akin to Cambridge events, Cambridge properties and Cambridge changes.

This post is inspired by John Giannini's dissertation.

Tuesday, November 8, 2016

A Traveling Forms Interpretation of Quantum Mechanics

Paper is here. Abstract: The Traveling Minds interpretation of Quantum Mechanics is a no-collapse interpretation on which the wavefunction evolves deterministically like in the Everett branching multiple-worlds interpretation. As in the Many Minds interpretation, minds navigate the Everett branching structure following the probabilities given by the Born rule. However, in the Traveling Minds interpretation (a variant by Squires and Barrett of the single-mind interpretation), the minds are guaranteed to all travel together--they are always found in the same branch. The Traveling Forms interpretation extends the Traveling Minds interpretation in an Aristotelian way by having the forms of non-minded macroscopic entities that have forms, such as plants, lower animals, bacteria and planets, travel along the branching structure together with the minds. As a result, while there is deterministic wavefunction-based physics in the branches without minds, non-reducible higher-level structures like life are found only in the branch with minds.

Ontological grounding nihilism

Some people are attracted to nihilism about proper parthood: no entity has proper parts. I used to be rather attracted to that myself, but I am now finding that a different thesis fits better with my intuitions: no entity is (fully) grounded. Or to put it positively: only fundamental entities exist.

This has some of the same consequences that nihilism about proper parthood would. For instance, on nihilism about proper parthood, there are no artifacts, since if there were any, they'd have proper parts. But on nihilism about ontological grounding, we can also argue that there are no artifacts, since the existence of an artifact would be grounded in social and physical facts. Moreover, nihilism about ontological grounding implies nihilism about mereological sum: for the existence of a mereological sum would be grounded in the existence of its proper parts. However, nihilism about ontological grounding is compatible with some things having parts--but they have to be things that go beyond their parts, things whose existence is not grounded in the existence and relations of their parts.

Monday, November 7, 2016

The direction of fit for belief

It's non-instrumentally good for me to believe truly and it's non-instrumentally bad for me to believe falsely.
Suppose, then, that I believe a proposition p. Does that give you non-instrumental reason to make p true? Saying "Yes" is counterintuitive. And it destroys the direction-of-fit asymmetry between beliefs and desires. But it's hard to say "No", given that surely if something is non-instrumentally good for me, you thereby have non-instrumental reason to provide it.

Here is a potential solution. We sometimes have desires that we do not want other people to take into account in their decision-making. For instance, a parent might want a child to become a mathematician, but would nonetheless be committed to having the child decide on their life-direction independently of the parent's desires. In such a case, the parent's desire that the child become a mathematician might provide the child with a first-order reason to become a mathematician, but this reason might be largely or completely excluded by the parent's higher-order commitment. And we can explain why it is good to have such an exclusion: if a parent couldn't have such an exclusion, she'd either have to exercise great self-control over her desires or would have to hide them from her children.

Perhaps we similarly have a blanket higher-order reason that excludes promoting p on the grounds that someone believes p. And we can explain why it is good to have such an exclusion: it decreases the degree of conflict of interest between epistemic and pragmatic reasons. For instance, without such an exclusion, I'd have pragmatic reason to avoid pessimistic conclusions, because as soon as we came to them, we and others would have reason to make the conclusions true.

By suggesting that exclusionary reasons are more common than I previously thought, this weakens some of my omnirationality arguments.

Friday, November 4, 2016

My new toy

Wednesday, November 2, 2016

Cessation of existence and theories of persistence

Suppose I could get into a time machine and instantly travel forward by a hundred years. Then over the next hundred (external) years I don't exist. But this non-existence is not intrinsically a harm to me (it might be accidentally a harm if over these hundred years I miss out on things). So a temporary cessation of existence is not an intrinsic harm to me. On the other hand, a permanent cessation of existence surely is an intrinsic harm to me.

These observations have interesting connections with theories of persistence and time. First, observe that whether a cessation of existence is bad for me depends on whether I will come back into existence. This fits neatly with four-dimensionalism and less neatly with three-dimensionalism. If I am a four-dimensional entity, it makes perfect sense that as such I would have an overall well-being, and that this overall well-being should depend on the overall shape and size of my four-dimensional life, including my future life. Hence it makes sense that whether I undergo a permanent or impermanent cessation of existence makes a serious difference to me.

But suppose I am three-dimensional and consider these two scenarios:

1. In 2017 I will permanently cease to exist.
2. In 2017 I will temporarily cease to exist and come back into existence in 2117.

I am surely worse off in (1). But if I am three-dimensional, then to be worse off, I need to be worse off as a three-dimensional being, at some time or other. Prior to 2117, I'm on par as a three-dimensional being in the two scenarios. So if there is to be a difference in well-being, it must have something to do with my state after 2117.
But it seems false that, say, in 2118, I am worse off in (1) than in (2). For how can I be better or worse off when I don't exist?

The three-dimensionalist's best move, I think, is to say that I am actually worse off prior to 2017 in scenario (1) than in scenario (2). For, prior to 2017, it is true in scenario (1) that I will permanently cease to exist while in (2) it is false that I will do so. It can indeed happen that one is worse off at time t1 in virtue of how things will be at a later time t2. Perhaps the athlete who attains a world-record that won't be beaten for ten years is worse off at the time of the record than the athlete who attains a world-record that won't be beaten for a hundred years. Perhaps I am worse off when publishing a book that will be ignored than when publishing a book that will be taken seriously. But these are differences in external well-being, like the kind of well-being we have in virtue of our friends doing badly or well. And it is counterintuitive that permanent cessation of existence is only a harm to one's external well-being. (The same problem afflicts Thomas Nagel's theory that the badness of death has to do with unfinished projects.)

The problem is worst on open future views. For on open future views, prior to the cessation of existence there may be no fact of the matter of whether I will come back into existence, and hence no difference in well-being. The problem is also particularly pressing on exdurantist views on which I am a three-dimensional stage, and future stages are numerically different from me. For then the difference, prior to 2017, between the two scenarios is a difference about what will happen to something numerically different from me. The problem is also particularly pressing on presentist and growing block views, for it is odd to say that I am better or worse off in virtue of non-existent future events. Of the three-dimensionalists, probably the best off is the eternalist endurantist. But even there the assimilation of the difference between (1) and (2) to external well-being is problematic.

Tuesday, November 1, 2016

I was doing logic problems on the board in class and thinking about rock climbing, and I was struck by the joy of knowing one's made progress on a finite task. You can be pretty confident that if you've got an existential premise and you've set up an existential elimination subproof then you've made progress. You can be pretty confident that if you've got to a certain position on the wall and there is no other way to be at that height then you've made progress. And there is a delight in being really confident that one has made progress.

Moreover, the value of the progress doesn't seem here to be merely instrumental. Even if in the end you fail, still having made progress feels valuable in and of itself. One can try to say that what's valuable is the practice one gets, or what the progress indicates about one's skills, but that doesn't seem right. It seems that the progress itself is valuable. Of course, it has to be genuine progress, not mere going down a blind alley (though recognizing a blind alley, in a scenario where there are only finitely many options, is itself progress).

The value of progress (as such) at a task derives from the value of fulfilling the task, much as the value of striving at a task derives from the value of fulfilling it. But in both cases this is not a case of end-to-means value transfer. Maybe this has something to do with the idea developed by Robert M. Adams of standing for a good.
Striving and a fortiori progress are ways of standing and moving in favor of a task. And that's worthwhile even if one does not accomplish the task.

Monday, October 31, 2016

Realism and anti-reductionism

The ordinary sentence "There are four chairs in my office" is true (in its ordinary context). Furthermore, its being true tells us very little about fundamental ontology. Fundamental physical reality could be made out of a single field, a handful of fields, particles in three-dimensional space, particles in ten-dimensional space, a single vector in a Hilbert space, etc., and yet the sentence could be true.

An interesting consequence: Even if in fact physical reality is made out of particles in three-dimensional space, we should not analyze the sentence to mean that there are four disjoint pluralities of particles each arranged chairwise in my office. For if that were what the sentence meant, it would tell us which of the fundamental physical ontologies is correct. Rather, the sentence is true because of a certain arrangement of particles (or fields or whatever).

If there is such a broad range of fundamental ontologies that "There are four chairs in my office" is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that "There are four chairs in this Minecraft house" is true. It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation.

But now suppose that the same kind of thing is true for other sentences about physical things like tables, dogs, trees, human bodies, etc.: each of these sentences can be made true by a wide array of physical ontologies. Then it seems that nothing we say about physical things rules out sceptical scenarios: yes, I know I have two hands, but my having two hands could be grounded by facts about a computer simulation. At this point the meaningfulness of the sceptical question whether I know I am not a brain in a vat is breaking down. And with it, realism is breaking down. In order for the sceptical question to make sense, we need the possibility of saying things that cannot simply be made true by a very wide variety of physical theories, since such things will also be made true by computer simulations.

This gives us an interesting anti-reductionist argument. If the statement "I have two hands" is to be understood reductively (and I include non-Aristotelian functionalist views as reductive), then it could still be literally true in the brain-in-a-vat scenario. But if anti-reductionism about hands is true, then the statement wouldn't be true in the brain-in-a-vat scenario. And so I can deny that I am in that scenario simply by saying "I have two hands."

But maybe I am moving too fast here. Maybe "I have two hands" could be literally true in a brain-in-a-vat scenario. Suppose that the anti-reductionism consists of there being Aristotelian forms of hands (presumably accidental forms). But if, for all we know, the form of a hand can inform a bunch of particles, a fact about a vector or the region of a field, then the form of a hand can also inform an aspect of a computer simulation. And so, for all we know, I can literally and non-reductively have hands even if I am a brain in a vat.
I am not sure, however, that I need to worry about this. What is important is form, not the precise material substrate. If physical reality is the memory of a giant computer, but rather than being a mere simulation it is in fact informed by a multiplicity of substantial and accidental forms corresponding to people, trees, hands, hearts, etc., and these forms are real entities, then the scenario does not seem to me to be a sceptical scenario.

Friday, October 28, 2016

Accretion, excretion and four-dimensionalism

Suppose we are four-dimensional. Parthood simpliciter then is an eternal relation between, typically, four-dimensional entities. My heart is a four-dimensional object that is eternally a part of me, who am another four-dimensional object. But there is surely also such a thing as having a part at a time t. Thus, in utero my umbilical cord was a part of me, but it no longer is. What does it mean to have a part at a time? Here is the simplest thing to say:

1. x is a part of y at t if and only if x is a part of y and both x and y exist at t.

But (1) then has a very interesting metaphysical consequence that only a few Aristotelian philosophers endorse: parts cannot survive being accreted by or excreted from the whole. For if, say, my finger survived its removal from the whole (and not just because I became a scattered object), there would be a time at which my finger would exist but wouldn't be a part of me. And that violates (1) together with the eternality of parthood simpliciter.

This may seem to be a reductio of (1). But if we reject (1), what do we put in its place, assuming four-dimensionalism? I suspect we will have to posit a second relation of parthood, parthood-at-a-time, which is not reducible to parthood simpliciter. And that seems to be unduly complex. So I propose that the four-dimensionalist embrace (1) and conclude to the thesis that parts cannot survive their accretion or excretion.

Dualist survivalism

According to dualist survivalism, at death our bodies perish but we continue to exist with nothing but a soul (until, Christians believe, the resurrection of the dead, when we regain our bodies). A lot of the arguments against dualist survivalism focus on how we could exist as mere souls. First, such existence seems to violate weak supplementation: my soul is a proper part of me, but if the body perished, my soul would be my only part—and yet it would still be a proper part (since identity is necessary). Second, it seems to be an essential property of animals that they are embodied, an essential property of humans that they are animals, and an essential property of us that we are humans.

There are answers to these kinds of worries in the literature, but I want to note that things become much simpler for the dualist survivalist if she accepts a four-dimensionalism that says that we are four-dimensional beings (this won't be endurantist, but it might not be perdurantist either).

First, there will be a time t after my death (and before the resurrection) such that the only proper part of mine that is located at t is my soul. However, the soul won't be my only part. My arms, legs and brain are eternally my parts; it's just that they aren't located at t. There is no violation of weak supplementation. (We still get a violation of weak supplementation for the derived relation of parthood-at-t, where x is a part-at-t of y provided that x is a part of y and both x and y exist at t.
But why think there is weak supplementation for parthood-at-t? We certainly wouldn't expect weak supplementation to hold for parthood-at-z, where z is a spatial location and x is a part-at-z of y provided that x is a part of y and both x and y are located at z.)

Second, it need not follow from its being an essential property of animals that they are embodied that they have bodies at every time at which they exist. Compare: It may be an essential property of a cell that it is nucleated. But the cell is bigger spatially than the nucleus, so it had better not follow that the nucleus exists at every spatial location at which the cell does. So why think that the body needs to exist at every temporal location at which the animal does? Why can't the animal be bigger temporally than its body?

Of course, those given to three-dimensional thinking will say that I am missing crucial differences between space and time.

Thursday, October 27, 2016

Three strengths of desire

Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I'd like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something's gone wrong.

That's too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don't know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don't know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don't know.

Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don't know than that I know, and I can't say that the things that I do know are ones that I desire so much more strongly to know than the ones I don't know so as to balance them out. But I don't think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn't be any better off.

Thinking about this suggests there are three different strengths in a desire:

1. Sp: preferential strength, determined by which things one is inclined to choose over which.
2. Sh: happiness strength, determined by how happy having the desire fulfilled makes one.
3. Sm: misery strength, determined by how miserable having the desire unfulfilled makes one.

It is natural to hypothesize that (a) the contribution to well-being is Sh when the desire is fulfilled and −Sm when it is unfulfilled, and (b) in a rational agent, Sp = Sh + Sm. As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent whether to avoid the pain or learn whether the Goldbach Conjecture is true.
But they are differently divided: in the pain case Sm >> Sh and in the Goldbach case Sm << Sh.

There might be some desires where Sm = 0. In those cases we think "It would be nice…" For instance, I might have a desire that some celebrity be my friend. Here, Sm = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing to trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely Sm >> 0: I would pine if the friendship weren't there.

(We might think a hedonist has a story about all this: Sh measures how pleasant it is to have the desire fulfilled and Sm measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, Sm >> 0, but there is no pain in having the desire unfulfilled, since when it's unfulfilled I don't know about it.)

Wednesday, October 26, 2016

"Should know"

I've been thinking about the phrase "x should know that s". (There is probably a literature on this, but blogging just wouldn't be as much fun if one had to look up the literature!) We use this phrase—or its disjunctive variant "x knows or should know that s"—very readily, without its calling for much evidence about x.

• "As an engineer Alice should know that more redundancy was needed in this design."
• "Bob knows or should know that his behavior is unprofessional for a librarian."
• "Carl should have known that genocide is wrong."

Here's a sense of "x should know that s": x has some relevant role R and it is normal for those in R to know that s under the relevant circumstances. In that sense, to say that x should know that s we don't need to know anything specific about x's history or mental state, other than that x has role R. Rather, we need to know about R: it is normal engineering practice to build in sufficient redundancy; librarians have an unwritten code of professional behavior; human beings normally have a moral law written in their hearts.

This role-based sense of "should know" is enough to justify treating x as a poor exemplar of the role R when x does not in fact know that s. When R is a contingent role, like engineer or librarian, it could be sufficient for drumming x out of R. But we sometimes seem to use a "should know" claim to underwrite moral blame. And the normative story I just gave about "should know" isn't strong enough for that. Alice might have had a really poor education as an engineer, and couldn't have known better. If the education was sufficiently poor, we might kick her out of the profession, but we shouldn't blame her morally.

Carl, of course, is a case apart. Carl's ignorance makes him a defective human being, not just a defective engineer or librarian. Still, a defective human being is not the same as a morally blameworthy human being. And in Carl's case we can't drum him out of the relevant role without being able to levy moral blame on him, as drumming him out of humanity is, presumably, capital punishment. However, we can lock him up for the protection of society.

On the other hand, we could take "x should know that s" as saying something about x's state, like that it is x's own fault if x doesn't know. But in that case, I think people often use the phrase without sufficient justification. Yes, it's normal to know that genocide is wrong.
But we live in a fallen world where people can fall very far short of what is normal through no fault of their own, by virtue of physical and mental disease, the intellectual influence of others, and so on.

I worry that in common use the phrase "x should know that s" has two rationally incompatible features:

• Our evidence only fits with the role-based normative reading.
• The conclusions only fit with the personal fault reading.

Monday, October 24, 2016

Two senses of "decide"

1. Alice sacrifices her life to protect her innocent comrades.
2. Bob decides that if he ever has the opportunity to sacrifice his life to protect his innocent comrades, he'll do it.

We praise Alice. But as for Bob, while we commend his moral judgment, we think that he is not yet in the crucible of character. Bob's resolve has not yet been tested. And it's not just that it hasn't been tested. Alice's decision not only reveals but also constitutes her as a courageous individual. Bob's decision falls short both in the revealing and in the constituting department (it's not his fault, of course, that the opportunity hasn't come up).

Now compare Alice and Bob to Carl:

3. Carl knows that tomorrow he'll have the opportunity to sacrifice his life to protect his innocent comrades, and he decides he will make the sacrifice.

Carl is more like Bob than like Alice. It's true that Carl's decision is unconditional while Bob's is conditional. But even though Carl's decision is unconditional, it's not final. Carl knows (at least on the most obvious way of spelling out the story) that he will have another opportunity to decide come tomorrow, just as Bob will still have to make a final decision once the opportunity comes up.

I am not sure how much Bob and Carl actually count as deciding. They are figuring out what would or will (respectively) be the thing to do. They are making a prediction (hypothetical or future-oriented) about their action. They may even be trying by an act of will to form their character so as to determine that they would or will make the sacrifice. But if they know how human beings function, they know that their attempt is very unlikely to be successful: they would or will still have a real choice to make. And in the end it probably wouldn't surprise us too much if, put to the test, Bob and Carl failed to make the sacrifice. Alice did something decisive. Bob and Carl have yet to do so. There is an important sense in which only Alice decided to sacrifice her life.

The above were cases of laudable action. But what about the negative side? We could suppose that David steals from his employer; Erin decides that she will steal if she has the opportunity; and Frank knows he'll have the opportunity to steal and decides he'll take it. I think we'll blame Erin and Frank much more than we'll praise Bob and Carl (this is an empirical prediction—feel free to test it). But I think that's wrong. Erin and Frank haven't yet gone into the relevant crucible of character, just as Bob and Carl haven't. Bob and Carl may be praiseworthy for their present state; Erin and Frank may be blameworthy for theirs. But the praise and the blame shouldn't go quite as far as in the case of Alice and David, respectively. (Of course, any one of the six people might for some other reason, say ignorance, fail to be blameworthy or praiseworthy.)

This is closely connected to my previous post.

Thursday, October 20, 2016

Two senses of "intend"?

Consider these sentences:

1. Intending to kill the wolverine, Alice pulled the trigger.
2. Intending to get to the mall, Bob started his car.

If Alice pulls the trigger intending to kill the wolverine and the wolverine survives, then necessarily Alice's action is a failure. But suppose that Bob intends to get to the mall, starts his car, changes his mind, and drives off for a hike in the woods. None of the actions described is a failure. He just changed his mind.

If Alice changed her mind nanoseconds after the bullet left the muzzle, and it so happens the wolverine survived, it is still true that Alice's action failed. Given her intention, she tried to kill the wolverine, and failed. In the change of mind case, Bob, however, didn't try to get to the mall. Rather, he tried to start to get to the mall, and he also started trying to get to the mall. His trying to start was successful—he did start to get to the mall. But it makes no sense to attribute either success or failure to a mere start of trying.

There seems to be a moral difference, too. Suppose that killing the wolverine and getting to the mall are both wrong (maybe the wolverine is no danger to Alice, and Bob has promised his girlfriend not to go to this mall). Then Alice gets the opprobrium of being an attempted wolverine killer by virtue of (1), while Bob isn't yet an attempted mall visitor by virtue of (2)—only when he strives to propel his body through the door does he become an attempted mall visitor. Even if killing the wolverine and getting to the mall are equally wrong, Bob has done something less bad—for the action he took in virtue of (2) was open to the possibility of changing his mind, as bringing it to completion would require further voluntary decisions. What Bob did was still wicked, but less so than what Alice did.

Action (1) commits Alice to killing the wolverine: if the wolverine fails to die, Alice is still an attempted wolverine killer. But Bob has undertaken no commitment to visiting the mall by starting the car. This suggests to me that perhaps "intends" may be used in different senses in (1) and (2). In (1), it may be an "intends" that commits Alice to wolverine killing; in (2), it may be an "intends" that only commits Bob to starting trying to visit the mall. In (1), we have an intending that p that constitutes an action as a trying to bring it about that p.
Hamiltonian in terms of ladders, question.

Jun 21, 2008 #1

I'm preparing for an exam at the moment and in one of the past exams there is a question asking to prove that the Hamiltonian operator can be expressed in terms of the ladder operators. The solution is this:

[worked solution image missing from the archive; it rewrote the harmonic oscillator Hamiltonian step by step in terms of the ladder operators]

(The minus sign didn't come out in the last line, and obviously there is one more step that I left off the end, but you get the picture.)

Following this from the second to the third line seems to imply

[tex]\frac{d}{dx}x = 1 + x\frac{d}{dx}[/tex]

But I would expect

[tex]\frac{d}{dx}x = 1[/tex]

because the derivative of x wrt x is 1. My maths isn't terribly good, and there is obviously something simple that I am missing, but I've been staring at this for days and can't seem to get it. Could someone please explain what I'm missing? I could answer this question if it comes up on the exam, because I can remember how it goes, but I want to actually be able to understand it. Thanks in advance.

Jun 21, 2008 #2

These are operators, acting on functions. If you want to prove an operator identity A=B, you can do it by proving that Af=Bf for all f. In this case, writing D instead of d/dx because I'm lazy... Dx should be interpreted as the operator that takes f to D(xf). So

[tex]Dxf = D(xf) = f + xDf[/tex]

This implies that

[tex]Dx = 1 + xD[/tex]

Jun 21, 2008 #3

Thank you so much. I've been agonising over that for days, and you've just made it so simple. :)

Jun 21, 2008 #4

BTW, an interesting issue with writing the Hamiltonian in terms of creation and annihilation operators comes from the "cluster decomposition principle." By counting the number of adjustable parameters, we can show that all operators can be written as a sum of products of creation and annihilation operators. Those creation/annihilation operators would satisfy a commutation or anticommutation relation,

[tex][a(q'),a^\dagger(q)] = \delta(q'-q)[/tex]

or

[tex]\{a(q'),a^\dagger(q)\} = \delta(q'-q)[/tex]

according to whether the field is bosonic or fermionic. Then you write down the scattering matrix element, and using the (anti-)commutation relations to move all the annihilation operators to the right, you would generate lots of delta functions. Now, assign some graphical rules to the scattering matrix element, e.g. a delta function represented by a line, an interaction represented as a vertex, etc. You will see that moving all the annihilation operators to the right decomposes the scattering matrix element into a sum of connected pieces of diagrams. For each connected scattering matrix element, we can argue from a basic topological theorem that the number of vertices and lines satisfies a certain relation. From this we can prove that the connected scattering matrix element contains exactly one spatial momentum delta function, which is what the cluster decomposition principle requires, i.e. distant experiments yield uncorrelated results.

(I'm reading Weinberg's QFT book; I'm at chapter 4. The Cluster Decomposition Principle is addressed in chapter four, it's interesting. Actually, I have a little study group here, anybody who is interested in reading Weinberg's book is welcome to discuss with me.)

(BTW, everybody is welcome to correct my concepts~)

Jun 21, 2008 #5

Is there an 'over my head' emoticon?

Jun 21, 2008 #6

Don't worry about it. :smile: He's talking about quantum field theories where operators very similar to yours show up as creation and annihilation operators.
They take n-particle states to (n+1)- or (n−1)-particle states, unless an annihilation operator acts on the vacuum (the 0-particle state), in which case the result is zero. Anyway, you were clearly talking about solving the Schrödinger equation with a harmonic oscillator potential, so you can forget about this for a couple of years.
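For reference, the derivation the thread is circling around can be written out in a few lines. This is a standard textbook sketch (in units with ℏ = m = ω = 1; conventions vary), not the original poster's scanned solution:

[tex]a = \frac{1}{\sqrt{2}}\left(x + \frac{d}{dx}\right), \qquad a^\dagger = \frac{1}{\sqrt{2}}\left(x - \frac{d}{dx}\right)[/tex]

[tex]a^\dagger a = \frac{1}{2}\left(x - \frac{d}{dx}\right)\left(x + \frac{d}{dx}\right) = \frac{1}{2}\left(x^2 - \frac{d^2}{dx^2} + x\frac{d}{dx} - \frac{d}{dx}x\right) = \frac{1}{2}\left(x^2 - \frac{d^2}{dx^2} - 1\right)[/tex]

where the last step uses the operator identity [tex]\frac{d}{dx}x = 1 + x\frac{d}{dx}[/tex] discussed above. Hence

[tex]H = \frac{1}{2}\left(-\frac{d^2}{dx^2} + x^2\right) = a^\dagger a + \frac{1}{2}[/tex]

and restoring units gives the familiar [tex]H = \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right)[/tex].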
Thursday, December 30, 2010

Stuart Kauffman Blog Series

Part One: Beyond Einstein and Schrodinger?
Part Two: The Quantum Mechanics of Closed Quantum Systems
Part Three: The Quantum Mechanics of Open Quantum Systems
Part Four: The 'Poised Realm' is Real
Part Five: The Non-Algorithmic Trans-Turing System
Part Six: We Seem to be Zombies
Part Seven: How Mind can Act Acausally on Brain?
Part Nine: What is Consciousness? A Hypothesis
Part Ten: Standing the Brain on its Head
Part Eleven: Can We Have a Responsible Free Will?

integralscience said...

Perhaps someone can help explain certain statements in Kauffman's series of posts. In part 3, he writes, "Decoherence is a dissipative process, phase information is LOST from the open quantum system to the environment, thus during decoherence, the Schrodinger equation cannot propagate unitarily." In part 4, he says, "During decoherence, the Schrodinger equation does not apply as it does in closed quantum systems where it is time reversible." And in part 7 he states, "It is essential that decoherence is NOT a causal process."

These statements seem misleading, if not incorrect. Consider a closed system composed of an open system and its environment. The Schrödinger equation evolves unitarily and causally, including during decoherence. The phase information is not actually lost - it is only lost FAPP (for all practical purposes). One can of course choose to divide this system into two subsystems, the open system and its environment, but that imaginative act does not magically produce some new physics. It seems that Kauffman is attributing new physics to open systems when all that is really happening is that one is ignoring the information loss from the open system to the environment. It's like saying that Newton's law doesn't apply to a falling object in an open system because we've chosen to ignore air resistance due to the interaction of the ball with the environment. It's not suddenly some mysterious phenomenon that doesn't obey causality or Newton's law.

Steve said...

Hi and thanks for the comment. I think you're right that there's nothing new in decoherence theory as sketched in those quotes, and the total system-plus-environment evolves unitarily. If there is new physics it is in the idea he also discusses of a system oscillating between entangled and classical (either decohered or post-measurement collapse) status.

boboniboni said...

Steve, your blog is amazing. I like Stuart Kauffman, gonna check this.

Nicolás Díaz said...

Whoa. I did the same thing today, I was following Kauffman's posts and put links to each post in my blog. But reading this I see I missed one of them. I still have my doubts about his hypothesis (using quantum entanglement for this is way too risky) but it is great stuff to read. I'm following you now in Google Reader. Greetings from Mexico.

Steve said...

Thank you Nicolás. We can't be sure entanglement is involved in the brain, but I think it's a worthy speculation that should be testable some day.
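To make the first comment's point concrete, here is a toy numerical sketch (my own illustration, not from Kauffman's posts or the comments; all names in it are mine): a two-level system entangles with a one-qubit "environment". The global state stays pure under unitary evolution, while the reduced state of the open system alone loses its off-diagonal phase information.

import numpy as np

# Toy illustration of decoherence as entanglement with an environment.
# Globally the evolution is unitary and the total state stays pure; only
# the reduced state of the open system loses its phases.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# System starts in a superposition, environment in |0>.
system = (ket0 + ket1) / np.sqrt(2)
env = ket0
psi = np.kron(system, env)

# A CNOT acts as a crude "environment monitoring the system" interaction.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi = cnot @ psi  # still a pure state: unitary evolution, nothing lost globally

rho = np.outer(psi, psi.conj())  # global density matrix, purity = 1
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out environment

print("global purity:", np.trace(rho @ rho).real)   # prints 1.0 (pure)
print("reduced state of system:\n", rho_sys.real)   # diagonal: off-diagonal phases gone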
Wednesday, November 20, 2019

Can we tell if there's a wormhole in the Milky-Way?

This week I got a lot of questions about an article by Dennis Overbye in the New York Times, titled "How to Peer Through a Wormhole." This article says "Theoretically, the universe may be riddled with tunnels through space and time" and goes on to explain that "Wormholes are another prediction of Einstein's theory of general relativity, which has already delivered such wonders as an expanding universe and black holes." Therefore, so Overbye tells his readers, it is reasonable to study whether the black hole in the center of our Milky Way is such a wormhole.

The trouble with this article is that it makes it appear as if wormholes are a prediction of general relativity comparable to the prediction of the expansion of the universe and the prediction of black holes. But this is most definitely not so. Overbye kind of says this by alluding to some "magic" that is necessary to have wormholes, but unfortunately he does not say it very clearly. This has caused quite some confusion. On Twitter, for example, Natalie Wolchover has put wormholes on par with gravitational waves.

So here are the facts. General Relativity is based on Einstein's field equations which determine the geometry of space-time as a consequence of the energy and matter that is in that space-time. General Relativity has certain kinds of wormholes as solutions. These are the so-called Einstein-Rosen bridges. There are two problems with those.

First, no one knows how to create them with a physically possible process. It's one thing to say that the solution exists in the world of mathematics. It's another thing entirely to say that such a solution describes something in our universe. There are whole books full of solutions to Einstein's field equations. Most of these solutions have no correspondence in the real world.

Second, even leaving aside that they won't be created during the evolution of the universe, nothing can travel through these wormholes. If you want to keep a wormhole open, you need some kind of matter that has a negative energy density, which is stuff that for all we know does not exist. Can you write down the mathematics for it? Yes. Do we have any reason whatsoever to think that this mathematics describes the real world? No.

And that, folks, is really all there is to say about it. It's mathematics and we have no reason to think it's real. In this, wormholes are very, very different to the predictions of the expanding universe, gravitational waves, and black holes. The expanding universe, gravitational waves and black holes are consequences of general relativity. You have to make an effort to keep them from existing in the theory. It's the exact opposite with wormholes. You have to bend over backwards to make the math work so that they can exist.

Now, certain people like to tell me that this should count as "healthy speculation" and I should stop complaining about it. These certain people are either physicists who produce such speculations or science writers who report on them. In other words, they are people who make a living getting you to believe this mathematical fiction. But there is nothing healthy about this type of speculation. It's wasting time and money that would be better used on research that could actually advance physics.

Let me give you an example to see the problem. Suppose the same thing happened in medicine. Doctors would invent diseases that we have no reason to think exist.
They would then write papers about how to diagnose those invented diseases and how to cure those invented diseases and, for good measure, argue that someone should do an experiment to look for their invented diseases. Sounds ridiculous? Yeah, it is ridiculous. But that's exactly what is going on in the foundations of physics, and it has been going on for decades, which is why no one sees anything wrong with it anymore.

Is there at least something new that would explain why the NYT reports on this? What's new is that two physicists have succeeded in publishing a paper which says that if the black hole in the center of our galaxy is a traversable wormhole then maybe we might be able to see this. The idea is that if there is stuff moving around the other end of the wormhole then we might notice the gravitational influence of that stuff on our side of the wormhole.

Is it possible to look for this? Yes, it is also possible to look for alien spaceships coming through, and chances are, next week a paper will get published about this and the New York Times will report on it.

On a more technical note, a quick remark about the paper, which you find here: The authors look at what happens with the gravitational field on one side of a non-traversable wormhole if a shell of matter is placed around the other side of the wormhole. They conclude: "[T]he gravitational field can cross from one to the other side of the wormhole even from inside the horizon... This is very interesting since it implies that gravity can leak even through the non-traversable wormhole."

But the only thing their equation says is that the strength of the gravitational field on one side of the wormhole depends on the matter on the other side of the wormhole. Which is correct of course. But there is no information "leaking" through the non-traversable (!) wormhole because it's a time-independent situation. There is no change that can be measured here. This isn't simply because they didn't look at the time-dependence, but because the spherically symmetric case is always time-independent. We know that thanks to Birkhoff's theorem, which says that the only spherically symmetric vacuum solution of Einstein's field equations is the static Schwarzschild geometry. We also know that gravitational waves have no monopole contribution, so there are no propagating modes in this case either.

The case that they later discuss, the one that is supposedly observable, instead talks of objects on orbits around the other end of the wormhole. This is, needless to say, not a spherically symmetric case and therefore this argument that the effect is measurable for non-traversable wormholes is not supported by their analysis. If you want more details, this comment gets it right.

Friday, November 15, 2019

Did scientists get climate change wrong?

On my recent trip to the UK, I spoke with Tim Palmer about the uncertainty in climate predictions.

Saturday, November 09, 2019

How can we test a Theory of Everything?

How can we test a Theory of Everything? That's a question I get a lot in my public lectures. In the past decade, physicists have put forward some speculations that cannot be experimentally ruled out, ever, because you can always move predictions to energies higher than what we have tested so far. Supersymmetry is an example of a theory that is untestable in this particular way. After I explain this, I am frequently asked if it is possible to test a theory of everything, or whether such theories are just entirely unscientific. It's a good question. But before we get to the answer, I have to tell you exactly what physicists mean by "theory of everything", so we're on the same page.
For all we currently know the world is held together by four fundamental forces. That's the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces, like for example the Van-der-Waals forces that hold together molecules, or muscle forces, derive from those four fundamental forces.

The electromagnetic force and the strong and the weak nuclear force are combined in the standard model of particle physics. These forces have in common that they have quantum properties. But the gravitational force stands apart from the three other forces because it does not have quantum properties. That's a problem, as I have explained in an earlier video. A theory that solves the problem of the missing quantum behavior of gravity is called "quantum gravity". That's not the same as a theory of everything.

If you combine the three forces of the standard model into only one force from which you can derive the standard model, that is called a "Grand Unified Theory" or GUT for short. That's not a theory of everything either. What physicists mean by a theory of everything is a theory that combines such a grand unification with quantum gravity.

The name is somewhat misleading. Such a theory of everything would of course not explain everything. That's because for most purposes it would be entirely impractical to use it. It would be impractical for the same reason it's impractical to use the standard model to explain chemical reactions, not to mention human behavior. The description of large objects in terms of their fundamental constituents does not actually give us much insight into what the large objects do. A theory of everything, therefore, may explain everything in principle, but still not do so in practice.

The other problem with the name "theory of everything" is that we will never know that we will not, at some point in the future, discover something that the theory does not explain. Maybe there is indeed a fifth fundamental force? Who knows. So, what physicists call a theory of everything should really be called "a theory of everything we know so far, at least in principle."

The best known example of a theory of everything is string theory. There are a few other approaches. Alain Connes, for example, has an approach based on non-commutative geometry. Asymptotically safe gravity may include a grand unification and therefore counts as a theory of everything. Though, for reasons I don't quite understand, physicists do not normally discuss asymptotically safe gravity as a candidate for a theory of everything. If you know why, please leave a comment. These are the large programs. Then there are a few small programs, like Garrett Lisi's E8 theory, or Xiao-Gang Wen's idea that the world is really made of qubits, or Felix Finster's causal fermion systems.

So, are these theories testable? Yes, they are testable. The reason is that any theory which solves the problem with quantum gravity must make predictions that deviate from general relativity. And those predictions, this is really important, cannot be arbitrarily moved to higher and higher energies. We know that because combining general relativity with the standard model, without quantizing gravity, just stops working near an energy known as the Planck energy.

These approaches to a theory of everything normally also make other predictions. For example they often come with a story about what happened in the early universe, which can have consequences that are still observable today. In some cases they result in subtle symmetry violations that can be measurable in particle physics experiments. The details about this differ from one theory to the next.
Saturday, November 02, 2019

Have we really measured gravitational waves?

A few days ago I met a friend on the subway. He tells me he's been at a conference and someone asked if he knows me. He says yes, and immediately people start complaining about me. One guy, apparently, told him to slap me. What were they complaining about, you want to know? Well, one complaint came from a particle physicist, who was clearly dismayed that I think building a bigger particle collider is not a good way to invest $40 billion. But it was true when I said it the first time and it is still true: There are better things we can do with this amount of money. (Such as, for example, making better climate predictions, which can be done for as "little" as 1 billion dollars.)

Back to my friend on the subway. He told me that besides the grumpy particle physicist there were also several gravitational wave people who have issues with what I have written about the supposed gravitational wave detections by the LIGO collaboration. Most of the time if people have issues with what I'm saying it's because they do not understand what I'm saying to begin with. So with this video, I hope to clear the situation up.

Let me start with the most important point. I do not doubt that the gravitational wave detections are real. But. I spend a lot of time on science communication, and I know that many of you doubt that these detections are real. And, to be honest, I cannot blame you for this doubt. So here's my issue. I think that the gravitational wave community is doing a crappy job justifying the expenses for their research. They give science a bad reputation. And I do not approve of this.

Before I go on, a quick reminder what gravitational waves are. Gravitational waves are periodic deformations of space and time. These deformations can happen because Einstein's theory of general relativity tells us that space and time are not rigid, but react to the presence of matter. If you have some distribution of matter that curves space a lot, such as a pair of black holes orbiting one another, these will cause space-time to wobble, and the wobbles carry energy away. That's what gravitational waves are.

We have had indirect evidence for gravitational waves since the 1970s, because you can measure how much energy a system loses through gravitational waves without directly measuring the gravitational waves. Hulse and Taylor did this by closely monitoring the orbital frequency of a pulsar binary. If the system loses energy, the two stars get closer and they orbit faster around each other. The predictions for the emission of gravitational waves fit the observations exactly. Hulse and Taylor got a Nobel Prize for that in 1993.

For the direct detection of gravitational waves you have to measure the deformation of space and time that they cause. You can do this by using very sensitive interferometers. An interferometer bounces laser light back and forth in two orthogonal directions and then combines the light. Light is a wave, and depending on whether the crests of the waves from the two directions lie on top of each other or not, the resulting signal is strong – that's constructive interference – or washed out – that's destructive interference. Just what happens depends very sensitively on the distance that the light travels. So you can use changes in the strength of the interference pattern to figure out whether one of the directions of the interferometer was temporarily shorter or longer.
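To get a feel for the numbers, here is a minimal sketch of an idealized Michelson-type interferometer readout (my own toy illustration; the variable names and the function are mine, and the real LIGO instruments use Fabry-Perot arm cavities, power recycling and much more):

import numpy as np

# Toy model of a Michelson interferometer readout. A beam is split, travels
# down two orthogonal arms, and is recombined. With an arm-length difference
# dL, the round trip gives a relative phase of 2*pi*(2*dL)/lam, and the
# normalized output intensity is cos^2 of half that phase.

lam = 1064e-9  # wavelength in meters of the Nd:YAG lasers LIGO uses

def output_intensity(dL):
    phase = 2 * np.pi * (2 * dL) / lam
    return np.cos(phase / 2) ** 2

for dL in [0.0, lam / 8, lam / 4, lam / 2, 1e-18]:
    print(f"arm-length difference {dL:.3e} m -> intensity {output_intensity(dL):.12f}")

# The last case uses a displacement of 1e-18 m, roughly the order of magnitude
# LIGO must resolve; the resulting intensity change is minuscule, which is why
# the real detectors need kilometer-scale arms and heroic noise suppression.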
A question that I frequently get is: how can this interferometer detect anything if both the light and the interferometer itself deform with space-time? Wouldn’t the effect cancel out? No, it does not cancel out, because the interferometer is not made of light. It’s made of massive particles and therefore reacts differently to a periodic deformation of space-time than light does. That’s why one can use light to find out that something happened for real. For more details, please check these papers. The first direct detection of gravitational waves was made by the LIGO collaboration in September 2015. LIGO consists of two separate interferometers. They are both located in the United States, a few thousand kilometers apart. Gravitational waves travel at the speed of light, so if one comes through, it should trigger both detectors with a small delay that comes from the time it takes the wave to travel from one detector to the other. Looking for a signal that appears almost simultaneously in the two detectors helps to identify the signal in the noise. This first signal measured by LIGO looks like a textbook example of a gravitational wave signal from a merger of two black holes. It’s a periodic signal that increases in frequency and amplitude as the two black holes get closer to each other and their orbital period gets shorter. When the horizons of the two black holes merge, the signal is suddenly cut off. After this follows a brief period in which the newly formed larger black hole settles into a new state, called the ringdown. A Nobel Prize was awarded for this measurement in 2017. If you plot the frequency distribution over time, you get this banana. Here it’s the upward bend that tells you that the frequency increases before dying off entirely. Now, what’s the problem? The first problem is that no one seems to actually know where the curve in the famous LIGO plot came from. You would think it was obtained by a calculation, but members of the collaboration are on record saying it was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” Both the collaboration and the journal in which the paper was published have refused to comment. This, people, is highly inappropriate. We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data. The other problem is that so far we do not have a confirmation that the signals which LIGO detects are in fact of astrophysical origin, and not misidentified signals that originated on Earth. The way that you could show this is with a LIGO detection that matches electromagnetic signals, such as gamma ray bursts, measured by telescopes. The collaboration has had, so far, one opportunity for this, which was an event in August 2017. The problem with this event is that the announcement from the collaboration about their detection came after the announcement of the incoming gamma ray. Therefore, the LIGO detection does not count as a confirmed prediction, because it was not a prediction in the first place – it was a postdiction. It seems to offend people in the collaboration tremendously if I say this, so let me be clear. I have no reason to think that something fishy went on, and I know why the original detection did not result in an automatic alert. But this isn’t the point. The point is that no one knows what happened before the official announcement besides members of the collaboration. We are waiting for an independent confirmation. This one missed the mark. 
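For illustration, here is a crude toy waveform of my own (made-up numbers, not a solution of Einstein’s equations) that mimics the qualitative features just described: rising frequency and amplitude, a cutoff, then a damped ringdown. The exponents −3/8 and −1/4 are the known leading-order scalings for a compact binary inspiral; everything else is arbitrary.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4000)   # time in seconds
t_merge = 0.8                      # merger time (arbitrary toy value)

# Inspiral: frequency and amplitude both rise as the black holes spiral in.
insp = t < t_merge
tau = t_merge - t[insp]                       # time until merger
f = 35.0 * (tau + 0.02) ** -0.375             # toy chirp, f grows toward merger
amp = 0.3 * (tau + 0.02) ** -0.25             # amplitude grows too
phase = 2 * np.pi * np.cumsum(f) * (t[1] - t[0])
h = np.zeros_like(t)
h[insp] = amp * np.sin(phase)

# Ringdown: the merged black hole settles down as a damped oscillation.
ring = ~insp
h[ring] = h[insp][-1] * np.exp(-(t[ring] - t_merge) / 0.01) \
          * np.cos(2 * np.pi * 250.0 * (t[ring] - t_merge))

# h now looks qualitatively like the published strain curve:
# rising frequency and amplitude, a cutoff, then a brief ringdown.
```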
Since 2017, the two LIGO detectors have been joined by a third detector called Virgo, located in Italy. In their third run, which started in April this year, the LIGO/Virgo collaboration has issued alerts for 41 events. From these 41 alerts, 8 were later retracted. Of the remaining gravitational wave events, 10 look like they are either neutron star mergers, or mergers of a neutron star with a black hole. In these cases, there should also be electromagnetic radiation emitted which telescopes can see. For black hole mergers, one does not expect this to be the case. However, no telescope has so far seen a signal that fits to any of the gravitational wave events. This may simply mean that the signals have been too weak for the telescopes to see them. But whatever the reason, the consequence is that we still do not know that what LIGO and Virgo see are actually signals from outer space. You may ask, isn’t it enough that they have a signal in their detector that looks like it could be caused by gravitational waves? Well, if this was the only thing that could trigger the detectors, yes. But that is not the case. The LIGO detectors have about 10-100 “glitches” per day. The glitches are bright and shiny signals that do not look like gravitational wave events. The cause of some of these glitches is known. The cause of others is not. LIGO uses a citizen science project to classify these glitches and has given them funky names like “Koi Fish” or “Blip.” What this means is that they do not really know what their detector detects. They just throw away data that don’t look the way they want them to look. This is not a good scientific procedure. Here is why. Think of an animal. Let me guess, it’s... an elephant. Right? Right for you, right for you, not right for you? Hmm, that’s a glitch in the data, so you don’t count. Does this prove that I am psychic? No, of course it doesn’t. Because selectively throwing away data that’s inconvenient is a bad idea. Goes for me, goes for LIGO too. At least that’s what you would think. If we had an independent confirmation that the good-looking signals are really of astrophysical origin, this wouldn’t matter. But we don’t have that either. So that’s the situation in summary. The signals that LIGO and Virgo see are well explained by gravitational wave events. But we cannot be sure that these are actually signals coming from outer space and not some unknown terrestrial effect. Let me finish by saying once again that personally I do not actually doubt these signals are caused by gravitational waves. But in science, it’s evidence that counts, not opinion. Wednesday, October 30, 2019 The crisis in physics is not only about physics In the foundations of physics, we have not seen progress since the mid 1970s, when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs boson, predicted in the 1960s, measured in 2012. But all shortcomings of these theories – the lacking quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then. The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. 
Technological advances have not kept the size and expense of experiments manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars. With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiments. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end. The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing. The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know. But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it. Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field. This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”. And so, what we have here in the foundations of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality. Physicists need new methods. Better methods. Methods that are appropriate to the present century. 
And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell, at this moment in history I am the only physicist who has at least come up with an idea for what to do. Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it. You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax money for decades, alright, but in the large scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive. But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines, if experiments become increasingly expensive and recruit large fractions of the scientific community. Indeed, we see this beginning to happen in medicine and in ecology, too. Small-scale drug trials have pretty much run their course. These are good only to find in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to find increasingly personalized diagnoses and prescriptions. The studies which are necessary for this are extremely costly. They must be chosen carefully, for not many of them can be made. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits. How physicists handle their crisis will give an example to other disciplines. So watch this space. Tuesday, October 22, 2019 What is the quantum measurement problem? Today, I want to explain just what the problem is with making measurements according to quantum theory. Quantum mechanics tells us that matter is not made of particles. It is made of elementary constituents that are often called particles, but are really described by wave-functions. A wave-function is a mathematical object which is neither a particle nor a wave, but it can have properties of both. The curious thing about the wave-function is that it does not itself correspond to something which we can observe. Instead, it is only a tool with the help of which we calculate what we do observe. To make such a calculation, quantum theory uses the following postulates. First, as long as you do not measure the wave-function, it changes according to the Schrödinger equation. The Schrödinger equation is different for different particles. But its most important properties are independent of the particle. One of the important properties of the Schrödinger equation is that it guarantees that the probabilities computed from the wave-function will always add up to one, as they should. 
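As a small illustration of that last property (a sketch of mine, with a made-up two-level Hamiltonian): the time evolution generated by the Schrödinger equation is unitary, so the total probability stays exactly one at all times.

```python
import numpy as np
from scipy.linalg import expm

# A made-up Hamiltonian for a two-level system (units with hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the first state

for t in [0.0, 0.5, 1.0, 5.0]:
    U = expm(-1j * H * t)                # solution of the Schroedinger equation
    psi = U @ psi0
    print(t, np.sum(np.abs(psi) ** 2))   # prints 1.0 every time
```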
Another important property is that the change in time which one gets from the Schrödinger equation is reversible. But for our purposes the most important property of the Schrödinger equation is that it is linear. This means that if you have two solutions to this equation, then any sum of the two solutions, with arbitrary pre-factors, will also be a solution. The second postulate of quantum mechanics tells you how you calculate from the wave-function the probability of getting a specific measurement outcome. This is called the “Born rule,” named after Max Born, who came up with it. The Born rule says that the probability of a measurement outcome is the absolute square of that part of the wave-function which describes this outcome. To do this calculation, you also need to know how to describe what you are observing – say, the momentum of a particle. For this, you need further postulates, but these do not need to concern us today. And third, there is the measurement postulate, sometimes called the “update” or “collapse” of the wave-function. This postulate says that after you have made a measurement, the probability of what you have measured suddenly changes to 1. This, I have to emphasize, is a necessary requirement to describe what we observe. I cannot stress this enough, because a lot of physicists seem to find it hard to comprehend: If you do not update the wave-function after measurement, then the wave-function does not describe what we observe. We do not, ever, observe a particle that is 50% measured. The measurement problem is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is in a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition. We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation. The measurement problem, I have to emphasize, is not solved by decoherence, even though many physicists seem to believe this to be so. Decoherence is a process that happens if a quantum superposition interacts with its environment. The environment may simply be air or, even in vacuum, the radiation of the cosmic microwave background. There is always some environment. This interaction with the environment eventually destroys the ability of quantum states to display typical quantum behavior, like the ability of particles to create interference patterns. The larger the object, the more quickly its quantum behavior gets destroyed. Decoherence tells you that if you average over the states of the environment, because you do not know exactly what they do, then you no longer have a quantum superposition. Instead, you have a distribution of probabilities. This is what physicists call a “mixed state”. This does not solve the measurement problem, because after measurement you still have to update the probability of what you have observed to 100%. Decoherence does not tell you to do that. Why is the measurement postulate problematic? 
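(Before answering that, here is a tiny numerical sketch of mine, with made-up amplitudes, of the decoherence point above: averaging away the off-diagonal terms of the density matrix leaves a mixed state with two outcomes, and nothing in that step picks one of them.)

```python
import numpy as np

# Superposition a|0> + b|1>, say 60/40 (made-up amplitudes)
a, b = np.sqrt(0.6), np.sqrt(0.4)
psi = np.array([a, b], dtype=complex)

rho = np.outer(psi, psi.conj())     # pure-state density matrix
print(rho)              # the off-diagonal entries encode the quantum coherence

# Decoherence: interaction with the environment suppresses the
# off-diagonal terms, leaving a mixed state (classical probabilities).
rho_decohered = np.diag(np.diag(rho))
print(rho_decohered)    # diag(0.6, 0.4): still TWO possible outcomes

# The measurement update is a separate step: after observing outcome 0,
# the probability of 0 jumps to 1. Decoherence alone never does this.
rho_after_measurement = np.diag([1.0, 0.0])
```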
The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue: The measurement postulate is incompatible with reductionism. It makes it necessary that the formulation of quantum mechanics explicitly refers to macroscopic objects like detectors, when really what these large things are doing should follow from the theory. A lot of people seem to think that you can solve this problem by re-interpreting the wave-function as merely encoding the knowledge that an observer has about the state of the system. This is what is called a Copenhagen or “neo-Copenhagen” interpretation. (And let me warn you that this is not the same as a Psi-epistemic interpretation, in case you have heard that word.) Now, if you believe that the wave-function merely describes the knowledge an observer has, then you may say, of course it needs to be updated if the observer makes a measurement. Yes, that’s very reasonable. But of course this also refers to macroscopic concepts like observers and their knowledge. And if you want to use such concepts in the postulates of your theory, you are implicitly assuming that the behavior of observers or detectors does not follow from the behavior of the particles that make up the observers or detectors. This requires that you explain when and how this distinction is to be made, and none of the existing neo-Copenhagen approaches explain this. I already told you in an earlier blogpost why the many worlds interpretation does not solve the measurement problem. To briefly summarize it: it’s because in the many worlds interpretation one also has to use a postulate about what a detector does. What does it take to actually solve the measurement problem? I will get to this, so stay tuned. Wednesday, October 16, 2019 Dark matter nightmare: What if we are just using the wrong equations? Dark matter filaments. Computer simulation. [Image: John Dubinski (U of Toronto)] Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy. But when we point our telescopes at larger distances, something is amiss. Galaxies rotate faster than expected. Galaxies in clusters move faster than they should. The expansion of the universe is speeding up. General relativity does not tell us what is going on. Physicists have attributed these puzzling observations to two newly postulated substances: dark matter and dark energy. These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation. This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was on the radar already in the 1990s. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy, or are the two unrelated? To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know. 
Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with. The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response. But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will not also be a solution. Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity. Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that this is the case. So, we know that, strictly speaking, the equations we use are wrong. The big question is, then, just how wrong are they? Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases. But if you look at the literature somewhat closer, then this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today. That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity, because they do not respect the symmetries of Einstein’s theory. It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct. Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy are, I think it would be a good idea to definitively settle the question whether they are anything at all. At the very least, I would sleep better.
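The core mathematical point, that averaging does not commute with a non-linear operation, fits in a few lines (a trivial sketch of mine; the actual relativistic calculation is of course vastly harder):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

f = lambda x: x ** 2          # any non-linear function will do

print(np.mean(f(x)))          # average of the function: 7.5
print(f(np.mean(x)))          # function of the average: 6.25
# For a linear f the two would agree; for a non-linear f they generally
# don't. This is why averaging Einstein's non-linear equations is not the
# same as plugging averaged matter into the un-averaged equations.
```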
Further reading: “Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology,” by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh, Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO]. Monday, October 07, 2019 What does the future hold for particle physics? In my new video, I talk about the reason why the Large Hadron Collider, LHC for short, has not found fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you find a transcript with references. Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact to parallel universes, or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air. To illustrate the situation before the LHC began taking data, let me quote a few articles from back then. Here is Valerie Jamieson writing for New Scientist in 2008: “The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.” Or here is Ian Sample for the Guardian, also in 2008: “Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.” Paul Langacker in 2010, writing for the APS: “Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.” The Telegraph in 2010: “[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.” A final one. Here is Steve Giddings writing in 2010: “LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...” The Large Hadron Collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things? This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult question is: why did so many particle physicists think those were reasonable expectations, and why has not a single one of them told us what they have learned from their failed predictions? To see what happened here, it is useful to look at the difference between the prediction of the Higgs boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense. We therefore knew, before the LHC turned on, that something new had to happen. 
It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right. All the other proposed ideas – extra dimensions, supersymmetry, time travel, and so on – are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking. The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”. This means the standard model contains one number that is small, but there is no explanation for why it is small. This number is the mass of the Higgs boson divided by the Planck mass, which happens to be about 10⁻¹⁷. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly, and particle physicists didn’t want to believe nature could be that ugly. Well, now they know that nature doesn’t care what physicists want it to be like. What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider. But particle physics is not only collider physics. And so it seems likely to me that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe. The other shift that we are likely to see is a move away from high energy particle physics and towards high precision measurements at lower energies, or towards table-top experiments probing the quantum behavior of many-particle systems, where we still have much to learn. Wednesday, October 02, 2019 Has Reductionism Run its Course? For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful. After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, are made of even smaller particles. And those smaller particles are yet again made of even smaller particles. [Image: © Sabine Hossenfelder] The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. 
Are these particles made up of yet another set of smaller particles, strings, or other things? It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now. It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that. 20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders. This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider. But this simple story is too simple. The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another one if the former can be derived from the latter. Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on. But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic forces. From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction. Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speeds and in strong gravitational fields that Newtonian gravity cannot cope with. Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10¹³ atoms. 
It’s just that quantum effects on large scales are difficult to create and observe. Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list. In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the very contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask. If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations. It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet maybe dark matter is really a phenomenon associated with our misunderstanding of space-time on long distances. Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place. Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances. But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things. Sunday, September 29, 2019 Travel Update The coming days I am in Brussels, for a workshop that I’m not sure where it is or what it is about. It also doesn’t seem to have a website. In any case, I’ll be away, just don’t ask me exactly where or why. On Oct 15, I am giving a public lecture at the University of Minnesota. On Oct 17, I am giving a colloquium in Cleveland. On Oct 25, I am giving a public lecture in Göttingen (in German). On Oct 29, I’m in Genoa, giving a talk at the “Festival della Scienza” to accompany the publication of the Italian translation of my book “Lost in Math.” I don’t speak Italian, so this talk will be in English. On Nov 5th, I’m speaking in Berlin about dark matter. On Nov 6th, I am supposed to give a lecture at the Einstein Forum in Potsdam, though that doesn’t seem to be on their website. These two talks in Berlin and Potsdam will also be in German. On Nov 12th, I’m giving a seminar in Oxford, in case Britain still exists at that point. On Dec 9th, I’m speaking in Wuppertal, details to come, and that will hopefully be the last trip this year. 
Next time I’m in the USA will probably be late March 2020. In case you are interested in having me stop by at your place, please get in touch. I am always happy to meet readers of my blog, so in case our paths cross, do not hesitate to say hi. Friday, September 27, 2019 The Trouble with Many Worlds Wednesday, September 18, 2019 Windows Black Screen Nightmare Folks, I have a warning to utter that is somewhat outside my usual preaching. For the past couple of days, one of my laptops has tried to install Windows updates but didn’t succeed. In the morning I would find an error message that said something went wrong. I ignored this because, really, I couldn’t care less what problems Microsoft causes itself. But this morning, Windows wouldn’t properly start. All I got was a black screen with a mouse cursor. This is the computer I use for my audio and video processing. Now, I’ve been a Windows user for 20+ years and I don’t get easily discouraged by spontaneously appearing malfunctions. After some back and forth, I managed to open a command prompt from the task manager to launch the Windows explorer by hand. But this just produced an error message about some obscure .dll file being corrupted. Ok then, I thought, I’ll run sfc /scannow. But this didn’t work; the command just wouldn’t run. At this point I began to feel really bad about this. I then rebooted the computer a few times with different login options, but got the exact same problem with an administrator login and in the so-called safe mode. The system restore produced an error message, too. Finally, I tried the last thing that came to my mind, a factory reset. Just to have Windows inform me that the command couldn’t be executed. With that, I had run out of Windows-wisdom and called a helpline. Even the guy on the helpline was impressed by this system’s fuckedupness (if that isn’t a word, it should be) and, after trying a few other things that didn’t work, recommended I wipe the disk clean and reinstall Windows. So that’s basically how I spent my day today. Which, btw, happens to be my birthday. The system is running fine now, though I will have to reinstall all my software. Luckily enough, my hard-disk partition seems to have saved all my video and audio files. It doesn’t seem to have been a hardware problem. It also doesn’t smell like a virus. The two IT guys I spoke with said that most likely something went badly wrong with one of those Windows updates. In fact, if you ask Google for “Windows black screen” you’ll find that similar things have happened before after Windows updates. Though, it seems, not quite as severe as this case. The reason I am telling you this isn’t just to vent (though there’s that), but to ask that, in case you encounter the same problem, you let us know. Especially if you find a solution that doesn’t require reinstalling Windows from scratch. Update: Managed to finish what I meant to do before my computer became dysfunctional. Monday, September 16, 2019 Why do some scientists believe that our universe is a hologram? Today, I want to tell you why some scientists believe that our universe is really a 3-dimensional projection of a 2-dimensional space. They call it the “holographic principle” and the key idea is this. Usually, the number of different things you can imagine happening inside a part of space increases with the volume. Think of a bag of particles. The larger the bag, the more particles, and the more details you need to describe what the particles do. 
These details that you need to describe what happens are what physicists call the “degrees of freedom,” and the number of these degrees of freedom is proportional to the number of particles, which is proportional to the volume. At least that’s how it normally works. The holographic principle, in contrast, says that you can describe what happens inside the bag by encoding it on the surface of that bag, at the same resolution. This may not sound all that remarkable, but it is. Here is why. Take a cube that’s made of smaller cubes, each of which is either black or white. You can think of each small cube as a single bit of information. How much information is in the large cube? Well, that’s the number of the smaller cubes, so 3³ = 27 in this example. Or, if you divide every side of the large cube into N pieces instead of three, that’s N³. But if you instead count the surface elements of the cube, at the same resolution, you have only 6N². This means that for large N, there are many more volume bits than surface bits at the same resolution: for N = 1000, say, there are a billion volume bits but only six million surface bits. The holographic principle now says that even though there are so many fewer surface bits, the surface bits are sufficient to describe everything that happens in the volume. This does not mean that the surface bits correspond to certain regions of the volume; it’s somewhat more complicated. It means instead that the surface bits describe certain correlations between the pieces of the volume. So if you think again of the particles in the bag, these will not move entirely independently. And that’s what is called the holographic principle: that really you can encode the events inside any volume on the surface of the volume, at the same resolution. But, you may say, how come we never notice that particles in a bag are somehow constrained in their freedom? Good question. The reason is that the stuff that we deal with in every-day life, say, that bag of particles, doesn’t remotely make use of the theoretically available degrees of freedom. Our present observations only test situations well below the limit that the holographic principle says should exist. The limit from the holographic principle really only matters if the degrees of freedom are strongly compressed, as is the case, for example, for stuff that collapses to a black hole. Indeed, the physics of black holes is one of the most important clues that physicists have for the holographic principle. That’s because we know that black holes have an entropy that is proportional to the area of the black hole horizon, not to its volume. That’s the important part: black hole entropy is proportional to the area, not to the volume. Now, in thermodynamics entropy counts the number of different microscopic configurations that have the same macroscopic appearance. So, the entropy basically counts how much information you could stuff into a macroscopic thing if you kept track of the microscopic details. Therefore, the area-scaling of the black hole entropy tells you that the information content of black holes is bounded by a quantity which is proportional to the horizon area. This relation is the origin of the holographic principle. The other important clue for the holographic principle comes from string theory. That’s because string theorists like to apply their mathematical methods in a space-time with a negative cosmological constant, which is called an Anti-de Sitter space. 
Most of them believe, though it has strictly speaking never been proved, that gravity in an Anti-de Sitter space can be described by a different theory that is entirely located on the boundary of that space. And while this idea came from string theory, one does not actually need the strings for this relation between the volume and the surface to work. More concretely, it uses a limit in which the effects of the strings no longer appear. So the holographic principle seems to be more general than string theory. I have to add, though, that we do not live in an Anti-de Sitter space because, for all we currently know, the cosmological constant in our universe is positive. Therefore it’s unclear how much the volume-surface relation in Anti-de Sitter space tells us about the real world. And as far as the black hole entropy is concerned, the mathematics we currently have does not actually tell us that it counts the information that one can stuff into a black hole. It may instead only count the information that one loses by disconnecting the inside and outside of the black hole. This is called the “entanglement entropy”. It scales with the surface for many systems other than black holes, and there is nothing particularly holographic about it. Whether or not you buy the motivations for the holographic principle, you may want to know whether we can test it. The answer is definitely maybe. Earlier this year, Erik Verlinde and Kathryn Zurek proposed that we try to test the holographic principle using gravitational wave interferometers. The idea is that if the universe is holographic, then the fluctuations in the two orthogonal directions that the interferometer arms extend into would be more strongly correlated than one normally expects. However, not everyone agrees that the particular realization of holography which Verlinde and Zurek use is the correct one. Personally, I think that the motivations for the holographic principle are not particularly strong, and in any case we’ll not be able to test this hypothesis in the coming centuries. Therefore, writing papers about it is a waste of time. But it’s an interesting idea, and at least you now know what physicists are talking about when they say the universe is a hologram. Tuesday, September 10, 2019 Book Review: “Something Deeply Hidden” by Sean Carroll Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime Sean Carroll Dutton, September 10, 2019 Of all the weird ideas that quantum mechanics has to offer, the existence of parallel universes is the weirdest. But with his new book, Sean Carroll wants to convince you that it isn’t weird at all. Instead, he argues, if we only take quantum mechanics seriously enough, then “many worlds” are the logical consequence. Most remarkably, the many worlds interpretation implies that at every instance you split into many separate you’s, all of which go on to live their own lives. It takes something to convince yourself that this is reality, but if you want to be convinced, Carroll’s book is a good starting point. “Something Deeply Hidden” is an enjoyable and easy-to-follow introduction to quantum mechanics that will answer your most pressing questions about many worlds, such as how worlds split, what happens with energy conservation, or whether you should worry about the moral standards of all your copies. The book is also notable for what it does not contain. Carroll avoids going through all the different interpretations of quantum mechanics in detail, and only provides short summaries. 
Instead, the second half of the book is dedicated to his own recent work, which is about constructing space from quantum entanglement. I do find this a promising line of research, and he presents it well. I was somewhat perplexed that Carroll does not mention what I think are the two biggest objections to the many worlds interpretation, but I will write about this in a separate post. Like Carroll’s previous books, this one is engaging, well-written, and clearly argued. I can unhesitatingly recommend it to anyone who is interested in the foundations of physics. [Disclaimer: Free review copy] Sunday, September 08, 2019 Away Note Friday, September 06, 2019 The five most promising ways to quantize gravity Today, I want to tell you what ideas physicists have come up with to quantize gravity. But before I get to that, I want to tell you why it matters. That we do not have a theory of quantum gravity is currently one of the biggest unsolved problems in the foundations of physics. A lot of people, including many of my colleagues, seem to think that a theory of quantum gravity will remain an academic curiosity without practical relevance. I think they are wrong. That’s because whatever solves this problem will tell us something about quantum theory, and that’s the theory on which all modern electronic devices run, like the ones on which you are watching this video. Maybe it will take 100 years for quantum gravity to find a practical application, or maybe it will even take a thousand years. But I am sure that understanding nature better will not forever remain a merely academic speculation. Before I go on, I want to be clear that quantizing gravity by itself is not the problem. We can, and have, quantized gravity the same way that we quantize the other interactions. The problem is that the theory which one gets this way breaks down at high energies, and therefore it cannot be how nature works, fundamentally. This naïve quantization is called “perturbatively quantized gravity” and it was worked out in the 1960s by Feynman and DeWitt and some others. Perturbatively quantized gravity is today widely believed to be an approximation to whatever is the correct theory. So really the problem is not just to quantize gravity per se; you want to quantize it and get a theory that does not break down at high energies. Because energies are proportional to frequencies, physicists like to refer to high energies as “the ultraviolet” or just “the UV”. Therefore, the theory of quantum gravity that we look for is said to be “UV complete”. Now, let me go through the five most popular approaches to quantum gravity. 1. String Theory The most widely known and still the most popular attempt to get a UV-complete theory of quantum gravity is string theory. The idea of string theory is that instead of talking about particles and quantizing them, you take strings and quantize those. Amazingly enough, this automatically has the consequence that the strings exchange a force which has the same properties as the gravitational force. This was discovered in the 1970s and, at the time, it got physicists very excited. However, in the past decades several problems have appeared in string theory that were patched, which has made the theory increasingly contrived. You can hear all about this in my earlier video. It has never been proved that string theory is indeed UV-complete. 2. Loop Quantum Gravity Loop Quantum Gravity is often named as the biggest competitor of string theory, but this comparison is somewhat misleading. 
String theory is not just a theory of quantum gravity; it is also supposed to unify the other interactions. Loop Quantum Gravity, on the other hand, is only about quantizing gravity. It works by discretizing space in terms of a network, and then using integrals around small loops to describe the space, hence the name. In this network, the nodes represent volumes, and the links between nodes the areas of the surfaces where the volumes meet. Loop Quantum Gravity is about as old as string theory. It solves the problem of combining general relativity and quantum mechanics into one consistent theory, but it has remained unclear just exactly how one recovers general relativity in this approach. 3. Asymptotically Safe Gravity Asymptotic Safety is an idea that goes back to a 1976 paper by Steven Weinberg. It says that a theory which seems to have problems at high energies when quantized naively may not have a problem after all; it’s just that it’s more complicated to find out what happens at high energies than it seems. Asymptotically Safe Gravity applies the idea of asymptotic safety to gravity in particular. This approach also solves the problem of quantum gravity. Its major problem is currently that it has not been proved that the theory which one gets this way at high energies still makes sense as a quantum theory. 4. Causal Dynamical Triangulation The problem with quantizing gravity comes from infinities that appear when particles interact at very short distances. This is why most approaches to quantum gravity rely on removing the short distances by using objects of finite extension. Loop Quantum Gravity works this way, and so does String Theory. Causal Dynamical Triangulation also relies on removing short distances. It does so by approximating a curved space with triangles, or their higher-dimensional counterparts, respectively. In contrast to the other approaches, though, where the finite extension is a postulated, new property of the underlying true nature of space, in Causal Dynamical Triangulation the finite size of the triangles is a mathematical aid, and one eventually takes the limit where this size goes to zero. The major reason why many people have remained unconvinced of Causal Dynamical Triangulation is that it treats space and time differently, which Einstein taught us not to do. 5. Emergent Gravity Emergent gravity is not one specific theory, but a class of approaches. These approaches have in common that gravity derives from the collective behavior of a large number of constituents, much like the laws of thermodynamics do. And much like for thermodynamics, in emergent gravity one does not actually need to know all that much about the exact properties of these constituents to get the dynamical law. If you think that gravity is really emergent, then quantizing gravity does not make sense. Because, if you think of the analogy to thermodynamics, you also do not obtain a theory for the structure of atoms by quantizing the equations for gases. Therefore, in emergent gravity one does not quantize gravity. One instead removes the inconsistency between gravity and quantum mechanics by saying that quantizing gravity is not the right thing to do. Which one of these theories is the right one? No one knows. The problem is that it’s really, really hard to find experimental evidence for quantum gravity. But that it’s hard doesn’t mean impossible. I will tell you some other time how we might be able to experimentally test quantum gravity after all. So, stay tuned. 
Wednesday, September 04, 2019 What’s up with LIGO? The Nobel-Prize winning figure. We don’t know exactly what it shows. [Image Credits: LIGO] Almost four years ago, on September 14, 2015, the LIGO collaboration detected gravitational waves for the first time. In 2017, this achievement was awarded the Nobel Prize. Also in that year, the two LIGO interferometers were joined by Virgo. Since then, a total of three detectors have been on the lookout for space-time’s subtle motions. By now, the LIGO/Virgo collaboration has reported dozens of gravitational wave events: black hole mergers (like the first), neutron star mergers, and black hole-neutron star mergers. But not everyone is convinced the signals are really what the collaboration claims they are. Already in 2017, a group of physicists around Andrew Jackson in Denmark reported difficulties when they tried to reproduce the signal reconstruction of the first event. In an interview dated November last year, Jackson maintained that the only signal they have been able to reproduce is the first. About the other supposed detections he said: “We can’t see any of those events when we do a blind analysis of the data. Coming from Denmark, I am tempted to say it’s a case of the emperor’s new gravitational waves.” For most physicists, the August 2017 neutron-star merger, GW170817 – the strongest signal LIGO has seen so far – erased any worries raised by the Danish group’s claims. That’s because this event came with an electromagnetic counterpart that was seen by multiple telescopes, which can demonstrate that LIGO indeed sees something of astrophysical origin and not terrestrial noise. But, as critics have pointed out correctly, the LIGO alert for this event came 40 minutes after NASA’s gamma-ray alert. For this reason, the event cannot be used as an independent confirmation of LIGO’s detection capacity. Furthermore, the interpretation of this signal as a neutron-star merger has also been criticized. And this criticism has been criticized for yet other reasons. It further fueled the critics’ fire when Michael Brooks reported last year for New Scientist that, according to two members of the collaboration, the Nobel-Prize winning figure of LIGO’s seminal detection was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” To this date, the journal that published the paper has refused to comment. The LIGO collaboration has remained silent on the matter, except for issuing a statement according to which they have “full confidence” in their published results (surprise), and that we are to await further details. Glaciers are now moving faster than this collaboration. In April this year, LIGO started the third observation run (O3) after an upgrade that increased the detection sensitivity by about 40% over the previous run. Many physicists hoped the new observations would bring clarity with more neutron-star events that have electromagnetic counterparts, but that hasn’t happened. Since April, the collaboration has issued 33 alerts for new events, but so far no electromagnetic counterparts have been seen. You can check the complete list for yourself here. 9 of the 33 events have meanwhile been downgraded because they were identified as likely of terrestrial origin, and have been retracted. The number of retractions is fairly high partly because the collaboration is still coming to grips with the upgraded detector. 
This is new scientific territory and the researchers themselves are still learning how to best analyze and interpret the data. A further difficulty is that the alerts must go out quickly in order for telescopes to be swung around and pointed at the right location in the sky. This does not leave much time for careful analysis.

With independent confirmation that LIGO sees events of astrophysical origin still lacking, critics are having a good time. In a recent article for the German online magazine Heise, Alexander Unzicker – author of a book called "The Higgs Fake" – contemplates whether the first event was a blind injection, i.e., a fake signal. The three people on the blind injection team at the time say it wasn't them, but Unzicker argues that, given our lack of knowledge about the collaboration's internal proceedings, there might well have been other people able to inject a signal. (You can find an English translation here.)

In the third observation run, the collaboration has so far seen one high-significance binary neutron star candidate (S190425z). But the associated electromagnetic signal for this event has not been found. This may be for various reasons. For example, the analysis of the signal revealed that the event must have been far away, about 4 times farther than the 2017 neutron-star event. Since the received flux falls with the square of the distance, this means that any electromagnetic signal would have been fainter by a factor of about 16. In addition, the location in the sky was rather uncertain. So, the electromagnetic signal was plausibly hard to detect.

More recently, on August 14th, the collaboration reported a neutron star-black hole merger. Again the electromagnetic counterpart is missing. In this case they were able to locate the origin to better precision. But they still estimate the source is about 7 times farther away than the 2017 neutron-star event, meaning it would have been fainter by a factor of about 50. Still, it is somewhat perplexing the signal wasn't seen by any of the telescopes that looked for it. There may have been physical reasons at the source, such as the neutron star being swallowed in one bite, in which case not much would be emitted, or the system being surrounded by dust that blocked the electromagnetic signal. A second neutron star-black hole merger on August 17 was retracted.

And then there are the "glitches". LIGO's "glitches" are detector events of unknown origin whose frequency spectrum does not look like the expected gravitational wave signals. I don't know exactly how many of those the detector suffers from, but the way they are numbered, by a date and two digits, indicates between 10 and 100 a day. LIGO uses a citizen-science project called "Gravity Spy" to identify glitches. There isn't one type of glitch, there are many different types of them, with names like "Koi fish," "whistle," or "blip." In the figures below you see a few examples.

Examples of LIGO's detector glitches. [Image Source]

This gives me some headaches, folks. If you do not know why your detector detects something that does not look like what you expect, how can you trust it in the cases where it does see what you expect?

Here is what Andrew Jackson had to say on the matter:

Jackson: "The thing you can conclude if you use a template analysis is [...] that the results are consistent with a black hole merger. But in order to make the stronger statement that it really and truly is a black hole merger you have to rule out anything else that it could be.

"And the characteristic signal here is actually pretty generic.
What do they find? They find something where the amplitude increases, where the frequency increases, and then everything dies down eventually. And that describes just about every catastrophic event you can imagine. You see, increasing amplitude, increasing frequency, and then it settles into some new state. So they really were obliged to rule out every terrestrial effect, including seismic effects, and the fact that there was an enormous lightning strike in Burkina Faso at exactly the same time [...]"

Interviewer: "Do you think that they failed to rule out all these other possibilities?"

Jackson: "Yes…"

If what Jackson said were correct, this would be highly problematic indeed. But I have not been able to think of any other event that looks remotely like a gravitational wave signal, even leaving aside the detector correlations. Unlike what Jackson states, a typical catastrophic event does not have a frequency increase followed by a ring-down and sudden near-silence. Think of an earthquake, for example. For the most part, earthquakes happen when stresses exceed a critical threshold. The signals don't have a frequency build-up, and after the quake, there's a lot of rumbling, often followed by smaller quakes. Just look at the figure below, which shows the surface movement of a typical seismic event.

Example of a typical earthquake signal. [Image Source]

It looks nothing like a gravitational wave signal. For this reason, I don't share Jackson's doubts about the origin of the signals that LIGO detects. However, the question whether there are any events of terrestrial origin with similar frequency characteristics arguably requires consideration beyond Sabine scratching her head for half an hour. So, even though I do not have the same concerns as were raised by the LIGO critics, I must say that I do find it peculiar indeed that there is so little discussion about this issue. A Nobel Prize was handed out, and yet we still do not have confirmation that LIGO's signals are not of terrestrial origin. In which other discipline is it considered good scientific practice to discard unwelcome yet not understood data, like LIGO does with the glitches? Why do we still not know just exactly what was shown in the figure of the first paper? Where are the electromagnetic counterparts?

LIGO's third observing run will continue until March 2020. It presently doesn't look like it will bring the awaited clarity. I certainly hope that the collaboration will make more of an effort to erase the doubts that still linger around their supposed detections.
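To make the morphology argument concrete, here is a minimal toy sketch in Python. This is in no way LIGO's analysis, and every number below is invented for illustration; it only contrasts the two kinds of time series discussed above: a chirp whose amplitude and frequency both rise and then ring down, versus a quake-like burst with a sudden onset and a slowly decaying rumble with no frequency build-up.

```python
import numpy as np

# Toy waveforms for illustration only -- not LIGO data and not LIGO's analysis.
t = np.linspace(0.0, 1.0, 4096)   # one second of "signal", arbitrary units
dt = t[1] - t[0]
t_m = 0.8                          # time of the toy "merger"

# Chirp: frequency and amplitude both rise, then a damped ringdown.
f_inst = 30.0 + 270.0 * (t / t_m) ** 3          # instantaneous frequency sweeps up
phase = 2.0 * np.pi * np.cumsum(f_inst) * dt    # phase = integral of frequency
envelope = np.where(t < t_m,
                    (t / t_m) ** 2,             # growing inspiral amplitude
                    np.exp(-(t - t_m) / 0.02))  # ringdown, then near-silence
chirp = envelope * np.sin(phase)

# Quake-like burst: sudden onset, roughly constant frequency content,
# slowly decaying rumble afterwards.
rng = np.random.default_rng(0)
quake = (t > 0.2) * np.exp(-(t - 0.2) / 0.5) * rng.normal(size=t.size)

print("chirp peaks near t =", t[np.argmax(np.abs(chirp))])   # close to t_m
print("quake rms over last half:", np.sqrt(np.mean(quake[t > 0.5] ** 2)))
```

Plotting the two arrays (or their spectrograms) makes the qualitative difference stressed in the text immediately visible.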
Nuclear theory

Nuclear-reaction theory

The physics of low-energy nuclear reactions is essential to explain the evolution of the Universe. Nuclear reactions characterize the different phases in a variety of astrophysical environments, from hydrogen burning in main-sequence stars to explosive nucleosynthesis during the last stages of stellar evolution. In this context, the properties of nuclear systems away from stability provide invaluable insight. Moreover, understanding the mechanisms of nuclear collisions has implications beyond nuclear astrophysics, with applications in energy, medical and material sciences.

At ECT* (Jesús Casal), research on nuclear-reaction theory is carried out to answer questions regarding the structure and reaction dynamics of weakly bound systems. For exotic nuclei at the limit of nuclear stability, coupled-channels methods are used to incorporate continuum effects. This includes the description of i) radiative capture reactions, ii) low-energy breakup, transfer and proton-target knockout reactions, and iii) nucleon-nucleon correlations in two-nucleon decays.

Nuclear many-body theory and infinite matter

The study of nuclear systems, from finite to infinite ones, requires the knowledge of the nuclear interaction and the choice of a many-body approach to solve the Schrödinger equation. On one side, the nuclear interaction can be constructed as an effective interaction which is fit to reproduce the properties of either finite nuclei along the nuclear chart or even infinite matter, and the Schrödinger equation can usually be solved within the mean-field approximation (Hartree-Fock). This approach goes under the name of energy-density functional theory. On the other side, one can construct a realistic interaction which is instead fit to reproduce the nucleon-nucleon scattering data or even properties of few-body systems. In this latter case one then needs to solve the many-body problem via more sophisticated approximations beyond the mean-field level, in order to build in the nuclear correlations. This defines the so-called ab initio nuclear theory. The main difference between the two approaches is that the former is strongly model dependent, while the latter strives to be predictive.

At ECT* we are currently investigating the properties of infinite nuclear matter employing the ab initio self-consistent Green's function approach (Arianna Carbone). This method is based on the use of the Green's function to calculate both microscopic and bulk properties of the nuclear system. The use of interactions derived from chiral effective field theory gives us the possibility to be as consistent as possible with the underlying quantum theory, QCD. We are investigating both zero and finite-temperature properties of nuclear matter, with the objective of providing model-independent nuclear physics input for the determination of the neutron-star equation of state, to be used in astrophysical simulations of core-collapse supernovae and binary neutron star mergers.
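As an illustration of the mean-field idea mentioned above, here is a deliberately crude sketch of the generic Hartree-type self-consistency loop. This is not ECT*'s actual machinery; the one-dimensional grid, the weak trap and the attractive contact coupling are all invented for the example.

```python
import numpy as np

# Toy 1D "mean-field" problem: two particles in a weak trap with an
# attractive contact coupling; all parameters are invented.
N = 400
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Kinetic energy by finite differences (hbar = m = 1).
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                      - 2.0 * np.eye(N)
                      + np.diag(np.ones(N - 1), -1))
v_ext = 0.1 * x**2        # weak external trap
g = -1.0                  # toy attractive coupling
n_part = 2                # number of occupied orbitals

rho = np.exp(-x**2)
rho /= rho.sum() * dx     # initial density guess

for it in range(200):
    H = T + np.diag(v_ext + g * rho)        # mean-field Hamiltonian h[rho]
    eps, phi = np.linalg.eigh(H)
    phi = phi / np.sqrt(dx)                 # normalize orbitals on the grid
    rho_new = sum(phi[:, k]**2 for k in range(n_part))
    if np.max(np.abs(rho_new - rho)) < 1e-10:
        break
    rho = 0.5 * rho + 0.5 * rho_new         # damped update for stability

print("converged after", it, "iterations")
print("lowest single-particle energies:", eps[:n_part])
```

The point of the sketch is the structure, not the numbers: the single-particle potential depends on the density, the density depends on the occupied orbitals, and one iterates until the two are consistent, which is exactly what "self-consistent" means in Hartree-Fock and Green's-function methods alike.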
Stochastic quantum mechanics

Stochastic quantum mechanics (or the stochastic interpretation) is an interpretation of quantum mechanics. The modern application of stochastics to quantum mechanics involves the assumption of spacetime stochasticity, the idea that the small-scale structure of spacetime is undergoing both metric and topological fluctuations (John Archibald Wheeler's "quantum foam"), and that the averaged result of these fluctuations recreates a more conventional-looking metric at larger scales that can be described using classical physics, along with an element of nonlocality that can be described using quantum mechanics. A stochastic interpretation of quantum mechanics is due to persistent vacuum fluctuations. The main idea is that vacuum or spacetime fluctuations are the reason for quantum mechanics and not a result of it, as is usually considered.

Stochastic mechanics

The first relatively coherent stochastic theory of quantum mechanics was put forward by Hungarian physicist Imre Fényes,[1] who was able to show that the Schrödinger equation could be understood as a kind of diffusion equation for a Markov process.[2][3] Louis de Broglie[4] felt compelled to incorporate a stochastic process underlying quantum mechanics to make particles switch from one pilot wave to another.[5] Perhaps the most widely known theory where quantum mechanics is assumed to describe an inherently stochastic process was put forward by Edward Nelson[6] and is called stochastic mechanics. This was also developed by Davidson, Guerra, Ruggiero and others.[7]

Stochastic electrodynamics

Stochastic quantum mechanics can be applied to the field of electrodynamics and is called stochastic electrodynamics (SED).[8] SED differs profoundly from quantum electrodynamics (QED) but is nevertheless able to account for some vacuum-electrodynamical effects within a fully classical framework.[9] In classical electrodynamics it is assumed there are no fields in the absence of any sources, while SED assumes that there is always a constantly fluctuating classical field due to zero-point energy. As long as the field satisfies the Maxwell equations there is no a priori inconsistency with this assumption.[10] Since Trevor W. Marshall[11] originally proposed the idea it has been of considerable interest to a small but active group of researchers.[12]

1. ^ See I. Fényes (1946, 1952)
2. ^ Davidson (1979), p. 1
3. ^ de la Peña & Cetto (1996), p. 36
4. ^ de Broglie (1967)
5. ^ de la Peña & Cetto (1996), p. 36
6. ^ See E. Nelson (1966, 1985, 1986)
7. ^ de la Peña & Cetto (1996), p. 36
8. ^ de la Peña & Cetto (1996), p. 65
9. ^ Milonni (1994), p. 128
10. ^ Milonni (1994), p. 290
11. ^ See T. W. Marshall (1963, 1965)
12. ^ Milonni (1994), p. 129

• de Broglie, L. (1967). "Le Mouvement Brownien d'une Particule Dans Son Onde". C. R. Acad. Sci. B264: 1041.
• Davidson, M. P. (1979). "The Origin of the Algebra of Quantum Operators in the Stochastic Formulation of Quantum Mechanics". Letters in Mathematical Physics. 3 (5): 367–376. arXiv:quant-ph/0112099. Bibcode:1979LMaPh...3..367D. doi:10.1007/BF00397209. ISSN 0377-9017.
• Fényes, I. (1946). "A Deduction of Schrödinger Equation". Acta Bolyaiana. 1 (5): ch. 2.
• Fényes, I. (1952). "Eine wahrscheinlichkeitstheoretische Begründung und Interpretation der Quantenmechanik". Zeitschrift für Physik. 132 (1): 81–106. Bibcode:1952ZPhy..132...81F. doi:10.1007/BF01338578. ISSN 1434-6001.
• Marshall, T. W. (1963). "Random Electrodynamics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 276 (1367): 475–491. Bibcode:1963RSPSA.276..475M. doi:10.1098/rspa.1963.0220. ISSN 1364-5021.
• Marshall, T. W. (1965). "Statistical Electrodynamics". Mathematical Proceedings of the Cambridge Philosophical Society. 61 (2): 537. Bibcode:1965PCPS...61..537M. doi:10.1017/S0305004100004114. ISSN 0305-0041.
• Nelson, E. (1966). Dynamical Theories of Brownian Motion. Princeton: Princeton University Press. OCLC 25799122.
• Nelson, E. (1985). Quantum Fluctuations. Princeton: Princeton University Press. ISBN 0-691-08378-9. LCCN 84026449. OCLC 11549759.
• Nelson, E. (1986). "Field Theory and the Future of Stochastic Mechanics". In Albeverio, S.; Casati, G.; Merlini, D. (eds.). Stochastic Processes in Classical and Quantum Systems. Berlin: Springer-Verlag. pp. 438–469. doi:10.1007/3-540-17166-5. ISBN 978-3-662-13589-1. OCLC 864657129.
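To put the claim above - that the Schrödinger equation can be read as a diffusion equation for a Markov process - into formulas, here is the standard dictionary used in Nelson-type stochastic mechanics (a sketch in common conventions, not tied to any single reference above). The particle follows a diffusion

$$dX_t = b(X_t,t)\,dt + dW_t, \qquad \langle dW_t\,dW_t\rangle = 2\nu\,dt, \qquad \nu = \frac{\hbar}{2m}.$$

Writing the wave function as $\psi = e^{R+iS}$, so that $\rho = |\psi|^2 = e^{2R}$, one defines the current and osmotic velocities

$$v = \frac{\hbar}{m}\nabla S, \qquad u = \frac{\hbar}{m}\nabla R = \nu\,\nabla\ln\rho,$$

with forward drift $b = v + u$. The density then obeys the Fokker-Planck (diffusion) equation

$$\partial_t \rho = -\nabla\cdot(b\,\rho) + \nu\,\nabla^2\rho,$$

which, because $u\rho = \nu\nabla\rho$ identically, coincides with the continuity equation $\partial_t\rho = -\nabla\cdot(v\rho)$ extracted from the Schrödinger equation; Nelson's additional dynamical condition on the drifts then reproduces the Schrödinger equation itself.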
Promoting learning in large enrollment courses

"Stand up if you teach science or math." The majority of the people present stood, as we explored one way of creating connections between students in large classes. We were following directions from Michael Palmer, Associate Professor and Assistant Director of the University of Virginia's Teaching Resource Center, as he demonstrated techniques for teaching large courses recently as part of Duke's Teaching IDEAS workshops.

Michael teaches a large lecture chemistry class of about 100 students, and, because "we learn by doing", he demonstrated what he does in "two of the most important days in class: the first day, and every other day". I took extensive notes, on behalf of several Duke faculty members who could not attend because they were teaching their large classes.

The first day of class sets expectations and precedent for the semester. Michael began by having participants meet a few of their neighbors, ask their names (he warned that he would quiz us on names) and asked us to list the positive advantages of teaching large enrollment courses. He told us we would have 3 minutes, and at the end, he would raise his hand. When we saw his hand raised, we were to stop talking and raise our hands. As we talked, he briefly joined several of the groups. Then, he called on people by name (presumably he learned a few names as he floated between groups) to share what their group talked about. This activity created a sense of participation, set the ground rules for participating (hand raising being a signal to stop, timed activities) and set up small groups within the larger class. I can imagine this adapting well to a large class in any subject, as the instructor could come up with a question to relate the subject to students' lives.

Next, he asked some questions about the participants in the workshop. He asked people to stand up if they were grad students. Then, if they were postdocs. They sat, and he asked faculty to stand. Then, he repeated asking people to stand based on discipline. Attendees were faculty and grad students, mostly in the sciences. Later, he pointed out that by standing to indicate their interests, students in his class can look around and see who has similar interests, so that they would feel they had things in common with classmates and not feel faceless and disconnected in a large class.

Then, Michael stated the learning goal for the first day – to help students think like scientists. He talked about making observations, seeing into the things we see every day. He showed a few images, and asked participants to shout out where they saw chemical processes or products of chemistry. Then, with our neighbors, we discussed a question about chemistry from one of the pictures. This activity could be adapted to any large enrollment course (asking about the course subject). In addition, using observations and asking questions worked towards achieving his learning goal of having students think like scientists.

One faculty member asked how to get through required material if the instructor does activities. Michael stated that there's always a balance between content and learning, and he is willing to sacrifice some content to help students learn. He pointed out that when learners take an active role in their learning, it's slower at first, then learning goes much faster. In addition, when students are actively engaged in class, you can ask them to do a lot more outside of the class, to incorporate more content.
An example class focused on the Schrödinger equation. First, Michael created a visual image of what this equation meant, by having volunteers represent protons and electrons, while asking people about their properties (size, speed, charge) and how the volunteers should behave. He then asked a question about terms in the equation (which required applying what he had just covered) and captured participant answers using "clickers", aka personal response systems. He displayed the range of responses and asked for justifications for each of the two most popular responses. Then, he asked participants to justify their answers to their neighbors, and then polled us again (note: this technique follows Eric Mazur's method for teaching conceptual understanding using clickers).

Participants then discussed what they did and how that affected them as learners. Many commented how effective it was to have a visual device instead of the equation to focus on. Michael pointed out that we learn by constructing knowledge, and we construct knowledge based on things we already know. In class, students need enough structure so they can tie on information - not all of the information, but enough so they can build on it. Teaching is not "okay, they got it"; it is building on what students know, dealing with misconceptions, and circling back to the main concepts to reinforce knowledge. Participants liked using the clickers more than once to answer the question – justifying your answer to your neighbor stresses that it's not just about getting the right answer, but making the argument to get the right answer. Michael expanded on this, pointing out that students often equate familiarity with understanding, so he builds opportunities to practice into class. In this way, students are confronted with their questions during class, where they can be addressed immediately rather than later as students attempt homework.

In summary, if you want to teach a large enrollment course well, engage the students. Michael observed that managing a large course is like preparing for a wedding every week, only without the honeymoon. You have to figure out how to do everything ahead of time, and think through the timing, how students will form groups, how much time they will have, and prepare all technology. "You cannot wing a large class."

To manage communicating with students, put limits on communication, like email. Michael puts an FAQ on the course website – if students ask anything that is in the FAQ, he answers "look at the FAQ". He sends out a weekly reminder of due dates and what is due for class that week. He also maintains a bank of stock answer emails so he can cut and paste his common answers.

What were the advantages of teaching a large class (from our first activity)? Here are some: you can share your enthusiasm with many other people, a large class is good for group activities, diversity in the classroom generates lots of ideas, you can use small groups to shrink the size of the class, and large classes are an efficient use of resources.

He shared an article he had written on Little Things Matter in Large Course Instruction.

Want more? Here's his list of references:
• Carbone, Elisa. Teaching Large Classes: Tools and Strategies. Thousand Oaks, CA: Sage Publications, 1998.
• Heppner, Frank. Teaching the Large College Class: A Guidebook for Instructors with Multitudes. San Francisco: Jossey-Bass, 2007.
• McKeachie, Wilbert J. Teaching Tips: Strategies, Research, and Theory for College and University Teachers, 11th ed. Boston: Houghton Mifflin, 2002.
• Stanley, Christina A. and M. Erin Porter. Engaging Large Classes: Strategies and Techniques for College Faculty. Boston: Anker Publishing, 2002.
• Weimer, Maryellen Gleason. Teaching Large Classes Well. San Francisco: Jossey-Bass, 1987.

Author: Andrea Novicki
Sunday, April 30, 2006

The Final Theory: two stars

A short comment: a reader has pointed out that right now, the crackpot book by
• Mark McCutcheon
has an average rating of 2 stars because of 200 one-star reviews that suddenly appeared on the website. The new reviews are a lot of fun: many reviews come from Brian Powell, Jack Sarfatti, Greg Jones, Quantoken, David Tong, me, and many others. Many of the readers have written several reviews - and you can see how they struggled to make their reviews acceptable. ;-) When we first informed about the strange system, McCutcheon's book had an average of 5 stars and no bad reviews at all. The previous blog article about this story is here.

Saturday, April 29, 2006

How to spend 6 billion dollars?

Related: What can you buy for 300 billion dollars?

What is the best way to spend 6 billion dollars?
• Two weeks of Kyoto
• ILC: the linear collider
• One month of war in Iraq
• Millions of free PCs for kids
• Ten space shuttle flights

Additional comments: the world pays about 6 billion dollars for two weeks of the Kyoto protocol, which cools down the Earth by 0.00006 degrees or so. The International Linear Collider would have the capacity to measure physics at several TeV more accurately than the LHC, but it is also more expensive - about 6 billion dollars. The U.S. pays 6 billion dollars for one month of the military presence in Iraq. One could buy 60 million computers for kids if the price were $100 as MIT promises. Whenever you launch a space shuttle, you pay about 600 million USD.

Klaus meets Schwarzenegger

Figure 1: Californian leader Schwarzenegger with his Czech counterpart during a friendly encounter in Sacramento. Arnold has accepted Klaus' invitation to the Czech Republic.

When Czech president Václav Klaus visited Harvard, he complained that the capitalism of the European Union is not the genuine capitalism that he always believed in - the capitalism as taught by the Chicago school - but rather a kind of distorted, socialized capitalism, something that could be taught here at Harvard. ;-) Finally, he could speak to peers - at the Graduate School of Business at the University of Chicago. The other speakers over there agree with Klaus' opinions. In his speech, he explained that the Velvet Revolution was done by the people inside the country: it was not imported. Equally importantly, Americans have a naive understanding of the European unification because they don't see the centralized, anti-liberal dimension of this process.

Twenty years after Chernobyl

On Wednesday morning, it's been 20 years since the Chernobyl disaster. The communist regimes could not pretend that nothing had happened (although in the era before Gorbachev, they could have tried to do so) but they had attempted to downplay the impact of the meltdown. At least this is what we used to say for twenty years. You may want to look at how BBC news about the Chernobyl tragedy looked 20 years ago. Ukraine remembered the event (see the pictures) and Yushchenko wants to attract tourists to Chernobyl. You may see a photo gallery here. Despite the legacy, Ukraine has plans to expand nuclear energy. Today I think that the communist authorities did more or less exactly what they should have done - for example, try to avoid irrational panic. It seems that only 56 people were killed directly and 4,000 people indirectly. See here.
On the other hand, about 300,000 people were evacuated, which was a reasonable decision, too. And animals are perhaps the best witnesses for my statements: the exclusion zone - now an official national park - has become a haven for wildlife, as National Geographic also explains:
• Reappeared: Lynx, eagle owl, great white egret, nesting swans, and possibly a bear
• Introduced: European bison, Przewalski's horse
• Booming mammals: Badger, beaver, boar, deer, elk, fox, hare, otter, raccoon dog, wolf
• Booming birds: Aquatic warbler, azure tit, black grouse, black stork, crane, white-tailed eagle (the birds especially like the interior of the sarcophagus)
Ecoterrorists in general and Greenpeace in particular are very wrong whenever they say that the impact of technology on wildlife must always have a negative sign. In other words, the impact of that event has been exaggerated for many years. Moreover, it is much less likely that a similar tragedy would occur today. Nuclear power has so many advantages that I would argue that even if the probability of a Chernobyl-like disaster in the next 20 years were around 10%, it would still be worth using nuclear energy.

Friday, April 28, 2006

Yuval Ne'eman died

Because this is a right-wing physics blog, it is necessary to inform you about the saddening news - news I heard from Ari Pakman yesterday - that Yuval Ne'eman (*1925), an eminent Israeli physicist and right-wing politician, died yesterday. If you're interested, you can read the article about him on Wikipedia and Peter Woit's blog, much like the text of Yisrael Medad, Ne'eman's political advisor. News summarized by Google are here. In 1961, Ne'eman published a paper with a visionary title
• Derivation of strong interactions from a gauge invariance
As far as I understand, the symmetry he was talking about was the flavor symmetry, which is not really a gauge symmetry. Ne'eman co-authored the book "The Eightfold Way" with Murray Gell-Mann, contributed tremendously to the development of nuclear and subnuclear physics in Israel (which includes the nuclear weapons), and was the president of Tel Aviv University, among many other organizations.

Science and fundamental science

Chad Orzel did not like the proposals to build the ILC because they are derived from the assumption that high-energy physics is a more fundamental part of physics than other parts of physics - and he disagrees with this assumption. Instead, he argues that technology is what matters and it does not depend on particle physics. Also, Chad explains that one can have a long career without knowing anything about high-energy physics - which seems to be a rather lousy method to determine the fundamental value of different things. There are three main motivations why people stretch their brains and think about difficult things and science. We may describe the corresponding branches of science as follows:
• recreational mathematics
• applied science
• pure science
Recreational mathematics is studied by people to entertain themselves and show others (and themselves) that they are bright. Chess, in flash or without it, may be viewed as a part of this category. People do this sort of activity because it is fun. Comedians are doing similar things although their work requires rather different skills. In this category, entertainment value is probably the main factor that determines the importance. People do whatever makes them happy and excited.
If someone else does things on their behalf, they prefer those with a higher entertainment value. The invisible hand of freedom and the free market pretty much takes care of this activity. The rules of chess depend on many historical coincidences. Other civilizations could have millions of other games with different rules and the details really don't matter: what matters is that you have a game that requires you to turn your brain on.

Applied science is studied because scientific insights can lead to economic benefits. They can improve people's lives, their health, give them new gadgets, and so forth. The practical applications are the driving factor behind applied science. People, corporations, and scientists pay for applied science because it brings them practical benefits. It is often (but not always) the case that the benefits occur at shorter time scales, and it is possible for many corporations and individuals to provide applied scientists with funding. And if you look around, you will see that many fields of applied science are led by laboratories of large corporations - such as IBM, drug companies, and others.

Pure science is studied because human beings have an inherent desire to learn the truth. In our Universe, the truth turns out to be hierarchical in nature. It is composed of a large number of particular statements and insights that can typically be derived from others. For equivalent insights, the derivations can work in both directions. In many other cases, one can only derive A from B but not B from A. The primary axioms, equations, and principles that can be used to derive many others are, by definition, more fundamental. The word "fundamental" means "elementary, related to the foundation or base, forming an essential component or a core of a system, entailing major change". If you respect the dictionaries, the physics of polymers may be interesting, useful, and important - but it is not too fundamental. If Chad Orzel or anyone else offers a contradictory statement, he or she abuses the language. Among the disciplines of physics, high-energy physics is more fundamental than low-energy physics. Moreover, I think that as long as we talk about pure science, being "fundamental" in this sense is a key component of being important. If we want to learn the scientific truth about the world, we want the most fundamental and accurate truth we can get.

I am not saying that other fields should be less supported. Nor am I proposing a hierarchical structure between the people who choose different specializations. What I am saying is that other fields that avoid fundamental questions about Nature are being chosen as interesting not only because of their pure scientific value but also because of their practical or entertainment value. You may be trying to figure out what happens with a particular superconductor composed of 150-atom molecules under particular conditions. The number of similar problems may exceed the number of F-theory flux compactifications. How can you decide whether a problem like that - or any other problem in science - is important? As argued above, there are many different factors that decide about the answer: entertainment value, practical applications, and the ability to reveal major parts of the general truth. I guess that the practical applications will remain the most likely justification of specialized research on a very particular type of superconductor.
People and societies may have different motivations to study different questions of science. If you extend this line of reasoning, you will realize that people can also do many things - and indeed, they do many things - that have no significant relation to science. And they can spend - and indeed, do spend - their money on many things that have nothing to do with science, especially pure science. And it's completely legitimate and many of these things are important or cool.

When you think about the support of science in general, what kind of activity do you really have in mind? I think that pure science is the primary category that we consider. Pure science is the most "scientific" part of science - one that is not motivated by practical applications. As we explained above, pure science has a rather hierarchical structure of insights. If something belongs to pure science, it does not mean that it won't have any applications in the future. In the 1910s-1930s, radioactivity was abstract science. By various twists and turns, nuclear energy became pretty useful. There are surely many examples of this kind. The criterion that divides science into pure science and applied science is not the uncertain answer to the question whether the research will ever be practically useful: the criterion is whether the hypothetical practical applications are the main driving force behind the research.

Societies may be more interested in pure science or less interested in pure science. The more they are interested in pure science, the more money they are willing to pay for pure science. A part of this money is going to pure science that is only studied as pure science; another part will end up in fields that are partly pure and partly applied. Chad Orzel thinks that if America saves half a billion dollars for the initial stages of the ILC collider, low-energy physics will get an extra half a billion dollars. I think he is not right. The less a society cares about pure science - even about the most fundamental questions in pure science such as those in high-energy physics - the less it is willing to pay for other things without predictable practical applications or entertainment value. Eliminating high-energy experimental physics in the U.S. would be a step towards the suppression of experimental pure science in general.

Thursday, April 27, 2006

Iran may nuke Czechia, Italy, Romania

According to Haaretz, Iran has just received a first batch of BM-25 missiles from its ally in the Axis of Evil, namely North Korea. They are able to carry nuclear warheads and attack countries such as the Czech Republic, Italy, and Romania. Such a conflict is not hard to start. Imagine that sometime in the future, for example on August 22nd, 2006, Iranian troops suddenly attack Romanian oil rigs on their territory. Romania will respond nervously - and the mad president of Iran will have an opportunity to check out his nukes. The Czech Republic is, together with England, one of two European countries on an Iranian black list of countries whose citizens are not allowed to get 15-day visas for Iran. Some Muslims in the Czech Republic preach that Islamic Shari'a law should be adopted by Czechia. The diplomatic relations between Czechia and Iran cooled down 8 years ago when Radio Liberty (more precisely in Iran: Radio Tomorrow) started to broadcast anti-government programs in Persian from Prague. See here.
US told to invest in particle physics

The National Academy of Sciences has also recommended that the U.S. invest in neutrino experiments and high-precision tests of the Standard Model, to stop the motion of the center of mass of particle physics away from the U.S.

New York Times

Dennis Overbye from the New York Times describes the same story: the ILC must be on American soil. See also Nature.

CERN new tax

Meanwhile, CERN has adopted the digital solidarity principle: 1% of ICT-related transactions must be paid to CERN.

Matt Strassler has just described their fascinating work on the pomeron with Richard Brower, Chung-I Tan, and Joe Polchinski. Return 40 years into the past. The research that eventually evolves into string theory is proposed as a theory of strong interactions: something that would be known as a failed theory of strong interactions for the following 30 years. Things only start to slowly change after the 1997 discovery by Juan Maldacena, and a steady flow of new insights eventually leads to a nearly full revival of the description of strong interactions using a "dual" string theory, albeit a string theory more complicated than what was envisioned in the late 1960s. QCD can be equivalently described as the old string theory with some modern updates: higher-dimensional and braney updates.

The basic concepts of Regge physics included the Regge trajectory, a linear relation between the maximum spin "J" that a particle of squared mass "m^2" can have; the slope - the coefficient "alphaprime" of the linear term "alphaprime times m^2" - is comparable to the inverse squared QCD scale. The dependence of "J" could be given by a general Taylor expansion, but both experimentally and theoretically, the linear relation was always preferred. Note that "alphaprime" in "the" string theory that unifies all forces is a much, much smaller area than the inverse squared QCD scale (the cross section of the proton). We are talking about a different setup in AdS/QCD where the four-dimensional gravity may be forgotten. This picture is not necessarily inconsistent with the full picture of string theory with gravity as long as you appreciate the appropriately warped ten-dimensional geometry. At this moment, you should refresh your memory about chapter 1 of the Green-Schwarz-Witten textbook.

There is an interesting limit of scattering in string theory (a limit of the Veneziano amplitude) called the Regge limit: the center-of-mass energy "sqrt(s)" is sent to infinity but the other Mandelstam variable "t" - which is negative in the physical scattering - is kept finite. The scattering angle "sqrt(-t/s)" therefore goes to zero. In this limit, the Veneziano amplitude is dominated by the exchange of intermediate particles of spin "J". Because the indices from the spin must be contracted, the interaction contains "J" derivatives, and it therefore scales like "Energy^J". Because there are two cubic vertices like that in the simple Feynman diagram of the exchange type, the full amplitude goes like "Energy^{2J}=s^J" where the most important value of the spin "J" is the linear function of "t" given by the linear Regge relation above. The amplitude behaves in the Regge limit like "s^J(t)" where "J(t)" is the appropriate linear Regge relation. You can also write it as "exp(J(t).ln(s))". Because "t=-s.angle^2", you see that the amplitude is Gaussian in the angle (more precisely, in the transferred momentum "sqrt(-t)"). The width of the Gaussian goes like "1/sqrt(ln(s))" in string units.
Correspondingly, the width of the amplitude Fourier-transformed into the transverse position space goes like "sqrt(ln(s))" in string units. That should not be surprising: "sqrt(ln(s))" is exactly the typical transverse size of the string that you obtain by regulating the "integral dsigma x^2" which equals, in terms of the oscillators, "sum (1/n)" whose logarithmic divergence must be regulated. The sum goes like "ln(n_max)" where "n_max" must be chosen proportional to "alphaprime.s" or so. If you scatter two heavy quarkonia (or 7-7 "flavored" open strings in an AdS/CFT context, think about the Polchinski-Strassler N=1* theory) - which is the example you want to consider - the interaction contains a lot of contributions from various particles running in the channel. But the formula for the amplitude can be written as a continuous function of "s,t". So it seems that you are effectively exchanging an object whose angular momentum "J" is continuous. Whatever this "object" is, you will call it a pomeron. In perturbative gauge theory, such pomeron exchange is conveniently and traditionally visualized in terms of Feynman diagrams that are proportional to the minimum power of "alpha_{strong}" that is allowed for a given power of "ln(s)" that these diagrams also contain: you want to maximize the powers of "ln(s)" and minimize the power of the coupling constant and keep the leading terms. When you think for a little while, this pomeron exchange leads to the exchange of DNA-like diagrams: the diagrams look like ladder diagrams or DNA. There are two vertical strands - gluons - stretched in between two horizontal external quarks in the quarkonia scattering states. And you may insert horizontal sticks in between these two gluons, to keep the diagrams planar. If you do so, every new step in the ladder adds a factor of "alpha_{strong}.ln(s)". You can imagine that "ln(s)" comes from the integrals over the loops. What is the spin of the particles being exchanged for small values of "t", the so-called intercept (the absolute term in the linear relation)? It is a numerical constant between one and two. Matt essentially confirmed my interpretation that you can imagine QCD to be something in between an open string exchange (whose intercept is one) and a closed string exchange (whose intercept is two). The open string exchange with "J=1" is valid at the weak QCD coupling - it corresponds to a gluon exchange. At strong coupling, you are exchanging closed strings with "J=2". For large positive values of "t", you are in the deeply unphysical region because the physical scattering requires negative values of "t" (spacelike momentum exchange). But you can still talk about the analytical structure of the scattering amplitude - Mellin-transformed from "(s,t)" to "(s,J)". For large positive "t", you will discover the Regge behavior which agrees with string theory well. Unfortunately, this is the limit of scattering that can't be realized experimentally. Nevertheless, for every value of "t", you find a certain number of effective "particles" that can be exchanged - with spins up to "J" which is linear in "t". The negative values of "t" can be probed experimentally, and this is where string theory failed drastically in the 1970s: string theory gave much too soft (exponentially decreasing) behavior of the amplitude at high energies even though the experimental data only indicated a much harder (power law) behavior. 
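Before moving on, it may help to collect the Regge-limit formulas of the last two paragraphs in one place (standard conventions; a sketch, not a derivation):

$$\mathcal{A}(s,t)\;\sim\;\beta(t)\,s^{J(t)}=\beta(t)\,e^{J(t)\ln s},\qquad J(t)=J(0)+\alpha' t .$$

With the momentum transfer $q$ defined by $t=-q^2$, this reads

$$\mathcal{A}\;\sim\;s^{J(0)}\,e^{-\alpha' q^{2}\ln s},$$

a Gaussian in $q$ of width proportional to $1/\sqrt{\alpha'\ln s}$; its Fourier transform to impact-parameter space is a Gaussian of width proportional to $\sqrt{\alpha'\ln s}$, the logarithmically growing transverse size of the string mentioned above.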
So now you isolate two different classes of phenomena:
• the naive string theory is OK for large positive "t"
• the old string theory description of strong interactions fails for negative "t"; the linear Regge relation must break down here
But the old string theory only fails for negative "t" if you don't take all the important properties of that string theory into account. The most important property that was forgotten 35 years ago was the new, fifth dimension. The spectrum of particles - eigenvalues of "J" - is related to the Laplacian, but it is not just a four-dimensional Laplacian; it also includes a term in the additional six dimensions, especially the fifth holographic dimension of the anti de Sitter space. And this term can become - and indeed, does become - important.

What is the spectrum of allowed values of "J" of intermediate states that you can exchange at a given value of "t"? Recall that each allowed value of "J" of the intermediate objects generates a pole in the complex "J" plane - or a cut whenever the spectrum of allowed "J" becomes continuous. For large positive "t", the spectrum contains a few (roughly "alphaprime.t") eigenvectors with positive "J"s, and a continuum with "J" being anything below "J=1". For negative values of "t", you only see the continuum spectrum (a cut) for "J" smaller than one. Don't forget that the value of "J" appears as the exponent of "s" in the amplitude for the Regge scattering. We are talking about something like "s^{1.08}" or "s^{1.3}" - both of these exponents appear in different kinds of experiments and can't be calculated theoretically at this moment.

Matt argues convincingly that the Regge behavior for large positive "t", with many poles plus the cut below "J=1", is universal. The "empty" behavior at large negative "t", where you only see the continuum below "J=1", is also universal. It is only the crossover region around "t=0" that is model-dependent and where the details of the string-theoretical background enter. And they can calculate the spectrum of "J" as a function of "t" in toy models from string theory. They assume that the string-theoretical scattering in the AdS space takes place locally in ten dimensions, and just multiply the corresponding amplitudes by various kinematical and warp factors - the usual Polchinski-Strassler business. The spectrum of poles and cuts in the "J" plane reduces to the problem of finding the eigenvalues of a Laplacian - essentially to a Schrödinger equation for a particle propagating on a line. You just flip the sign of the energy eigenvalues "E" from the usual quantum mechanical textbooks to obtain the spectrum of possible values of "J". And they can determine a lot of things just from the gravity subsector of string theory - where you exchange particles of spin two (graviton) plus a small epsilon that arises as a string-theoretical correction. For large positive "t", you obtain a quantum mechanical problem with a locally negative (binding) potential that leads to the discrete states - those that are seen on the Regge trajectory.

When all these things are put together, they can explain a lot about physics observed at HERA. The calculation is not really a calculation from first principles because they are permanently looking at the HERA experiments to see what they should obtain. But they are not the first physicists who use these dirty tricks: in the past, most physicists were constantly cheating and looking at the experiments most of their time. ;-)

Wednesday, April 26, 2006
Rae Ann: Alien recycling

By Rae Ann, one of the four winners who have seen the #400,000 figure.

My first grader brought home some interesting EPA publications for school children. While I totally support teaching children to recycle and be mindful of wise use of resources, I think it's a little off to tell them that 'garbage leads to climate change'. And what's with the little flying saucers and aliens (graphics in the publications)? What do they have to do with climate change and garbage?? One publication does open with the statement, "Space creatures might think the idea of reusing containers is an alien concept but here on Earth it's easy to keep an old jar out of the trash and give it new life." (That is a direct quote and the missing comma is their punctuation error.) Well, how does the government know that aliens don't recycle? Is it because they have left a bunch of their stuff here? Hmm? Sounds like a very prejudiced and discriminatory attitude to me. What is that teaching our kids about aliens??

The Czech Fabric of the Cosmos

My friend Olda Klimánek has translated Brian Greene's book "The Fabric of the Cosmos" into Czech - well, I was checking him a bit, reading his translation twice - and the book was just released by Paseka, a Czech publisher, under the boring name "Struktura vesmíru" (The Structure of the Universe). The other candidate titles were just far too poetic. I think he is a talented writer and translator and there will surely be many aspects in which his translation is gonna be better than my "Elegantní vesmír" (The Elegant Universe). What I find very entertaining is the different number of pages of this book (in its standard hardcover editions) in various languages:
• Czech: Struktura vesmíru, 488 pages
• Polish: Struktura kosmosu, 552 pages
• English: The Fabric of the Cosmos, 576 pages
• Portuguese: O tecido do cosmo, 581 pages
• Italian: La trama del cosmo, 612 pages
• French: La magie du Cosmos, 666 pages
• Korean: 우주의 구조, 747 pages
• German: Der Stoff, aus dem der Kosmos ist, 800 pages
I am not kidding and as far as I know, Olda's translation is complete. If you need to know, 800/488 = 1.64. ;-) The Czech Elegant Universe was also much shorter than the German one but the ratio was less dramatic. I like the rigid rules of German but this inflation of the volume is simply off the base. The Czech language has similar grammar rules but it avoids the articles and it has a much freer word order. A slightly more complex system of declension removes many prepositions. And Olda may simply be a more concise translator. :-)

Tuesday, April 25, 2006

Uncle Al: on the equivalence principle

By Uncle Al, who has submitted a #400,000 screenshot.

Does the Equivalence Principle have a parity violation? Weak interactions (e.g., the Weak Interaction) routinely violate parity conservation. Gravitation is the weakest interaction. Either way, half of contemporary gravitation theory is dead wrong. Gravitation theory can be written parity-even or parity-odd; spacetime curvature or spacetime torsion. Classical gravitation has Green's function Newton and metric Einstein or affine Weitzenböck and teleparallel Cartan. String theory has otherwise and heterotic subsets. Though their maths are wildly different, testable empirical predictions within a class are exactly identical...
...with one macroscopic disjoint exception: Do identical chemical composition local left and right hands vacuum free fall identically? Parity-even spacetime is blind to geometric parity (chirality simultaneously in all directions). Parity-odd spacetime would manifest as a background pseudoscalar field. The left foot of spacetime would be energetically differently fit by a sock or left shoe compared to a right shoe. String theory could be marvelously pruned. Does a single crystal solid sphere of space group P3(1)21 quartz (right-handed screw axes) vacuum freefall identically to an otherwise macroscopically identical single crystal solid sphere of space group P3(2)21 quartz (left-handed screw axes)? Both will fall along minimum action paths. In parity-odd spacetime those local paths will be diastereotopic and measurably non-parallel -- a background left foot fit with left and right shoes.

Frank Wilczek: Fantastic Realities

Technical note: Everyone who was visitor number 400,000 and who submits a URL for the screenshot proving the number today will be allowed to post any article on this blog, up to 6 kilobytes. The reader #400,000 was Rae Ann, who just returned from a trip - what timing. :-) Uncle Al still had the page open (after a reload) when it was showing #400,000, much like Doug McNeil. I have no way to tell who was the first one. The others just reloaded the page and obtained the same number because it was not their first visit today, and it thus generated no increase of the counter. Congratulations to all three.

Yes, I just saw this irresistible book cover at Betsy Devine's blog. The book is called Fantastic Realities and Frank Wilczek is apparently using a QCD laser. The journeys include many of Wilczek's award-winning Reference Frame columns. Have you heard of Wilczek's Reference Frame columns in Physics Today? Let me admit that I have not. ;-) Because of the highly positive reviews, your humble correspondent has just decided to double the number of copies that Frank Wilczek is going to sell. Right now, yesterday's rank is 100,000 and today's rank is 130,000. Look at the promotional web pages of the book, buy the book, and see tomorrow what it does with the rank. Remember that the rank is approximately inversely proportional to the rate of selling the books.

Update: At 7:00 p.m., the rank was about 11,000, better than 136,000 in the morning. On Wednesday 8:30 a.m., the rank was 9,367, an improvement by a factor of fifteen from the rank 24 hours earlier. The promotional web pages also reveal that Betsy is proud to be the 4th Betsy found by Google. Congratulations, and I wish her to capture the most important Frank Wilczek blog award, too. ;-)

Monday, April 24, 2006

Bruce Rosen: brain imaging

Bruce Rosen started the colloquium by saying that it is useful to have two degrees - PhD and MD - because every time he gives a talk for the physicians, he may impress them by physics, and every time he speaks in front of the physicists, he may impress them by medicine. And he did. Although there are many methods to study the anatomy and physiology of the brain - such as EEG and/or flattening the brain with a hammer, which is what some of Rosen's students routinely do - Rosen considers NMR to be the epicenter of all these methods. (This is a conservative physics blog, so we still refer to these procedures as NMR and not MRI.) This bias should not be unexpected because Rosen's advisor was Ed Purcell.
Some of the results he showed were obtained by George Bush, who is an extremely smart scientist as well as a psychiatrist, besides being a good expert in B-physics. Rosen showed a lot of pictures and video sequences revealing how the activity of the brain depends on time in various situations, on the presence of various diseases, on the age, and on the precise way the brain is being monitored. Many of these pictures were very detailed, and methods already exist to extract useful data from the pictures and videos that can't be seen by the naked eye. Human brains are being observed at 10 Tesla or so, and a magnetic field of 15 Tesla is the state-of-the-art environment to scan the brains of smaller animals. The frequency used in these experiments is about half a gigahertz. Many tricks have been found to drastically reduce the required amount of drugs that the subject must take before the relevant structures become transparent. Most of the data comes from observations of water, which is a dominant compound in the human body and not only the human body. It turns out that the blood that carries oxygen and the blood that carries carbon dioxide is diamagnetic and paramagnetic, respectively. That simplifies the NMR analysis considerably. There's a lot of data in the field and fewer ways to draw the right conclusions and interpretations out of the data.

OVV in higher dimensions?

Brett McInnes proposes a generalization of the Hartle-Hawking approach to the vacuum selection problem pioneered by Ooguri, Vafa, and Verlinde (OVV) - and described by this blog article - to higher dimensions. McInnes identifies the existence of two possible Lorentz geometries associated with one Euclidean geometry as the key idea of the OVV paradigm. He argues that the higher-dimensional geometries must have flat compact sections, which is certainly a non-trivial and possibly incorrect statement.

Everything you wanted to know about Langlands

... geometric duality but you were afraid to ask could be answered in this 225-page-long paper by Edward Witten and Anton Kapustin. Previous blog articles about the Langlands program include the following ones. A semi-relevant discussion about related topics occurs at Not Even Wrong.

Translation and related news

Just a technical detail: I've added two utilities to the web pages of individual articles:
• related news and searches, powered by Google (blue box under each article)
• translations of the blog articles to German, French, and Spanish, powered by Google (three flags at the top of the articles)
I apologize to the readers from the remaining 142 countries that also visit this website - according to the Neocounter - besides the three countries indicated above, that their language has yet to be included. :-)

Recent comments

Also, "recent comments" were added to the sidebar of the main page. The recent slow comments in the lower Manhattan (skyscraper) area are sorted according to the corresponding article. You may find out which article a comment belongs to if you hover over the timestamp. You can also click it. There are also ten "recent fast comments" in a scrolling window at the upper portion of the sidebar.

Sunday, April 23, 2006

Leonard Susskind Podcast

I am just listening to a podcast with Leonard Susskind. You can find the link somewhere on this page. I will add it here later. Then you click "Podcasts" on the left side, and the second one is Susskind: the 5.57 MB file is 23:55 long. Entertaining, recommended.

Manic Miner
The Manic Miner flash game was removed from the page because it was making a lot of noise. Please click the second "Manic miner". How many people used to play such things 20 years ago? Links to previous flash games on this blog can be found here.

PageRank algorithm finds physics gems

Several colleagues from Boston University and from Brookhaven have proposed a method to look for influential papers using the same algorithm that Google uses to rank web pages. This algorithm uses the list of web pages (or papers) and the links between them (or citations) as input. The web pages or papers are nodes of a graph and the citations are oriented links. It works as follows: You have lots of "random walkers". Each of them sits at some paper XY. In each step, each random walker either jumps to a random paper in the full database, with probability "d", or it jumps to a random paper mentioned in the references of the previous paper XY, with probability "1-d". Once the number of random walkers associated with each paper reaches equilibrium (approximately), the algorithm terminates. The number of walkers at each paper gives you the rank. (A minimal code sketch of this procedure appears at the end of this section.)

Saturday, April 22, 2006

Illinois: Particle Accelerator Day

Illinois' governor has declared this Saturday (or Friday?) to be the Particle Accelerator Day and everyone must celebrate. Mr. Blagojevich is trying to attract the future linear ILC collider to his state. Congratulations, Argonne and Fermilab. Some ILC attempts of these two facilities are illuminated here. Meanwhile, on the same day, the celebrations of the Earth Day, invented by John McConnell, dominate in Massachusetts. Those who are already fed up with the Earth - and with Google Earth - may try Google Mars. Via JoAnne.

Detlev Buchholz: algebraic quantum field theory

Prof. Detlev Buchholz, who is a rather famous researcher in the algebraic quantum field theory community, has given the duality seminar today, and we had a tête-à-tête discussion yesterday. He has attempted to convert the string theorists to the belief system of algebraic quantum field theory, which is not a trivial task. Algebraic quantum field theory is a newer version of the older approach of axiomatic quantum field theory. In this approach, the basic mathematical structure is the algebra of bounded operators acting on the Hilbert space. In fact, for every region R, you can find and define a subalgebra of the full algebra of operators, they argue. A goal is to construct - or at least prove the existence of - quantum field theories that do not depend on any classical starting point. This is a nice goal. Because string theorists know S-dualities and many other phenomena in field theory and string theory which imply that a quantum theory can have many classical descriptions - more precisely, that it can have many classical limits - we are certainly open to the possibility that we will eventually be able to formulate our interesting theories without any direct reference to a classical starting point. Instead, we will be able to derive all possible classical limits of string/M-theory from a purely non-classical starting point. On the other hand, the particle physics and string theory communities are deeply rooted in experimental physics and we simply do not want to talk about some abstract concepts without having any particular theory that can at least in principle predict the results of experiments and that respects these concepts. In fact, we want to focus on theories of the same kind that are relevant for observational physics.
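Here is the minimal sketch of the random-walker ranking promised in the PageRank item above. The papers and their citations are made up, and a single long walk stands in for the many walkers, which is equivalent once equilibrium is reached.

```python
import random
from collections import Counter

# Citation graph: paper -> list of papers it cites (all names invented).
refs = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["A", "C"],
}
papers = list(refs)
d = 0.15            # probability of jumping to a random paper
steps = 100_000

walker, visits = random.choice(papers), Counter()
for _ in range(steps):
    if random.random() < d or not refs[walker]:
        walker = random.choice(papers)          # random jump anywhere
    else:
        walker = random.choice(refs[walker])    # follow a random reference
    visits[walker] += 1

# The equilibrium share of visits is the paper's rank.
for paper, n in visits.most_common():
    print(paper, n / steps)
```

The same stationary distribution can of course be computed directly as the leading eigenvector of the transition matrix; the walker simulation just mirrors the description in the post.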
Saturday, April 22, 2006

Illinois: Particle Accelerator Day. Illinois' governor has declared this Saturday (or Friday?) to be the Particle Accelerator Day, and everyone must celebrate. Mr. Blagojevich is trying to attract the future ILC linear collider to his state. Congratulations, Argonne and Fermilab. Some of the ILC-related efforts of these two facilities are illuminated here. Meanwhile, on the same day, the celebrations of the Earth Day, invented by John McConnell, dominate in Massachusetts. Those who are already fed up with the Earth - and with Google Earth - may try Google Mars. Via JoAnne.

Detlev Buchholz: algebraic quantum field theory. Prof. Detlev Buchholz, who is a rather famous researcher in the algebraic quantum field theory community, gave the duality seminar today and we had a tête-à-tête discussion yesterday. He has attempted to convert the string theorists to the belief system of algebraic quantum field theory, which is not a trivial task. Algebraic quantum field theory is a newer version of the older approach of axiomatic quantum field theory. In this approach, the basic mathematical structure is the algebra of bounded operators acting on the Hilbert space. In fact, for every region R, you can find and define a subalgebra of the full algebra of operators, they argue. A goal is to construct - or at least prove the existence of - quantum field theories that do not depend on any classical starting point. This is a nice goal. Because string theorists know S-dualities and many other phenomena in field theory and string theory which imply that a quantum theory can have many classical descriptions - more precisely, that it can have many classical limits - we are certainly open to the possibility that we will eventually be able to formulate our interesting theories without any direct reference to a classical starting point. Instead, we will be able to derive all possible classical limits of string/M-theory from a purely non-classical starting point. On the other hand, the particle physics and string theory communities are deeply rooted in experimental physics and we simply do not want to talk about abstract concepts without having any particular theory that can at least in principle predict the results of experiments and that respects these concepts. In fact, we want to focus on theories of the same kind that are relevant for observational physics.

Friday, April 21, 2006

Evolving proton-electron mass ratio? Update: In 2008, a new experiment with ammonia found no time-dependence during the last 6 billion years. Klaus Lange has pointed out a report that describes a Dutch experiment performed primarily at the European Southern Observatory - hold your breath, this observatory is located in Chile. They measured the spectrum of molecular hydrogen, which depends on the proton-electron mass ratio "mu". Note that this ratio is about 1836.15. Twenty years ago I played with the calculator and it turned out that this number can be written as • 6 pi^5 = 1836.12. This agreement has promoted me to the king of all crackpots: with only three characters, namely six, pi, five, I can match around 5 or 6 significant figures of the correct result. Actually, my calculator only had 8 significant figures (with no hidden figures) and I exactly matched the 8 significant figures of 1836.1515 written in the mathematical tables of that time. Later I learned that someone else had actually published this "discovery" fifty years earlier, and the agreement got worse with better calculators and better measurements in particle physics. More seriously, the Dutchmen now claim that the ratio was 1.00002 times higher twelve billion years ago. The New Scientist immediately speculates that this could prove extra dimensions or string theory. I, for one, have absolutely no idea where this statement comes from. I personally believe that these constants have been constant for the last 12 billion years - and moreover, this opinion is completely and naturally compatible with string theory.
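Incidentally, the numerological coincidence is trivial to check on any computer today; a throwaway snippet (the reference value 1836.15267 is a modern determination of mu, not a number from the original text):

```python
import math

mu = 1836.15267            # modern proton-electron mass ratio (assumed reference value)
guess = 6 * math.pi ** 5   # the three-character "crackpot" formula

print(guess)                   # 1836.1181..., i.e. 6 pi^5 = 1836.12 as claimed
print(abs(guess - mu) / mu)    # relative error ~1.9e-5: about five matching digits
```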
George Bush meets Prof. Albert Einstein. As soon as Lee Smolin asked the question, several gifted Korean engineers from a company called Hanson Robotics gave a possible answer by asking a better question: Click the link above to see the final stages of the project. Anatomical pictures are here. The engineers decided that the body was not too important - what matters is Einstein's brain (plus his face, to make a good package). They replaced the body by a robot. Everything has worked fine, so Prof. Albert Einstein, nicknamed Albert HUBO, could meet the president of the United States of America. It is widely believed that Hon. George W. Bush has convinced Prof. Albert Einstein to oppose the attempts of Einstein's colleagues to force Bush to take the nuclear option off the table. HUBO said that the Iranians could be working on the same bomb. Click the photograph above or here to see a directory with many other photographs. Einstein's robotic brother ASIMO, supervised by Koizumi, the prime minister of Japan, met the former Czech prime minister Špidla in 2003 and demonstrated that Špidla was a sourball. See here. So far, Prof. Einstein, much like Honda's ASIMO, only knows how to walk, serve tea, and compute spin foam amplitudes, so he is not terribly useful. But they hope to teach Einstein quantum mechanics and bosonic string theory next week, and how to climb stairs in a few years.

Meeting a robot in 1999. In 1999 or so, when I was at Rutgers, I met a robot in the Busch campus dining hall. He came to us, shook my hand, and we talked about everything - including Heisenberg's uncertainty principle. His voice was a typical computer voice equipped with a very authentic human intonation. He was so interesting and smart! The debate was much more meaningful than most debates with various loop quantum gravity people and many others. I was stunned: had they already succeeded in creating artificial intelligence that exceeds not only 90% of people but also many senior professors? The answer remained mysterious for half a day. Later I could re-check that it was a "synthetic personality" remotely controlled by a human being from about 50 meters away. The human being could see through the robot's eyes, and he could control the motion and submit his speech, which was transformed into the computerized voice color.

Thursday, April 20, 2006

Jefferson Physics Laboratory becomes a historic site. Jefferson Physical Laboratory, where we have our offices, has been declared a historic site by the American Physical Society, mostly because it is the first building that was ever built in the U.S. for physics research. Figure 1: The picture is mine. See the letter that President Lawrence Summers and the department chair John Huth received here. The picture above is from 2002 but you can already see the new attic, which is pretty these days. Recall that it is exactly this Jefferson tower where the first gravitational redshift experiment was done by Pound, Rebka, and Snider in the early 1960s. Its 22.6 meters were enough to measure the 4.92 x 10^{-15} relative change of the frequency of 14.4 keV gamma rays from iron-57. The prediction of general relativity - a redshift factor (1+gh/c^2) (verify the numbers!) - was confirmed with a 1% accuracy.
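Taking up the parenthetical challenge to verify the numbers: a quick sketch of the arithmetic, under the assumption (mine, not stated explicitly above) that the quoted 4.92 x 10^{-15} refers to the combined two-way shift that the experiment actually compared:

```python
g = 9.81       # gravitational acceleration in m/s^2
h = 22.6       # height of the Jefferson tower in meters
c = 2.998e8    # speed of light in m/s

one_way = g * h / c**2     # fractional frequency shift over one trip
print(one_way)             # ~2.47e-15
print(2 * one_way)         # ~4.93e-15, matching the quoted 4.92e-15 figure
```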
Integrability: giant magnons. Diego Hofman and Juan Maldacena - these two physicists should not be confused with Diego Maradona - study the • excitations of N=4 gauge theory in d=4 in the planar limit. Recall that according to the gauge-gravity holographic correspondence, the strong coupling limit describes type IIB string theory on the product space "AdS5 x S5". A few years ago, Berenstein, Maldacena, and Nastase showed that the gauge theory is equivalent not just to pure supergravity but to the full string theory; they identified the strings with the long traces. This research direction has been transformed into the studies of integrability and spin chains (these are the discretized strings) and we have talked about this topic at various places, for example here. This spin chain itself carries excitations, and the most important ones are called magnons: a magnon is an excitation that reverts the direction of a single spin (or the "magnetic moment", if you wish) in the spin chain and propagates as a wave along the chain. In the planar limit, i.e. up to the leading terms in the "1/N" expansion, physics should simplify. Many people have believed for some time that a full exact solution of string theory in this limit should exist. This task is equivalent to a full understanding of the worldsheet of a string propagating in the "AdS5 x S5" background for the simplest choice of its topology. In the variables mentioned above, the question is reduced to the spectrum, the dispersion relations, and the S-matrix of the magnons. Effectively, one needs to study the S-matrix for the various polarizations and encounters a "256 x 256" matrix - a magnon comes with 16 polarizations, so a two-magnon state has 16^2 = 256 components. Its form was recently fixed by Niklas Beisert, up to an overall normalization. Moreover, one month ago, Romuald Janik of Poland showed how the crossing symmetry emerges from the formulae for the S-matrix. Hofman and Maldacena confirm the results but add something extremely interesting: the adjective "giant". In analogy with giant gravitons, you may suspect that there will be a new picture that replaces the original point-like magnon excitations by something big.

Harvard Crimson: environmentalism is dead. The Harvard Crimson has looked at environmentalism with critical and bright Harvard eyes and concluded that environmentalism is dead - and that we have a chance to enter a new era in which the environment itself, not an ideology, is the winner. Piotr Brzezinski '07, a member of the Resource Efficiency Program, argues that all the dire predictions have so far been falsified, and that if our care about the environment is supposed to impact reality in the future, those who care must abandon some methods - such as the authoritative Soviet style manifested in the Kyoto protocol - open some taboos for debate, and start to publish realistic appraisals of reality even if they lead to less exciting headlines in the newspapers.

Wednesday, April 19, 2006

Michele Papucci: neutrino optics. Michele Papucci from Berkeley gave a talk about neutrino optics. There will be a preprint about it with Gilad Perez, Hitoshi Murayama, and one more author whose name will be completed here if necessary. When we test cosmological models, we rely on the ordinary optics of photons. Are there other "eyes" we could use? They must be weakly interacting, so the only possibilities are • gravitational waves • neutrinos Michele only focused on the latter, the neutrinos. More precisely, it is the electron antineutrinos he is interested in. They are produced by supernovae (yes, there is some neutrino oscillation physics you must take into account); on the other hand, the Sun only creates neutrinos, so the solar neutrino background does not affect their proposed experiments. You can't really measure the direction from which they come (pretty fuzzy optics) because the particles created in the inverse beta decay have momenta that are virtually uncorrelated with the antineutrinos' momenta. So the only thing you can measure is the distribution of energies.

BBC climate software confuses 200,000 computers. This story is a good example of how the climate models work in the most optimistic case. The idle time of most PCs is wasted. About 30,000 people like me run software such as MPRIME - the search for the largest known prime numbers in the world. It is a well-defined activity and there are very good reasons to trust this software. Actually, there exist other programs built on top of the BOINC platform, and some of them can be found in this list. However, some people don't like things like LHC@home too much. Instead, they want to save the world and help humanity. So they download the third program in the list above, namely climateprediction.net. You can also join the group of 200,000 enthusiasts, the saviors of the planet, if you click the link above and continue with "Taking part in CPDN". This community will calculate the date of the armageddon. ;-) But wait a minute. The Reference Frame has been saying that the existing climate models are not trustworthy and that those who run them often fail to respect basic principles of science.

Ignore bloggers at your peril. Clifford Johnson has pointed out an article in the Guardian. The article discusses some kind of research about the influence of bloggers. It also mentions three companies that were affected by bloggers, because the bloggers described the physics of Kryptonite locks, McDonald's abracadabra, as well as Dell, whose last CEO was possibly fired.
Well, the visitor data indicates that very different segments of society are being influenced. For example, many people were looking for a semi-naked Angela Merkel today. And of course, people are still interested in Mary Winkler as well as a potential massive nuclear strike. More demanding readers look for physics blogs, uncertainty, as well as the sad story of John Brodie, the physicist.

Tuesday, April 18, 2006

Cosmological breaking of SUSY and the seesaw. Tonight, Michael McGuigan has made a new step in his attempt to make the seesaw mechanism for the cosmological constant realistic: The paper combines the previous work of Michael McGuigan - which we discussed here and which was mostly based on this blog article and/or the comments under this article by Sean Carroll - with the brave proposal of my (former) adviser Tom Banks: Recall that Tom has proposed to interpret the cosmological constant - the curvature of empty space - as the primary effect and the supersymmetry breaking in particle physics as its consequence. This changes the question from "why is the cosmological constant so small" to the question "why is the supersymmetry breaking in particle physics so strong". The supersymmetry breaking induced by the tiny curvature of our Universe would normally be negligible, and Tom circumvents this problem by suggesting that an important exponent in his power law is corrected from the classical value of 1/4 to the value of 1/8 by huge effects of virtual black holes whose loops are localized near the de Sitter horizon. The relation with the seesaw mechanism is not quite clear to me - although both methods of course try to obtain the same kind of result for the vacuum energy (but via different effects, I think). Right now I don't have enough time to tell you exactly what I think about the proposal, but the paper is rather concrete and tries to apply the Wheeler-DeWitt equation to various string-theoretical backgrounds. He seems to show that the off-diagonal elements of the vacuum energy (transitions) exist in three spacetime dimensions or less. Can you obtain these off-diagonal elements from Coleman-De Luccia-like instantons? I believe that the proposal is interesting enough to be looked at. Incidentally, Apple finally offers Mac users a decent operating system. It is called Windows XP.

Monday, April 17, 2006

Stanislav Petrov supersedes Easter Bunny and Jesus Christ. During the pagan era, people would celebrate Easter as the holidays of spring, fertility, and the Easter bunny. The Christians cleverly overwrote this special season with the anniversary of the resurrection of Jesus Christ, our savior. However, things changed again in 2006. The liberal blogosphere, including Cosmic Variance and In Search of 42, among hundreds of other blogs, has replaced the Easter bunny and Jesus Christ by a Soviet military officer: Easter has become the Stanislav Petrov Day. It is not exactly clear why the Easter season was chosen. Well, Stanislav Petrov (*1939) saved the world on September 26th, 1983. He realized that the Soviet computer system was crappy - because it was a technology developed in a left-wing political system - and discarded the warning of his computers that American missiles were approaching the Soviet targets. By having failed to inform his superiors, he arguably saved half a billion lives. ;-) The details had been secret until 1998. However, the rough story was not.
I remember that on Monday, September 26th, 1983, when I was in the 4th grade, during Andropov's era, we were just playing volleyball in the gym or something like that when the school radio announced that the international situation had deteriorated and a conflict was imminent. We never learned anything else beyond this single message on the school radio, and the worries faded away completely. Today, Petrov lives in relative poverty as a Russian pensioner. A San Francisco peace organization named him the new savior of the world (only one of his two predecessors enjoys the same honor; don't confuse the honor with the true savior of modern music) and awarded him the breathtaking amount of $1,000. Congratulations. If someone wants to send him more money, let me know.

Back to 2006. But we live in 2006 and the main target right now is not Moscow but Tehran. Professor James Miller, who is a game theory expert and a candidate for the president of Harvard - one who vows to defeat feminism - has offered a smooth scenario of how the U.S. attacks on Iran will be started and justified. The Israeli prime minister will inform Bush that Israel is threatened and will have to nuke Iran unless the nuclear program of the crazy mullahs is stopped. Because Iran wants to wipe Israel off the map, Israel has a kind of moral right to make such an announcement. The U.S. weapons are much stronger and cleaner than the Israeli weapons. By using both types against Iran, Bush will save not only Israel but also millions of Iranian lives that would otherwise be lost because of the dirty Israeli nukes. Next year, Easter Bunny, Jesus Christ, and Stanislav Petrov will be replaced by George Bush (and James Miller), the new savior.

Mahmoud is probably a nail. Meanwhile, it has been announced that Mahmoud Ahmadinejad is probably a nail. What kind of nail? He is a nail of the Hidden Imam who is secretly the Sovereign of the World and who has been hiding since 941. :-) See here. Mahmoud received the presidency from the Hidden Imam for promising to provoke a clash of civilizations. Mahmoud realizes that the U.S. is the last infidel country whose military is not impotent, and Mahmoud, supported by God, will defeat the U.S. in a long asymmetric war. But he will wait until 2008 when Bush is out of office, because Bush is clearly an aberration - everyone else since Truman would run away. A divine anthropic coincidence puts the triumph of the Iranian Manhattan project, secretly pursued by Imam Hossein Nuclear University, in the same year 2008, Mahmoud argues. Wow. These people are real nutcases, which is not a good combination with the advanced P-2 centrifuges that, according to the New York Times, are suddenly again being developed in Iran. Mahmoud has just given Hamas the same amount that Harvard has pledged for the feminist programs: 50 million dollars. Finally, Reuel Gerecht from AEI asks the question: The U.S. and the U.K. have already been training for an occupation of a fictitious Middle East country called "Korona" in 2015, whose territory happens to coincide with Iran and whose citizens are Iranians. Well, obviously, some dynamics is going on on both sides.

Saturday, April 15, 2006

Carlo Rovelli and graviton propagator. Several readers have asked me what I think about a new paper in loop quantum gravity, an attempt described as a groundbreaking paper by a fellow blogger and included in the "unfinished revolution" by another blogger.
It would be far too dramatic to say that I am flabbergasted, but one thing is clear. The work is so manifestly incorrect that I just can't fully comprehend how someone who has attended at least one quantum field theory course can fail to see it. But of course, yes, I am happy that people are still trying different things and some of them don't get discouraged by decades of failure - and I always open such papers with an enthusiastic hope that a new breakthrough will appear in front of my eyes. ;-) The paper linked above is supposed to be a more complete version of Rovelli's previous graviton propagator paper. Indeed, you can see that several pages of these two papers are identical. Most of these two papers' assumptions are misguided, nearly all the nontrivial steps are erroneous, and the results are incorrect, too.

Semiclassical GR. Let us start with semiclassical gravity. At this level, the graviton propagator is philosophically analogous to the propagators of all other quantum fields you can think of - for example the electromagnetic field. You must start with a background; the simplest background is the flat Minkowski space. This means that you write the full metric as • g_{mn} = eta_{mn} + sqrt(G_Newton) h_{mn} Here, eta_{mn} is the background, i.e. a classical vacuum expectation value of the quantum field, while h_{mn} is the fluctuation around this background that remains a quantum field and is treated as a set of small numbers. The full gravitational action can be expanded in h_{mn}: the quadratic term gives the free action that determines the propagator, while the higher powers of h_{mn}, suppressed by powers of sqrt(G_Newton), define the interaction vertices.

Happy Easter. Something analogous to annihilating letters, jumping frog, shooting frog, and stained glass. Click here for Easter eggs in full screen.

Cuba vs. Czechia 1:1. Meanwhile, Cuba has expelled the Czech diplomat, Mr. Stanislav Kázecký, for spying on behalf of the U.S. - which is most likely not true. The Czech Republic has followed all the decent traditions and refused to extend the visa of a Cuban diplomat, too. :-) While the Czechoslovak Socialist Republic was one of Cuba's closest friends, the Czech Republic is its #2 foe. A large portion of the U.N. resolutions that criticize the situation in Cuba, as well as the trade restrictions for the European Union members, have been proposed by the Czech Republic. There have been many recent incidents between the two countries. For example, the countrymate of mine mentioned above is a psychologist called Helena Houdová. (In fact, she is my citymate, from Pilsen.) She is a former Miss Czech Republic 1999 and the Dean's World hero of the week. In January 2006, she decided to take pictures of the Cuban slums, something that Fidel Castro pretends not to exist. She was immediately arrested (together with her friend, Mariana Kroftová, who is also a model) - for taking the pictures - and the commies confiscated her film. As you can imagine, those communist morons can't really compete with a modern capitalist young woman from the Czech Republic and her state-of-the-art technologies. She stored a memory card from her digital camera in her bra. Today, she is showing the alarming pictures of the "island of freedom" all around the world. Cuba has canceled various celebrations of the Czech national holidays and expelled or temporarily arrested many Czech citizens - the aristocrat Schwarzenberg and the politician Ivan Pilip (with his friend Filip Bubeník) are the two most well-known examples. You can try to liberate Pilip by shooting 50 Cuban agents here.

Friday, April 14, 2006
La Griffe du Lion: prison ratios. La Griffe du Lion has a new technical analysis of a sociological issue. His answer to the question he asks is based on mathematics that is more or less equivalent to his previous analysis of women in science. The conservative states impose a lower threshold for arrest - they tolerate only smaller crime. This makes the groups of people behind bars less selective. Because the black crime Gaussian is broader and higher than the white one, in the same way as the male math aptitude Gaussian is broader and higher than the female one, smaller selectivity translates into a less dramatic ratio between the black and white percentages. It is therefore logical and inevitable that the racial disparity is more striking in the left-wing states. The identity of La Griffe du Lion remains a mystery to us.

Is George W. Bush a feminist? David Goss has sent me an insightful article that starts with the announcement that the Bush administration is going to investigate universities with fewer women in math and science than feminists such as Barbara Boxer would like. Schlafly notices that even though Bush has been the president for more than five years, Bill Clinton's feminist policies are apparently still in force. She asks: Is Bush a feminist or just a gentleman who is intimidated by the feminists? At the practical level of policies, there is no real difference between the two answers. 171 wrestling teams have already been intentionally destroyed by these dumb policies, and math and science may follow. Schlafly explains how this mindless feminist mentality, based on a striking misunderstanding of the differences between men and women, can have a devastating effect on universities and beyond. There is of course not a shred of evidence of any discrimination, she writes: men are simply more interested in competitive sports, math, and science. Moreover, when it comes to muscle growth, testosterone is the key to success. After having explained how unreasonable the feminist approach is, she says that the Bush administration is ignoring one example of increasing gender disparity that can indeed have bad consequences: a decreasing percentage of male schoolteachers. With all my respect for George W. Bush, let me offer an obvious answer to Schlafly's basic question. Yes, Bush is a feminist and he in fact does think that women are brighter in many respects including science and math - and most discussions he has with the First Lady must reinforce this belief. ;-)

Bert Schroer vs. path integral. Prof. Bert Schroer has publicized his essay in which he argues that there is something wrong with path integrals and that they should be universally replaced by algebraic methods. Because half of the Internet is going to decide that he must be right, at least in some sense, let me also post the correct answers to his doubts - which include the trivial assertion that his statements are nonsensical. The first couple of pages are filled with content-free bitterness about path integrals and an unsubstantiated promotion of algebraic quantum field theory: the kind of silly unphysical whining that all of us know very well from "Not Even Wrong" and other places on the Internet. The author is upset about the "string theory caravan" that does not support "great" ideas - such as the "great" idea of Prof. Schroer himself that path integrals are bad. The first non-trivial statement appears on page 3. Prof.
Schroer essentially claims that path integrals give a wrong result if you use them to describe a spinning top. The critical sentence is the following: • The paradoxical situation consists in the fact that although the higher fluctuation terms (higher perturbations) are nonvanishing, one must ignore them in order to arrive at the rigorous result. Wow. The path integral fails at higher orders, he says. Of course, this statement is complete nonsense. Path integrals are better, not worse, for computing loop effects, especially if one has to deal with non-Abelian gauge symmetries. By introducing the Faddeev-Popov ghosts, the best formalism to calculate higher-order effects in such theories may be developed. Moreover, the path integral is also a superior approach for obtaining non-perturbative corrections such as instanton corrections. Path integrals also make the Lorentz symmetry of quantum field theories manifest, and they have other advantages, too.

Thursday, April 13, 2006

Google calendar. A new service by Google is Google Calendar. You need to have a Google account - for example a Gmail account. With Gmail, you may also incorporate the calendar - with the list of things you have to do - into the corner of your Gmail inbox. The interface is based on a traditionally fresh, Google-like, no-nonsense environment. See Calendar help for more details. Incidentally, you will also be able to make Google searches using your voice and telephone.

Flux compactifications of M-theory and F-theory. Today we had an oral exam, some minor progress in the calculations of the black hole corrections, and I attended Cumrun Vafa's class, which is always a good opportunity to refresh one's knowledge of various things. He started with the Dijkgraaf-Vafa correspondence and finished with flux compactifications. I will write comments about Dijkgraaf-Vafa later, but let me start with the following: Flux compactifications. As the Becker sisters explained, the compactification of M-theory on Calabi-Yau four-folds (which are eight-real-dimensional, which leaves three large spacetime dimensions) actually requires nonzero values of the four-form field strength G4. This is because the eleven-dimensional action contains terms of the form • S = int C3 /\ ( G4 /\ G4 - I8(R) ) + ... The first term is a tree-level Chern-Simons term needed for the classical supersymmetry of eleven-dimensional supergravity, while the second term, depending on the Riemann tensor R, may be viewed as a one-loop correction. Note that one-loop terms are often determined independently of the UV details of physics, and M-theory is no exception.
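To fill in the step connecting these two terms to the need for flux: integrating the C3 equation of motion over the compact four-fold X yields a global tadpole-cancellation condition. In the standard normalization (quoted here from memory, so treat the factors as indicative rather than authoritative), with N_{M2} the number of space-filling M2-branes, it reads • chi(X)/24 = N_{M2} + (1/2) int_X (G4/2pi) /\ (G4/2pi), where the Euler characteristic chi(X) arises from the integral of I8(R). Whenever chi(X)/24 is nonzero and cannot be saturated by M2-branes alone, a nonvanishing G4 flux is mandatory - which is the Becker-sisters statement above.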
Wednesday, April 12, 2006

Richard Lindzen: Climate of Fear. Prof. Richard Lindzen of MIT is one of the world's most respected climate scientists or, if you at least allow me to use the alarmists' words, he is considered by them to be the world's most respectable climate skeptic. See also: Lindzen 2008: Climate Science: Is it currently designed to answer questions? Today, in his Wall Street Journal article, he describes not only the reasons why the public should not believe the statements that the carbon dioxide emissions are bringing us closer to the armageddon, but especially the intense intimidation campaign that the scientists who reach politically incorrect conclusions have to face. One of the topics that Lindzen discusses is the double standards in the journals, where non-alarmist articles about the climate are commonly refused without review as being without interest. I have already learned how it works, which is why I recommended Steve McIntyre not to spend too much time trying to get his articles published in "mainstream" journals. But the main focus of Lindzen's discussion seems to be funding. Funding is something that is cut for all of those who point out the obvious - namely that science offers no justification for bizarre policies such as the Kyoto protocol.

Harvard energy initiative. On Monday, we had a faculty lunch meeting at the Faculty Club and one of the topics was the so-called "Harvard energy initiative". The short story is that a large amount of money was given to something described by these three words - and up to 10 new faculty positions are expected to be created - except that no one knows what the "Harvard energy initiative" means and which people should be hired.

Tuesday, April 11, 2006

Microsoft: competition for Google Scholar. Everyone knows a search engine that is used for more than 50 percent of the searches in the world. Many of us find another service comparably priceless: Google Scholar. That's a place where you can search through the full text of scientific articles in all fields you can imagine, and get the results sorted according to relevance, a criterion that includes the number of citations. In the 1980s, IBM used to be a very important company in the computer industry, but Microsoft took over. Is Google going to make Microsoft obsolete in a similar way? What will be the result of the Microsoft vs. Google competition? Well, the guys at Microsoft seem to be smarter than those at IBM 20 years ago and they don't want to give up. So Microsoft has released its own counterparts of Google's search services, and the counterpart of Google Scholar is Windows Live Academic. It became available tonight before 9 p.m. Eastern time - but the website so far fails to give me any scholarly results. Instead, I get the standard search results. Also, the arXiv seems to be absent from their list of journals - the only journal with "arxi" in it is "Rethinking Marxism".

Monday, April 10, 2006

Elizabeth Lada: stars born in clusters. Elizabeth Lada from the University of Florida is an astronomer who is most famous for defending the statement that most stars are born in clusters. This statement has brought closer together two communities that studied star formation - those who only wanted the overall rate and those who investigated individual cases microscopically. It was interesting to see a nice colloquium from an adjacent field - a field whose conclusions are slightly less theoretical, quantitative, universal, and principled than ours, but one that can offer nicer pictures. Some of the main messages of the talk were the following:

CDF excitement - press conference. This is how good P.R. looks at Fermilab. ;-) Or is it more than good P.R.? From: June Matthews. We've received advance word on some exciting new results from the CDF experiment at Fermilab, where Christoph Paus heads the MIT effort. This is the wording: Fermilab will hold a press conference at 4:00 pm today (Central Time) with details on the precision measurement of extremely rapid transitions between matter and antimatter. It has been known for 50 years that very special species of subatomic particles can make spontaneous transitions between matter and antimatter.
In this exciting new result, CDF physicists measured the rate of these matter-antimatter transitions for the B sub s meson, which consists of the heavy bottom quark bound by the strong nuclear interaction to a strange antiquark - a staggering rate that challenges the imagination: 200 billion times per second. There will be a live feed of the press conference available on the web. Marc Kastner. P.S. You can click the envelope icon two lines below this one to send the announcement about the press conference to all your friends who might be interested. The online press conference started at 5:00 p.m. Eastern Daylight Saving Time, or 2:00 p.m. Californian time, and ended one hour later. Main content: they determined, at the 99.5% confidence level, that they have seen oscillations between matter and antimatter with a frequency of 17.33 (plus or minus something) inverse picoseconds. The results are consistent with the Standard Model and place new upper bounds on flavor violation from new physics such as supersymmetry.

Harper under pressure: scrap Kyoto. While many other politicians experience pressure from activists, Stephen Harper, the prime minister of Canada, is under pressure from scientists who urge him to scrap the "pointless" Kyoto protocol. See the full letter and the signatories here. They explain that "global climate change" is an emerging science and that the Kyoto treaty would not have been negotiated in the 1990s if the parties had known what we know today. The cliche "climate change is real" is a meaningless phrase used by activists to fool the public into believing that a climate catastrophe is looming. Climate is changing all the time because of natural reasons, and the human impact still can't be disentangled from the natural noise. Meanwhile, Rona Ambrose has reviewed the situation and concluded that the targets can't be met by Canada: it's impossible. The Canadian economy has recently been doing very well, which is of course very bad for similar anti-growth policies: the emissions are growing while they should be shrinking according to the protocol. I think that Canada itself should also honestly admit that if we hypothetically face warming, Canada will benefit from it. The goal should be to isolate the countries that are supposed to be the "losers" of the hypothetical warming and help them. And also to help those countries that face problems unrelated to warming, which is far more often the case. ;-) But help them not with the crazy egalitarian policies according to which the whole planet must be heated up or cooled down simultaneously; help them with rational, focused, meaningful projects. The U.N. should allow Canada to change what its contributions will be, and the whole U.N. framework for climate change should be rebuilt on new principles. See also: Kyoto hopes vanish. Rona Ambrose now intends to challenge the international focus on setting emission targets. I am sure she has enough intelligence and charm to do important things. Incidentally, the last link explains some proposed biomass projects that could actually make at least some sense, but even these things should be studied and planned rationally. Prof. Bob Carter, a paleoclimate geologist, explains in the London Telegraph that the main problem with global warming is that it stopped in 1998. Meanwhile, Al Gore has admitted that global warming is no longer a scientific or political issue: it is a moral or a religious issue, if you will, and Al Gore is a prophet.
Figure 1: The picture from the Boston Globe shows what the alarmists consider "balanced journalism" and fair reporting about the climate.

John Brodie - sad story. Today, the Rutland Herald offers a very sad story about John Brodie, whom many people, not only at Princeton University, Stanford University, and the Perimeter Institute, knew pretty well. John suffered from bipolar mental illness - the same disorder that Mary Winkler has been treated for - and jumped into a cold river on January 28th, 2006. Technically, his most well-known paper was his work with Amihay Hanany about brane boxes, but the paper he co-authored that one can't forget is the one with Bernevig, Susskind, and Toumbas about the construction of the quantum Hall effect from D-branes. Via Not Even Wrong.

Sunday, April 09, 2006

Readying a massive (nuclear) strike on Iran. Update: Iran claims to have shot down an unmanned airplane from Iraq on Sunday. In the Czech Republic, it is the #1 news item at major servers, but no one seems to care in the U.S. According to the April 17th issue of The New Yorker and its investigative journalist Seymour Hersh, the White House is finalizing plans for a major air attack against selected targets in Iran. The situation has developed quite a bit during the last year. The theory behind these plans is that an attack is the only method to stop Mahmoud Ahmadinejad - a modern potential counterpart of Adolf Hitler, as the White House officials describe him in private discussions - from developing nuclear weapons and using them against Israel and, with the help of terrorists, against the whole civilized world. The attacks are meant to humiliate the Iranian religious government and to make the people overthrow it. I personally don't believe that the bombing would encourage Iranians to follow America. I did not believe similar idealistic predictions in Iraq either. The support for Hussein was clearly significant. Environmentalists who like sustainability should like the bombing campaign, because the "coercion" attacks will be "sustained". Another theory is that Ahmadinejad sees the West as "wimps who will cave in". Some sources argue that it is a public misconception that Bush has been mostly thinking about Iraq since 9/11 - the main and more ambitious ideas were always about Iran. Even Quantoken agrees that the real danger is Iran. The White House is secretly communicating with the members of the U.S. Senate and no one really objects to the idea of a war. There is no international opposition either, because no one really likes the regime of Iran, Hersh argues. Even ElBaradei agrees that the Iranian leaders are 100% certified nutcases. On the other hand, no other country - not even Great Britain - is going to actively support nuking. Some plans are already underway. Some of the Iranian nuclear facilities are deep underground (25 meters) and the Pentagon believes that they will require a bunker-buster tactical nuclear weapon such as the B61-11, the "earth penetrating" thermonuclear daughter of the old B61-7 gravity bomb, developed in 1997 under Clinton. The energy from this key nuclear product is able to penetrate up to 100 meters of soil (not rock) and the bomb explodes 6 meters beneath the surface. One of the main targets is Natanz, 300 kilometers south of Tehran. This particular plan is not technologically new, because the U.S. was thinking about bombing a similar facility near Moscow in the early 1980s.
Rather detailed plans already exist for how big a part of Iran's air force has to be eliminated for the fix and what to do with the mess that would probably emerge in Iran and Southern Iraq. Controversy exists about how many places would have to be bombed and whether the nuclear option is useful. I definitely recommend you to read the article. What about the Reference Frame? I am always afraid of a war - and I am always repelled by its obvious negative consequences. On the other hand, there seems to be a rather clear danger in the air (although I can't rigorously prove it), and if this operation became necessary and remained a job for the air forces and avoided ground battles, I would be moderately optimistic, because all such operations in the past were rather successful. Incidentally, the U.S. troops in Iran will mark the facilities by lasers to increase the accuracy of the operation and reduce the civilian casualties. Nuclear weapons have been silent for 60 years, but they're not really a hot new technology. At the high school, during the first Gulf War, our classes were often cancelled and we were watching. Most of the boys in our class were truly impressed. Whenever the U.S. technological edge is being displayed, one can always see the natural authority of America, especially if a maximum effort is made to minimize civilian casualties. The Reference Frame recommends all readers in Iran - and everyone they know - to move at least 50 kilometers away from the neighborhood of the potential targets, especially Natanz (plus the other targets enumerated in Wikipedia in the link at the bottom). We also recommend all citizens of Iran to start a revolution and establish democracy and freedom in Iran. This blog can't guarantee that the story from The New Yorker is accurate, but there are very good reasons to think that it might be true. Hersh, the author of the article, won a Pulitzer prize in 1970 for uncovering a massacre committed in Vietnam by U.S. troops, and he was also the reporter who broke the Abu Ghraib prison scandal. That's a pretty good record, I think. He likes to expose things that look anti-Bush, but whether his new article is really anti-Bush remains to be seen. The contingency planning is obviously what many people in the Pentagon are paid for, but I can tell you neither how many decisions have actually been made, nor whether such a thing could work out as smoothly as some successful operations in the past. Other sources: The hypothetical bombing poses many dilemmas - moral, strategic, tactical, psychological, economic - and question marks, but the psychological pressure could not be that bad. The Reference Frame also recommends Mahmoud Ahmadinejad to establish democracy, give up his nuclear ambitions, and resign. Such a reasonable decision could hypothetically save millions of lives.
Saturday, June 30, 2007

Progress in the understanding of baryon masses. In the previous posting I explained the progress made in the understanding of mesonic masses, basically due to the realization of how the Chern-Simons coupling k determines the Kähler coupling strength and the p-adic temperature, discussed in a still earlier posting. Today I took a more precise look at the baryonic masses. In the case of scalar mesons, quarks give the dominating contribution to the meson mass. This is not true for spin-1/2 baryons, and the dominating contribution must have some other origin. The identification of this contribution has remained a challenge for years. The realization of a simple numerical coincidence related to the p-adic mass squared unit led to an identification of this contribution in terms of states created by purely bosonic generators of the super-canonical algebra, having as a space-time correlate CP2 type vacuum extremals topologically condensed at the k=107 space-time sheet (or having this space-time sheet as their field body). Proton and neutron masses are predicted with .5 per cent accuracy and the Δ-N mass splitting with .2 per cent accuracy. A further outcome is a possible solution to the spin puzzle of the proton.

1. Does the k=107 hadronic space-time sheet give the large contribution to the baryon mass? In the sigma model for baryons, the dominating contribution to the mass of the baryon results as a vacuum expectation value of the scalar field, and mesons are analogous to Goldstone bosons whose masses are basically due to the masses of light quarks. This would suggest that the k=107 gluonic/hadronic space-time sheet gives a large contribution to the mass squared of the baryon. p-Adic thermodynamics allows one to expect that the contribution to the mass squared is in good approximation of the form Δm^2 = n m^2(107), where m^2(107) is the minimum possible p-adic mass squared and n a positive integer. One has m(107) = 2^10 m(127) = 2^10 m_e 5^{-1/2} = 233.55 MeV for Y_e = 0, the value favored by the top quark mass. 1. n=11 predicts (m(n), m(p)) = (944.5, 939.3) MeV; the actual masses are (m(n), m(p)) = (939.6, 938.3) MeV. Coulombic repulsion between u quarks could reduce the p-n difference to a realistic value. 2. The Λ-n mass splitting would be 184.7 MeV for k(s) = 111, to be compared with the real difference, which is 176.0 MeV. Note however that the color magnetic spin-spin splitting requires that the ground state mass squared be larger than 11 m_0^2(107).

2. What is responsible for the large ground state mass of the baryon? The observations made above do not leave much room for alternative models. The basic problem is the identification of the large contribution to the mass squared coming from the hadronic space-time sheet with k=107. This contribution could have the energy of color fields as a space-time correlate. 1. The assignment of the energy to the vacuum expectation value of a sigma boson does not look very promising, since the very existence of the sigma boson is questionable and it does not relate naturally to classical color gauge fields. More generally, since no gauge symmetry breaking is involved, the counterpart of the Higgs mechanism as the development of a coherent state of scalar bosons does not look like a plausible idea. 2. One can however consider the possibility of a Bose-Einstein condensate, or of a more general many-particle state, of massive bosons possibly carrying color quantum numbers. A many-boson state of exotic bosons at the k=107 space-time sheet, having net mass squared m^2 = n m_0^2(107) with n = sum_i n_i, could explain the baryonic ground state mass. Note that the possible values of n_i are predicted by p-adic thermodynamics with T_p = 1.
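As a sanity check on the numbers in this section, a small sketch (my illustration only; note that sqrt(n) m(107) is just the k=107 sheet's contribution expressed in mass units - the quark contributions must still be added at the level of mass squared):

```python
from math import sqrt

m_e = 0.511                 # electron mass in MeV
m_127 = m_e / sqrt(5)       # m(127) = m_e * 5^(-1/2), as used in the text
m_107 = 2**10 * m_127       # p-adic scaling 2^((127-107)/2) = 2^10

print(m_107)                # ~234.0 MeV, vs. the quoted 233.55 MeV for Y_e = 0
print(sqrt(11) * m_107)     # ~776 MeV: the n = 11 sheet contribution in mass units
```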
3. Glueballs cannot be in question. Glueballs (see this and this) define the first candidate for the exotic boson in question. There are however several objections against this idea. 1. QCD predicts that the lightest glueballs, consisting of two gluons, have J^{PC} = 0^{++} and 2^{++} and a mass of about 1650 MeV. If one takes QCD seriously, one must exclude this option. One can also argue that light glueballs should have been observed long ago, and wonder why their Bose-Einstein condensate is not associated with mesons. 2. There are also theoretical objections in the TGD framework. • Can one really apply p-adic thermodynamics to bound states of gluons? Even if this is possible, can one assume the p-adic temperature T_p = 1 for them if it is quite generally T_p = 1/26 for gauge bosons consisting of fermion-antifermion pairs (see this)? • Baryons are fermions, and one can argue that they must correspond to a single space-time sheet rather than the pair of positive and negative energy space-time sheets required by a glueball Bose-Einstein condensate realized as wormhole contacts connecting these space-time sheets.

4. Do exotic colored bosons give rise to the ground state mass of the baryon? The objections listed above lead to an identification of the bosons responsible for the ground state mass which looks much more promising. 1. TGD predicts exotic bosons which can be regarded as super-conformal partners of fermions, created by the purely bosonic part of the super-canonical algebra, whose generators belong to representations of the color group and the 3-D rotation group but have vanishing electro-weak quantum numbers. Their spin is analogous to orbital angular momentum, whereas the spin of ordinary gauge bosons reduces to fermionic spin. Thus an additional bonus is a possible solution to the spin puzzle of the proton. 2. Exotic bosons are single-sheeted structures, meaning that they correspond to a single wormhole throat associated with a CP2 type vacuum extremal, and would thus be absent in the meson sector, as required. T_p = 1 would characterize these bosons by super-conformal symmetry. The only contribution to the mass would come from the genus, and the g=0 state would be massless, so these bosons cannot condense on the ground state unless they suffer topological mixing with higher genera and become massive in this manner. The g=1 glueball would have mass squared 9 m_0^2(k), which is smaller than 11 m_0^2. For a ground state containing two g=1 exotic bosons, one would have ground state mass squared 18 m_0^2, corresponding to (m(n), m(p)) = (1160.8, 1155.6) MeV. Electromagnetic Coulomb interaction energy can reduce the p-n mass splitting to a realistic value. 3. Color magnetic spin-spin splitting for baryons provides a test of this hypothesis. The splitting of the conformal weight is, by group theoretic arguments, of the same general form as that of the color magnetic energy and is given by (m^2(N), m^2(Δ)) = (18 m_0^2 - X, 18 m_0^2 + X) in the absence of topological mixing. n=11 for the nucleon mass implies X=7 (in units of m_0^2, since 18-7=11) and m(Δ) = 1338 MeV, to be compared with the actual mass m(Δ) = 1232 MeV. The prediction is too large by about 8.6 per cent. If one allows topological mixing, one can have m^2 = 8 m_0^2 instead of 9 m_0^2. This gives m(Δ) = 1240 MeV, so the error is only .6 per cent. The mass of the topologically mixed exotic boson would be 660.6 MeV, which equals m_0(104). Amusingly, k=104 happens to correspond to the inverse of α_K for gauge bosons.
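The last numerical remark is easy to verify, and the agreement is in fact exact, because sqrt(8) = 2^{3/2} is precisely the p-adic scaling factor between k=107 and k=104. A minimal check (my illustration):

```python
from math import sqrt

m0_107 = 233.55                         # p-adic mass unit for k=107, in MeV
m_mixed = sqrt(8) * m0_107              # boson with mass squared 8 m_0^2(107)
m0_104 = 2**((107 - 104) / 2) * m0_107  # p-adic unit scaled from k=107 to k=104

print(m_mixed)   # ~660.6 MeV, the quoted mass of the mixed exotic boson
print(m0_104)    # ~660.6 MeV as well, i.e. m_mixed = m_0(104) identically
```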
4. In the simplest situation, a two-particle state of these exotic bosons could be responsible for the ground state mass of the baryon. Also the baryonic spin puzzle, caused by the fact that quarks give only a small contribution to the spin of baryons, could find a natural solution, since these bosons could give to the spin of the baryon an angular-momentum-like contribution having nothing to do with the angular momentum of quarks. 5. The large value of the Kähler coupling strength, α_K = 1/4, would characterize the hadronic space-time sheet, as opposed to the value α_K = 1/104 assignable to the gauge boson space-time sheets. This would make the color gauge coupling characterizing their interactions strong. This would be a precise articulation of what the generation of the hadronic space-time sheet in the phase transition to a non-perturbative phase of QCD really means. 6. The identification would also lead to a physical interpretation of the super(-conformal) symmetries. It must be emphasized that the super-canonical generators do not create ordinary fermions, so ordinary gauge bosons need not have super-conformal partners. One can of course imagine that also ordinary gauge bosons could have super-partners, obtained by assuming that one wormhole throat (or both of them) is purely bosonic. If both wormhole throats are purely bosonic, the Higgs mechanism would leave the state essentially massless unless p-adic thermal stability allows T_p = 1. Color confinement could be responsible for the stability. For super-partners carrying fermion number, the Higgs mechanism would make this kind of state massive unless the quantum numbers are those of a right-handed neutrino. 7. The importance of the result is that it becomes possible to derive general mass formulas also for the baryons of scaled-up copies of QCD, possibly associated with various Mersenne primes and Gaussian Mersennes. In particular, the mass formulas for "electro-baryons" and "muon-baryons" can be deduced (see this). For more details about the p-adic mass calculations of elementary particle masses, see the chapter "p-Adic mass calculations: elementary particle masses" of the book "p-Adic Length Scale Hierarchy and Dark Matter Hierarchy". The chapter "p-Adic mass calculations: hadron masses" describes the model for hadronic masses. The chapter "p-Adic mass calculations: New Physics" explains the new view about the Kähler coupling strength.

Friday, June 29, 2007

The model for hadron masses revisited. The blog of Tommaso Dorigo contains two postings which served as a partial stimulus to reconsider the model of hadron masses. The first posting, "The top quark mass measured from its production rate", tells about a new high-precision determination of the top quark mass, reducing its value to the most probable value of 169.1 GeV in the allowed interval 164.7-175.5 GeV. The second posting, "Rumsfeld hadrons", tells about the "crackpottish" finding that the mass of the Bc meson is, in an excellent approximation, the average of the masses of the Ψ and Υ mesons. The TGD based model for hadron masses allows one to understand this finding. 1. Motivations. There were several motivations for looking again at the p-adic mass calculations for quarks and hadrons. 1.
If one takes seriously the prediction that the p-adic temperature is T_p = 1 for fermions and T_p = 1/26 for gauge bosons, as suggested by the considerations of the blog posting (see also this), and accepts the picture of fermions as topologically condensed CP2 type vacuum extremals with a single light-like wormhole throat, and of gauge bosons and the Higgs boson as wormhole contacts with two light-like wormhole throats connecting space-time sheets with opposite time orientation and energy, one is led to the conclusion that although fermions can couple to the Higgs, the Higgs vacuum expectation value must vanish for fermions. One must check whether it is indeed possible to understand the fermion masses from p-adic thermodynamics without the Higgs contribution. This turns out to be the case. This also means that the coupling of fermions to the Higgs can be arbitrarily small, which could explain why the Higgs has not been detected. 2. There have been some problems in understanding the top quark mass in the TGD framework. Depending on the selection of the p-adic prime p ≈ 2^k characterizing the top quark, the mass is too high or too low by about 15-20 per cent. This problem had a trivial resolution: it was due to a calculational error, namely the inclusion of only the topological contribution depending on the genus of the partonic 2-surface. The positive surprise was that the maximal value of the CP2 mass, corresponding to the vanishing of the second order correction to the electron mass, together with the maximal value of the second order contribution to the top mass, predicts exactly the recent best value 169.1 GeV of the top mass. This in turn allows one to clean up the uncertainties in the model of hadron masses.

2. The model for hadron masses. The basic assumptions in the model of hadron masses are the following. 1. Quarks are characterized by two kinds of masses: current quark masses assignable to free quarks and constituent quark masses assignable to bound state quarks (see this). This can be understood if the integer k_q characterizing the p-adic length scale of the quark is different for free quarks and bound quarks, so that bound state quarks are much heavier than free quarks. A further generalization is that the value of k can depend on the hadron. This leads to an elegant model explaining meson and baryon masses within a few per cent. The model becomes more precise with the fixing of the CP2 mass scale from the top mass (note that the top quark is always free, since toponium does not exist). This predicts several copies of various quarks, and there is evidence for three copies of the top corresponding to the values k_t = 95, 94, 93. Also the current quarks u and d can correspond to several values of k. 2. The lowest-order mesonic mass formula is extremely simple. If the quarks are characterized by the same p-adic prime, their conformal weights and thus their mass squared values are additive: m^2(B) = m^2(q1) + m^2(q2). If the p-adic primes labelling the quarks are different, the masses are additive: m(B) = m(q1) + m(q2). This formula generalizes in an obvious manner to the case of baryons. Thus, apart from effects like the color magnetic spin-spin splitting - describable p-adically for diagonal mesons and in terms of color magnetic interaction energy in the case of non-diagonal mesons - the basic effect of binding is a modification of the integer k labelling the quark. 3. The formula reproduces the masses of mesons and also of baryons with a few per cent accuracy. There are however some exceptions. 1. The mass of the η' meson becomes slightly too large. In the case of η', a negative color binding conformal weight can reduce the mass. Also mixing with a two-gluon gluonium can save the situation. 2.
Some light non-diagonal mesons, such as the K mesons, also have a slightly too large mass. In this case a negative color binding energy can save the situation.

3. An example of how the mesonic mass formulas work. The mass formulas allow one to understand why the "crackpottish" mass formula for Bc holds true. The mass of the Bc meson (a bound state of the b and c quarks, one of them an antiquark) has been measured with precision by CDF (see the blog posting by Tommaso Dorigo) and is found to be M(Bc) = 6276.5 +/- 4.8 MeV. Dorigo notices that there is a strange "crackpottian" coincidence involved. Take the masses of the fundamental mesons made of c anti-c (Ψ) and b anti-b (Υ), add them, and divide by two. The resulting mass turns out to be 6278.6 MeV, less than one part per mille away from the Bc mass! The general p-adic mass formulas and the dependence of k_q on the hadron explain the coincidence. The mass of Bc is given by m(Bc) = m(c, k_c(Bc)) + m(b, k_b(Bc)), whereas the masses of Ψ and Υ are given by m(Ψ) = 2^{1/2} m(c, k_Ψ) and m(Υ) = 2^{1/2} m(b, k_Υ). Assuming k_c(Bc) = k_c(Ψ) and k_b(Bc) = k_b(Υ) would give m(Bc) = 2^{-1/2} [m(Ψ) + m(Υ)], which is by a factor 2^{1/2} higher than the prediction of the "crackpot" formula. Taking k_c(Bc) = k_c(Ψ) + 1 and k_b(Bc) = k_b(Υ) + 1, however, gives the correct result, since increasing k by one unit scales the quark mass down by 2^{-1/2}. As such the formula makes sense, but the one part per mille accuracy must be an accident in the TGD framework. 1. The predictions for the Ψ and Υ masses are too small by 2 resp. 5 per cent in the model assuming no effective scaling down of the CP2 mass. 2. The formula makes sense if the quarks are effectively free inside hadrons and the only effect of the binding is the change of the mass scale of the quark. This makes sense if the contributions of the color interactions, in particular the color magnetic spin-spin splitting, to the heavy meson masses are small enough. Ψ and η_c have spins 1 and 0 and their masses differ by 3.7 per cent (m(η_c) = 2980 MeV and m(Ψ) = 3096.9 MeV), so that the color magnetic spin-spin splitting is measured using per cent as the natural unit.
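The arithmetic of the "crackpot" average is easy to reproduce; a minimal check (my illustration; the Υ mass of 9460.3 MeV is the standard value, implied but not actually quoted in the text above):

```python
m_psi = 3096.9      # J/psi mass in MeV, from the text
m_upsilon = 9460.3  # Upsilon mass in MeV (standard value, assumed here)
m_bc = 6276.5       # measured Bc mass in MeV, from CDF

average = (m_psi + m_upsilon) / 2
print(average)                        # 6278.6 MeV, as Dorigo noticed
print(abs(average - m_bc) / m_bc)     # ~3e-4, i.e. less than one part per mille
```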
These black holes have a huge degeneracy of states, and the idea is that this degeneracy could correspond to the primary fields of the QFT defined by the Chern-Simons action.

3. The Virasoro algebra decomposes into a direct sum of left and right algebras corresponding to left and right movers. If the gravitational Chern-Simons action is assumed to vanish, one has left-right symmetry with k_L = k_R = k and c = 24k. Holomorphic factorization of the partition function into a product, occurring for c_L = c_R = 24k, makes the model calculable, and the partition function can be expressed as a power series using the well known modular invariant J-function J(q) = 1/q + 196884q + ... appearing in number theory and associated with the Monster group. This allows one to identify the black hole states in terms of primary fields belonging to representations of the Monster group.

There is a critical discussion by Jacques Distler criticizing the holomorphic factorization assumption as unphysical at the classical limit where k is large. To me the condition k_L = k_R looks very natural. Distler also argues that this kind of approach probably fails if black holes are actually absent, since the condition that there is a gap 0 < h < k+1 in the spectrum of states is too strong, so that there would be no pure 3-D quantum gravity in the proposed sense. For k = 1 there exists precisely one CFT for which the Kac-Moody symmetry is absent, so that the absence of gravitons is realized in this sense.

2. Objections

Witten also considers some objections against the equivalence of 3-D quantum gravity with Chern-Simons gauge theory.

1. Witten makes a comment about the invertibility of the vielbein as a condition for perturbative well-definedness, since in the gauge theory formulation the gauge potential (ω, e/l) = (0,0) (l is the length scale defined by the cosmological constant) is legitimate, whereas the perturbation theory is performed around a solution for which e is invertible (the 3-metric is non-degenerate). Exactly this kind of degeneracy would occur in TGD due to the effective metric 2-dimensionality of lightlike 3-surfaces.

2. Witten also notices that in the quantum gravity description using a path integral one expects a sum over all topologies, whereas in the gauge theory description no such sum is necessary. In the TGD framework, where the generalization of the S-matrix defines time-like entanglement coefficients between the positive and negative energy parts of the zero energy state, there is no need to sum over intermediate topologies.

Witten also comments on the case with vanishing cosmological constant, which is of special interest from the point of view of TGD. The objection is that since no particles exist there is no S-matrix. The situation changes in the TGD framework, where lightlike 3-surfaces define particles instead of 2-geometries and the 3-D states define generalized S-matrices, or briefly Matrices, as "square roots" of density matrices. The formula for the Kac-Moody central charge in the absence of a gravitational Chern-Simons term is of the form k = l/(16G), with l = sqrt(-1/λ). For λ → 0, l and thus k become infinite and the theory would become classical. Also c = 24k would diverge in this limit, meaning classicality in the Virasoro degrees of freedom. The number of "black hole states" would become infinite. Note however that both the curvature scalar and the gravitational Chern-Simons term vanish in this limit if the metric corresponds to that of a lightlike surface.
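As a minimal numerical sketch of this limit (my own illustration, not from Witten's paper), assuming only the quoted formulas k = l/(16G) with l = sqrt(-1/λ) and c = 24k, and toy units G = 1:

```python
import math

def chern_simons_k(lam, G=1.0):
    """k = l/(16*G) with l = sqrt(-1/lambda), defined for lambda < 0."""
    l = math.sqrt(-1.0 / lam)
    return l / (16.0 * G)

# k and the central charge c = 24k diverge as lambda -> 0 from below,
# which is the classical limit discussed above.
for lam in (-1.0, -1e-2, -1e-4, -1e-6):
    k = chern_simons_k(lam)
    print(f"lambda = {lam:9.0e}   k = {k:12.2f}   c = {24 * k:14.2f}")
```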
3. Comparison with TGD

It is interesting to look at the situation from the TGD point of view, since usually this kind of comparison brings new insights to TGD.

1. I have already told about the strong analogies between this theory and quantum TGD as an almost topological QFT for lightlike 3-surfaces as basic objects with extended conformal invariance (see this and this). There are of course also deep differences. In Witten's theory the 3-D space-time is not a surface. In TGD one has light-like 3-surfaces, which correspond to solutions of Einstein's equations with degenerate metric for vanishing cosmological constant. Thus light-likeness, a property of the 3-surface, forces vacuum Einstein equations, a purely geometric property. The dynamical variables in the TGD framework correspond to the second quantized free induced spinor field and to the deformations of the lightlike 3-surface.

2. In the case of a degenerate metric with vanishing cosmological constant, the vielbein group would reduce to SO(2) and the extension by the vielbein would reduce to a trivial extension. What makes this interesting is that the induced Kähler form corresponds to U(1), which strictly speaking does not define a gauge symmetry but a spin glass type dynamical symmetry acting as gauge transformations only for vacuum extremals. It is just the classical gravitation which breaks the gauge symmetry character of U(1). This picture conforms nicely with the quantum criticality of the TGD Universe.

3. The partition function can be determined for hyperelliptic 2-surfaces of any genus (all g < 3 surfaces are hyperelliptic). In the TGD framework hyper-elliptic surfaces should correspond to fermions and to gauge bosons identified as fermion-antifermion bound states; the hyperellipticity implied by the generalization of the imbedding space concept and quantum classical correspondence implies the vanishing of the elementary particle vacuum functionals for g > 2, so that only three fermion and gauge boson families are possible.

4. An interesting question is whether one should add to TGD a 3-D gravitational action and the corresponding Chern-Simons term. The curvature scalar part vanishes identically by the metric 2-dimensionality. For the same reason also the Chern-Simons term should vanish, at least in a suitable gauge. If not, the gravitational Chern-Simons term would remove the vacuum degeneracy crucial for TGD and would also spoil the almost topological QFT property of quantum TGD.

4. Does the quantization of Kähler coupling strength reduce to the quantization of Chern-Simons coupling?

Usually this kind of comparison has yielded new insights also into TGD. This was the case also now. The conjectured quantization of the coupling constant guaranteeing so-called holomorphic factorization is implied by the integer valuedness of the Chern-Simons coupling strength k. As Witten explains, this follows from the quantization of the first Chern-Simons class for closed 4-manifolds plus the requirement that the phase defined by the Chern-Simons action equals 1 for a boundaryless 4-manifold obtained by gluing together two 4-manifolds along their boundaries.

The quantization argument seems to generalize to the case of TGD. What is clear is that this quantization should closely relate to the quantization of the Kähler coupling strength appearing in the 4-D Kähler action defining the Kähler function for the world of classical worlds and conjectured to result as a Dirac determinant. The argument leading to an extremely simple formula for the Kähler coupling strength, α_K = 1/4k, and allowing one to identify the p-adic temperature as T_p = 1/k with k = 26 for bosons and k = 1 for fermions, is discussed in a separate posting.
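To make the quantization argument concrete, here is a toy sketch of the standard large gauge transformation argument (my own illustration, not specific to TGD): a large gauge transformation with integer winding number w shifts the Chern-Simons action by 2πkw, so the phase exp(iS_CS) is single valued only for integer k.

```python
import cmath

def phase_factor(k, winding):
    # Change of exp(i*S_CS) when a large gauge transformation shifts
    # the Chern-Simons action by 2*pi*k*winding.
    return cmath.exp(2j * cmath.pi * k * winding)

for k in (1, 26, 26.5):
    print(k, [phase_factor(k, w) for w in (1, 2)])
# Integer k gives 1 (up to rounding) for every winding, so exp(i*S_CS) is
# well defined; k = 26.5 picks up a winding-dependent sign.
```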
The Kähler coupling strength associated with the Kähler action (Maxwell action for the induced Kähler form) is the only coupling constant parameter in quantum TGD, and its value (or values) is in principle fixed by the condition of quantum criticality, since the Kähler coupling strength is completely analogous to a critical temperature. Quantum TGD at the parton level reduces to an almost topological QFT for light-like 3-surfaces. This almost TQFT involves the Abelian Chern-Simons action for the induced Kähler form. This raises the question whether the integer valued quantization of the Chern-Simons coupling k could predict the values of the Kähler coupling strength.

I considered this kind of possibility already more than 15 years ago, but only the reading of the introduction of the recent paper by Witten about his new approach to 3-D quantum gravity led to the discovery of a childishly simple argument that the inverse of the Kähler coupling strength could indeed be proportional to the integer valued Chern-Simons coupling k: 1/α_K = 4k if all factors are correct. k = 26 is forced by the comparison with some physical input. Also the p-adic temperature could be identified as T_p = 1/k.

1. Quantization of Chern-Simons coupling strength

For the Chern-Simons action the quantization of the coupling constant guaranteeing so-called holomorphic factorization is implied by the integer valuedness of the Chern-Simons coupling strength k. As Witten explains, this follows from the quantization of the first Chern-Simons class for closed 4-manifolds plus the requirement that the phase defined by the Chern-Simons action equals 1 for a boundaryless 4-manifold obtained by gluing together two 4-manifolds along their boundaries. As explained by Witten in his paper, one can also consider an "anyonic" situation in which k has spectrum Z/n^2 for an n-fold covering of the gauge group, and in the dark matter sector one can consider this kind of quantization.

2. Formula for Kähler coupling strength

The quantization argument for k seems to generalize to the case of TGD. What is clear is that this quantization should closely relate to the quantization of the Kähler coupling strength appearing in the 4-D Kähler action defining the Kähler function for the world of classical worlds and conjectured to result as a Dirac determinant. The conjecture has been that g_K^2 has only a single value. With some physical input one can make educated guesses about this value. The connection with the quantization of the Chern-Simons coupling would however suggest a spectrum of values. This spectrum is easy to guess.

1. The U(1) counterpart of the Chern-Simons action is obtained as the analog of the "instanton" density obtained from the Maxwell action by replacing J∧*J with J∧J. This looks natural, since for the self-dual J associated with CP_2 type extremals the Maxwell action reduces to the instanton density and therefore to a Chern-Simons term. Also the interpretation as a Chern-Simons action associated with the classical SU(3) color gauge field defined by the Killing vector fields of CP_2 and having Abelian holonomy is possible. Note however that the instanton density is multiplied by the imaginary unit in the action exponential of the path integral. One should find a justification for this "Wick rotation" not changing the value of the coupling strength; this kind of justification will be proposed later.

2. The Wick rotation argument suggests the correspondence k/4π = 1/(4g_K^2) between the Chern-Simons coupling strength and the Kähler coupling strength g_K appearing in the 4-D Kähler action. This would give g_K^2 = π/k.
The spectrum of 1/α_K would be integer valued: from g_K^2 = π/k one indeed obtains α_K = g_K^2/4π = 1/4k. The result is very nice from the point of view of the number theoretic vision, since the powers of α_K appearing in perturbative expansions would be rational numbers (ironically, radiative corrections might vanish, but this might happen only for these rational values of α_K!).

3. It is interesting to compare the prediction with the experimental constraints on the value of α_K. The basic empirical input is that the electroweak U(1) coupling strength reduces to the Kähler coupling at the electron length scale (see this). This gives 1/α_K = 1/α_U(1)(M_127) ≈ 104.1867, which corresponds to k = 26.0467. k = 26 would give 1/α_K = 104: the deviation would be only 0.2 per cent and one would obtain an exact prediction for α_U(1)(M_127)! This would explain why the inverse of the fine structure constant is so near to 137 but not quite. Amusingly, k = 26 is the critical space-time dimension of the bosonic string model. Also the conjectured formula for the gravitational constant in terms of α_K and the p-adic prime p involves all primes smaller than 26 (see this).

4. Note however that if k is allowed to have values in Z/n^2, the strongest possible coupling strength is scaled to n^2/4 if hbar is not scaled: already for n = 2 the resulting perturbative expansion might fail to converge. In the scalings of hbar associated with M^4 degrees of freedom hbar however scales as 1/n^2, so that the spectrum of α_K would remain invariant.

3. Justification for Wick rotation

It is not too difficult to believe in a formula 1/α_K = qk, with q some rational number. q = 4 however requires a justification for the Wick rotation bringing the imaginary unit to the Chern-Simons action exponential, which is lacking from the Kähler function exponential. In this kind of situation one might hope that an additional symmetry might come to the rescue. The guess is that the number theoretic vision could justify this symmetry.

1. To see what this symmetry might be, consider the generalization of the Montonen-Olive duality obtained by combining the theta angle and the gauge coupling into a single complex number via the formula τ = θ/2π + i4π/g^2. What this means in the present case is that for CP_2 type vacuum extremals (see this) the Kähler action and the instanton term reduce by self-duality to the Kähler action obtained by the replacement 1/g^2 → -iτ/4π. The first duality τ → τ+1 corresponds to the periodicity of the theta angle. The second duality τ → -1/τ corresponds to the generalization of the Montonen-Olive duality α → 1/α. These dualities are definitely not symmetries of the theory in the present case.

2. Despite the failure of the dualities, it is interesting to write the formula for τ in the case of Chern-Simons theory assuming g_K^2 = π/k with k > 0 holding true for Kac-Moody representations. What one obtains is τ = 4k(1-i). The allowed values of τ are integer spaced along a line whose direction angle corresponds to the phase exp(i2π/n), n = 4. The transformations τ → τ + 4(1-i) generate a dynamical symmetry and as Lorentz transformations define a subgroup of the group E_2 leaving a light-like momentum invariant (this brings to mind quantum criticality!). One should understand what is so special about this line.

3. This formula conforms with the number theoretic vision suggesting that the allowed values of τ belong to an integer spaced lattice. Indeed, if one requires that the phases are proportional to vectors with rational components, then only phase angles associated with orthogonal triangles whose short sides have integer valued lengths m and n are possible. Add to this the condition that the phase angles correspond to roots of unity.
This leaves only m = ±n into consideration, so that one would have τ = n(1-i) with n > 0 from k > 0.

4. Notice that the theta angle 8kπ is a multiple of 2π, so that trivial strong CP breaking results and no QCD axion is needed (this if one takes seriously the equivalence of the Kähler action with the classical color YM action).

4. Is p-adicization needed and possible only in 3-D sense?

The action of a CP_2 type extremal is given as S = π/(8α_K) = kπ/2. Therefore the exponent of the Kähler action appearing in the vacuum functional would be exp(kπ), a power of e^π, which is known to be a transcendental number (Gelfond's constant); also its powers are transcendental. If one wants to p-adicize also in the 4-D sense, this raises a problem. Before considering this problem, consider first the 4-D p-adicization more generally.

1. The definition of the Kähler action and the Kähler function in the p-adic case can be obtained only by algebraic continuation from the real case, since no satisfactory definition of the p-adic definite integral exists. These difficulties are even more serious at the level of the configuration space, unless algebraic continuation allows one to reduce everything to the real context. If TGD is an integrable theory in the sense that the functional integral over 3-surfaces reduces to calculable functional integrals around the maxima of the Kähler function, one might dream of achieving the algebraic continuation of the real formulas. Note however that for lightlike 3-surfaces the restriction to a category of algebraic surfaces is essential for the re-interpretation of the real equations of the 3-surface as p-adic equations. It is far from clear whether also the preferred extremals of the Kähler action have this property.

2. Is 4-D p-adicization really needed? The extension of light-like partonic 3-surfaces to 4-D space-time surfaces brings in the classical dynamical variables necessary for quantum measurement theory. p-Adic physics defines correlates for cognition and intentionality. One can argue that these are not quantum measured in the conventional sense, so that 4-D p-adic space-time sheets would not be needed at all. The p-adic variant of the exponent of the Chern-Simons action can make sense using a finite-dimensional algebraic extension defined by q = exp(i2π/n) and restricting the allowed lightlike partonic 3-surfaces so that the exponent of the Chern-Simons form belongs to this extension of the p-adic numbers. This restriction is very natural from the point of view of the dark matter hierarchy involving extensions of p-adics by the quantum phase q.

If one remains optimistic and wants to p-adicize also in the 4-D sense, the transcendental value of the vacuum functional for CP_2 type vacuum extremals poses a problem (not the only one, since the p-adic norm of the exponent of the Kähler action can become completely unpredictable).

1. One can consider extending the p-adic numbers by introducing exp(π) and its powers and possibly also π. This would make the extension of p-adics infinite-dimensional, which does not conform with the basic ideas about cognition. Note that e^p is not a p-adic transcendental, so that the extension of p-adics by powers of e is finite-dimensional, and if the p-adics are first extended by powers of π then a further extension by exp(π) is p-dimensional.

2. A more tricky manner to overcome the problem posed by the CP_2 extremals is to notice that CP_2 type extremals are necessarily deformed and contain a hole corresponding to the lightlike 3-surface, or several of them.
This would reduce the value of the Kähler action, and one could argue that the allowed p-adic deformations are such that the exponent of the Kähler action is a p-adic number in a finite extension of p-adics. This option does not look promising.

5. Is the p-adic temperature proportional to the Kähler coupling strength?

The Kähler coupling strength would have the same spectrum as the p-adic temperature T_p, apart from a multiplicative factor. Since also g_K^2 is a temperature-like parameter, the simplest guess is the identification T_p = 1/k. Also the gauge coupling strengths are expected to be proportional to g_K^2 and thus to 1/k, apart from a factor characterizing the p-adic coupling constant evolution. That all basic parameters of the theory would have simple expressions in terms of k would be very nice from the point of view of quantum classical correspondence.

If the U(1) coupling strength at the electron length scale equals α_K = 1/104, this would give T_p = 1/26. This means that the photon, graviton, and gluons would be massless in an excellent approximation for, say, p = M_89, which characterizes the electroweak gauge bosons receiving their masses from their coupling to the Higgs boson. For fermions one has T_p = 1, so that fermionic lightlike wormhole throats would correspond to the strongest possible coupling strength α_K = 1/4, whereas gauge bosons identified as pairs of light-like wormhole throats associated with wormhole contacts would correspond to α_K = 1/104. Perhaps T_p = 1/26 is the highest p-adic temperature at which gauge boson wormhole contacts are stable against splitting into a fermion-antifermion pair. Fermions and the possible exotic bosons created by the bosonic generators of the super-canonical algebra would correspond to a single wormhole throat and could also naturally correspond to the maximal value of the p-adic temperature, since there is nothing into which they can decay. A fascinating problem is whether k = 26 defines an internally consistent conformal field theory and whether there is something very special in it. Also the thermal stability argument for gauge bosons should be checked.

What could go wrong with this picture? The different values of α_K for fermions and bosons make sense only if the 4-D space-time sheets associated with fermions and bosons can be regarded as disjoint space-time regions. Gauge bosons correspond to wormhole contacts (deformed pieces of CP_2 type extremals) connecting positive and negative energy space-time sheets, whereas fermions would correspond to deformed CP_2 type extremals glued to a single space-time sheet having either positive or negative energy. These space-time sheets should make contact only in the interaction vertices of the generalized Feynman diagrams, where partonic 3-surfaces are glued together along their ends. If this gluing together occurs only in these vertices, the fermionic and bosonic space-time sheets are disjoint. For stringy diagrams this picture would fail.

To sum up, the resulting overall vision seems to be internally consistent, is consistent with generalized Feynman diagrams, predicts exactly the spectrum of α_K, allows one to identify the inverse of the p-adic temperature with k, allows one to understand the differences between fermionic and bosonic massivation, and reduces the Wick rotation to a number theoretic symmetry. One might hope that the additional objections (to be found sooner or later!) could allow a more detailed picture to be developed.

For more details see the chapter Is it Possible to Understand Coupling Constant Evolution at Space-Time Level? of "Towards S-matrix".
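As a back-of-the-envelope check of the numbers appearing in this posting (a sketch assuming only the values quoted above, with 1/α_U(1)(M_127) ≈ 104.1867 as the empirical input):

```python
# Proposed spectrum: 1/alpha_K = 4k, p-adic temperature T_p = 1/k.
inv_alpha_U1 = 104.1867        # empirical input: 1/alpha_U(1)(M_127)
k_fitted = inv_alpha_U1 / 4    # 26.0467, to be compared with the integer k = 26
k = 26
inv_alpha_K = 4 * k            # predicted 1/alpha_K = 104
deviation = abs(inv_alpha_U1 - inv_alpha_K) / inv_alpha_U1
print(f"fitted k = {k_fitted:.4f}, predicted 1/alpha_K = {inv_alpha_K}, "
      f"deviation = {100 * deviation:.2f} per cent")   # about 0.18 per cent
print(f"T_p(bosons) = 1/{k}, T_p(fermions) = 1, alpha_K(fermions) = 1/4")
```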
Saturday, June 23, 2007

Schwarzschild horizon for a rotating black hole like object as a 3-D lightlike surface defining a wormhole throat

The metric determinant at the Schwarzschild radius is non-vanishing. This does not quite conform with the interpretation as an analog of a light-like partonic 3-surface identifiable as a wormhole throat, for which the determinant of the induced 4-metric vanishes and at which the signature of the induced metric changes from Minkowskian to Euclidean. An interesting question is what happens if one makes the vacuum extremal representing the imbedding of the Schwarzschild metric a rotating solution by the very simple replacement Φ → Φ + nφ, where Φ is the angle coordinate of the homologically trivial geodesic sphere S^2 for the simplest vacuum extremals, and φ is the angle coordinate of the M^4 spherical coordinates. It turns out that the Schwarzschild horizon is transformed to a surface at which det(g_4) vanishes, so that the interpretation as a wormhole throat makes sense. If one assumes that the black hole horizon is analogous to a wormhole contact, only rotating black hole like structures with quantized angular momentum are possible in the TGD Universe.

Friday, June 22, 2007

Strings 2007 is not only about strings

Peter Woit and Kea commented on Witten's talk at Strings 2007 about his new ideas related to a 3-D quantum theory of gravity having very little to do with strings.

• Light-like 3-surfaces can be regarded as solutions of vacuum Einstein equations with vanishing cosmological constant (Witten considers solutions with non-vanishing cosmological constant). The effective 2-D character of the induced metric is what makes this possible.

• Zero energy ontology is also an essential element: the quantum states of the 3-D theory in zero energy ontology correspond to generalized S-matrices: Matrix or M-matrix might be a proper term. The Matrix is a "complex square root" of a density matrix, a matrix valued generalization of the Schrödinger amplitude, defining time-like entanglement coefficients. Its "phase" is a unitary matrix and might be rather universal. The Matrix is a functor from the category of Feynman cobordisms, and Matrices have a groupoid like structure.

• The theory becomes genuinely 4-D because the S-matrix is not universal anymore but characterizes the zero energy states.

• 4-D holography is obtained via the Kähler metric of the world of classical worlds assigning to a light-like 3-surface a preferred extremal of the Kähler action as the analog of a Bohr orbit containing 3-D lightlike surfaces as submanifolds (analogs of black hole horizons and lightlike boundaries). The interiors of the 4-D space-time sheets correspond to zero modes of the metric and to the classical variables of quantum measurement theory (quantum classical correspondence).

The conjecture is that the Dirac determinant for the modified Dirac action associated with partonic 3-surfaces defines the vacuum functional as the exponent of the Kähler function, with the Kähler coupling strength fixed completely as the analog of a critical temperature, so that everything reduces to an almost topological QFT.

Thursday, June 21, 2007

New Developments in TGD and Their Implications for TGD Inspired Theory of Consciousness

There has been quite impressive development in the understanding of quantum TGD at the basic level, and the interaction of the new ideas with the TGD inspired theory of consciousness and the model of quantum biology will be a fascinating adventure.
The first task was to write a chapter summarizing the updates and to develop a systematic overall view about the consequences for the TGD inspired theory of consciousness. I paste the abstract of the chapter here.

The conflict between the non-determinism of state function reduction and the determinism of the time evolution of the Schrödinger equation is a serious enough problem to motivate the attempt to extend physics to a theory of consciousness by raising the observer from an outsider to a key notion also at the level of physical theory. Further motivations come from the failure of the materialistic and reductionistic dogmas in attempts to understand consciousness in the neuroscience context. There are reasons to doubt that standard quantum physics could be enough to achieve this goal, and the new physics predicted by TGD is indeed central in the proposed theory.

The developments in quantum TGD during the last years have led to a fusion of real and p-adic physics by using a generalization of the number concept, to the realization of the crucial role of hyper-finite factors of type II_1 for quantum TGD, to the generalization of the imbedding space implying a hierarchy of quantized values of Planck constant, to so-called zero energy ontology, to the reduction of quantum TGD to the parton level with the parton understood as a 2-D surface whose orbit is a light-like 3-surface, and to the realization that quantum TGD can be formulated as an almost topological quantum field theory using a category theoretical framework. These developments have considerably simplified the conceptual framework behind both TGD and the TGD inspired theory of consciousness and provided justification for various concepts of the consciousness theory deduced earlier from quantum classical correspondence and the properties of many-sheeted space-time.

The notions of quantum jump and self can be unified in the recent formulation of TGD relying on the dark matter hierarchy characterized by increasing values of Planck constant. Negentropy Maximization Principle serves as a basic variational principle for the dynamics of the quantum jump and must be modified to the case of hyper-finite factors of type II_1. The new view about the relation of geometric and subjective time together with zero energy ontology leads to a new view about memory and intentional action. The quantum measurement theory based on finite measurement resolution and realized in terms of hyper-finite factors of type II_1 justifies the notions of sharing of mental images and stereo-consciousness deduced earlier on the basis of quantum classical correspondence. A new element is the finite resolution of quantum measurement and of cognitive and sensory experience. Qualia reduce to quantum number increments associated with the quantum jump. The self-referentiality of consciousness can be understood from quantum classical correspondence implying a symbolic representation of the contents of consciousness at the space-time level, updated in each quantum jump. p-Adic physics provides space-time correlates for cognition and intentionality.

For details see the new chapter New Developments in TGD and Their Implications for TGD Inspired Theory of Consciousness.

Declaration of Academic Freedom

The recent crackpot hunting activities have their comic aspects, but what looks like comedy from a safe distance is a tragedy when seen from nearby. A public defamation, literally on a global scale, can produce a lot of suffering for its victims and their friends and relatives. The young crackpot hunters seem to be quite blind to this human aspect.
Many western intellectuals accept Physical Integrity as a basic value. For some reason these people however see nothing bad in the violation of what might be called Intellectual Integrity or Emotional Integrity. The events in the comment sections of some blogs indeed bring to mind a story about how primitive tribes treated the individuals who broke a taboo: the taboo breaker was literally torn to pieces in a bloody orgy.

The crackpot hunting is of course only the tip of the iceberg. The censorship applied by so-called respected journals and by electronic archives, plus academic discrimination, very effectively prevents the communication of new ideas. There is an organization known as Archive Freedom, founded a few years ago by the victims of these activities. It also has an electronic archive to which people censored out of the archives and unable to publish in so-called respected journals can post their papers.

Kea had added a piece of the Declaration of Academic Freedom to her blog. I think that this piece of text deserves to be added also here.

Article 2: Who is a scientist

A scientist is any person who does science. Any person who collaborates with a scientist in developing and propounding ideas and data in research or application is also a scientist. The holding of a formal qualification is not a prerequisite for a person to be a scientist.

Article 4: Freedom of choice of research theme

Many scientists working for higher research degrees or in other research programmes at academic institutions such as universities and colleges of advanced study, are prevented from working upon a research theme of their own choice by senior academic and/or administrative officials, not for lack of support facilities but instead because the academic hierarchy and/or other officials simply do not approve of the line of inquiry owing to its potential to upset mainstream dogma, favoured theories, or the funding of other projects that might be discredited by the proposed research. The authority of the orthodox majority is quite often evoked to scuttle a research project so that authority and budgets are not upset. This commonplace practice is a deliberate obstruction to free scientific thought, is unscientific in the extreme, and is criminal. It cannot be tolerated.

A scientist working for any academic institution, authority or agency, is to be completely free as to choice of a research theme, limited only by the material support and intellectual skills able to be offered by the educational institution, agency or authority. If a scientist carries out research as a member of a collaborative group, the research directors and team leaders shall be limited to advisory and consulting roles in relation to choice of a relevant research theme by a scientist in the group.

Article 8: Freedom to publish scientific results:

P.S. In n-Category Cafe there is a nice posting by David Corfield expressing what science is at its best: a spiritual endeavour rather than a fight for academic positions.

Wednesday, June 20, 2007

In zero energy ontology the S-matrix can be seen as a functor from the category of Feynman cobordisms to the category of operators. The S-matrix can be identified as a "complex square root" of the positive energy density matrix, S = ρ_+^{1/2} S_0, where S_0 is a unitary matrix and ρ_+ is the density matrix for the positive energy part of the zero energy state. Obviously one has SS* = ρ_+. S*S = ρ_- gives the density matrix for the negative energy part of the zero energy state.
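A minimal finite-dimensional sketch of this structure in numpy (my own toy illustration: the dimension and the random ρ_+ and S_0 are hypothetical, and for a genuine hyper-finite factor of type II_1 matrix indices do not strictly speaking exist, as noted below):

```python
import numpy as np

def random_density(n, rng):
    # Random positive definite matrix with unit trace.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def random_unitary(n, rng):
    # QR decomposition of a random complex matrix, phase-fixed to be unitary.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(r)
    return q * (d / np.abs(d))

def psd_sqrt(rho):
    # Positive square root via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

rng = np.random.default_rng(0)
n = 4
rho_plus = random_density(n, rng)
S0 = random_unitary(n, rng)
S = psd_sqrt(rho_plus) @ S0              # S = rho_+^(1/2) S_0

rho_minus = S.conj().T @ S               # density matrix of the negative energy part
assert np.allclose(S @ S.conj().T, rho_plus)        # SS* = rho_+
assert np.allclose(np.trace(rho_minus).real, 1.0)

# Left and right inverses in the groupoid-like sense discussed below.
f_inv_L = S.conj().T @ np.linalg.inv(rho_plus)      # f* rho_{f,+}^(-1)
f_inv_R = np.linalg.inv(rho_minus) @ S.conj().T     # rho_{f,-}^(-1) f*
assert np.allclose(S @ f_inv_L, np.eye(n))
assert np.allclose(f_inv_R @ S, np.eye(n))
```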
Clearly, the S-matrix can be seen as a matrix valued generalization of the Schrödinger amplitude. Note that the "indices" of the S-matrices correspond to configuration space spinors (fermions and their bound states giving rise to gauge bosons and gravitons) and to configuration space degrees of freedom (the world of classical worlds). For a hyper-finite factor of type II_1 it is not, strictly speaking, possible to speak about indices, since the matrix elements are traces of the S-matrix multiplied by projection operators to infinite-dimensional subspaces from the right and left.

The functor property of the S-matrices implies that they form a multiplicative structure analogous but not identical to a groupoid. A groupoid has an associative product, and there always exist right and left inverses and identities in the sense that ff^{-1} and f^{-1}f are defined but not identical in general, and one has fgg^{-1} = f and f^{-1}fg = g. The reason for the groupoid like property is that the S-matrix is a map between the state spaces associated with the initial and final sets of partonic surfaces, and these state spaces are different, so that the inverse must be replaced with right and left inverses.

The defining conditions for the groupoid are however replaced with more general ones. Associativity holds also now, but the role of the inverse is taken by the hermitian conjugate. Thus one has the conditions fgg* = fρ_{g,+} and f*fg = ρ_{f,-}g, and the conditions ff* = ρ_+ and f*f = ρ_- are satisfied. Here ρ_{f,±} is the density matrix associated with the positive/negative energy parts of the zero energy state. If the inverses of the density matrices exist, the groupoid axioms hold true, since f^{-1}_L = f*ρ_{f,+}^{-1} satisfies ff^{-1}_L = Id_+ and f^{-1}_R = ρ_{f,-}^{-1}f* satisfies f^{-1}_R f = Id_-.

There are good reasons to believe that also the tensor product or its appropriate generalization to the analog of a co-product makes sense, with the non-triviality characterizing the interaction between the systems of the tensor product. If so, the S-matrices would form a very beautiful mathematical structure bringing to mind the corresponding structures for 2-tangles and N-tangles. Knowing how incredibly powerful the group like structures have been in physics, one has good reasons to hope that the groupoid like structure might help to deduce a lot of information about the quantum dynamics of TGD.

A word about nomenclature is in order. S has strong associations with unitarity, and it might be appropriate to replace S with some other letter. The interpretation of the S-matrix as a generalized Schrödinger amplitude would suggest Ψ-matrix. Since the interaction with Kea's M-theory blog (with M denoting Monad or Motif in this context) helped to realize the connection with the density matrix, also M-matrix might work. The S-matrix as a functor from the category of Feynman cobordisms in turn suggests C or F. Or could just Matrix, denoted by M in formulas, be enough? Certainly it would inspire a feeling of awe but would create associations with M-theory in the stringy sense of the word. But wouldn't it be fair if stringy M-theory could leave at least some trace on physics;-)!

Tuesday, June 19, 2007

It is time for crackpot hunting

Summer time seems to be an especially active period for young crackpot hunters, and the best season has begun. Only a few days ago Lubos revealed that Tommaso Dorigo is a "small crackpot". The scientific justification was that Tommaso had visited Peter Woit's Not-Even-Wrong several times (Peter Woit is the "black crackpot" to be distinguished from Lee Smolin, the "blue crackpot").
As Tommaso noticed, the fact that Lubos knew about these visits revealed that also Lubos had visited the same place, from which Lubos himself can draw the obvious conclusion and come out of the closet. Note that Lubos also has a special message for anyone trying to get to his blog from Not-Even-Wrong.

Some time ago I commented on Sean Carroll's idea that cosmology explains the second law. Lubos Motl had already commented on the idea and quite correctly pointed out the most obvious flaw in Sean's thinking, namely the failure to realize that the cosmological time scale is totally different from the time scale of the second law in a laboratory environment. Lubos also concluded that Sean Carroll is a crackpot. What I did in the posting Second law and cosmology was some analysis of the basic flaws in the thinking of not only Carroll and Lubos but the entire community, basically due to the refusal to return to the roots and concentrate seriously on the fundamental conceptual problems instead of crackpot hunting. I also commented on the sad situation from the point of view of particle physics in the posting About landscape: what's the real problem?. I did not call anyone a crackpot, since I would regard scientific argumentation based on ad hominem attacks as criterion number one for crackpotness if crackpot hunting were my hobby. Only the contents matter, and if someone lacks the substance needed for a serious discussion, he (as a rule he, and almost as a rule from the US) should listen and learn rather than argue.

Both these young fellows seem to lack the courage and the arguments to attack me directly: this would also be against the politics of total silence about serious competitors of the superstring hegemony. However, a comment from someone outside Harvard, the US, and even the academy, showing a lack of deep unjustified respect towards anything declared from the academic heights by a Sean Carroll or a Lubos Motl, was too much for the vanity of these young bloggers, and the empire struck back indirectly this morning. Sean Carroll devoted his posting to the identification criteria of alternative scientists, and also Lubos used the opportunity to emphasize that "alternative scientist", the characterization that he uses about me in his article in the Physicists category of Wikipedia, is his polite expression for a crackpot. Lubos forgot that he himself is a very alternative climate scientist. Amusingly, someone in the comment section of Lubos noticed that Lubos had also deemed Carroll a crackpot in his comments about Carroll's cosmic explanation of the second law. Lubos indeed admitted that poor Sean, the discoverer of the new brilliant crackpot identification criteria, becomes the first victim of his own method. If my proposal that ad hominem attacks as a substitute for scientific argumentation are also a basic signature of crackpotness is accepted, then both these crackpot hunters suffer the same fate. Perhaps scientists using so much valuable time for the hunting of crackpots deserve this.

P.S. Could one consider some kind of mind police of theoretical physics, analogous to the KGB in the Soviet Union, where professionals would take care of crackpot hunting so that superstring theorists could concentrate on deducing new predictions from M-theory;-)?

Saturday, June 16, 2007

Empirical support for TGD based model of long term memories

Quite recently I learned about empirical support for the TGD inspired model of long term memories.
Experiments with mice have shown that loss of long-term memory can be reversed with drugs that seem to trigger the rewiring of brain cells. The findings suggest ways of treating dementia and other neuro-degenerative diseases that are associated with impaired learning and memory loss, says Li-Huei Tsai of the Massachusetts Institute of Technology.

Tsai and her team studied mice genetically engineered to express a protein called p25 when their diet contained an antibiotic. The protein has been implicated in some neuro-degenerative diseases. The mice were first placed in a tank of water and trained to find their way to a platform just below the surface. Next, the team ensured that the task was stored in the mice's long-term memory by waiting for a few weeks. Then they induced the mice to produce p25, leading to a loss of neurons, learning ability and memory. When the mice were then placed in an environment rich in various stimuli, the memories were restored.

The first interpretation is that the memories are stored in the ordinary sense, but not in synaptic contacts: rather somewhere else, say in the RNA inside cell nuclei, now known to be coded in large quantities by the intronic portion of DNA (see the recent article in New Scientist). Stable storage of memories in the conventional sense of the word seems however to require that these RNA molecules remain in the nucleus, which need not make sense.

The TGD based model of long term memories, for which zero energy ontology provides a justification, offers an alternative explanation. The basic ideas are the following.

• Long term sensory or episodal memories making possible memory feats correspond to the sharing of mental images of the geometric past by time-like entanglement.

• This storage mechanism is not very efficient, and a more efficient mechanism would be based on communication with the geometric past. Memory recall would be represented as a signal sent from the magnetic body at the appropriate level of the onionlike hierarchy of magnetic bodies to the brain of the geometric past and realized as phase conjugate negative energy dark photons. The memory would be communicated to the geometric future by using an analogous positive energy signal. This time mirror mechanism is also the key mechanism of remote metabolism and the mechanism of intentional action, and it explains the findings of Libet about strange time delays of consciousness.

• The problems due to the extremely low photon energies are circumvented if the photons are dark and thus correspond to so large a value of Planck constant that their energies are above the thermal energy at room temperature. EEG represents only a small portion of the frequencies involved and corresponds to a time scale which is a fraction of a second. Much longer time scales are involved in what we are used to calling long term memories.

The model makes explicit storage mechanisms unnecessary: memories as such are intact in the geometric past, apart from the possible changes induced by quantum jumps, and only the ability to recall them is lost as a consequence of the treatment; it would be this ability which is restored in the stimulus rich environment. This option allows the representation of memories in terms of RNA without requiring the stability of the RNA molecules. Also the long microtubules associated with axons are excellent candidates for providing representations of memories. In fact, almost any quantum dynamical event of the geometric past in the living body could serve as a memory storage.
Nerve pulse patterns would serve only as symbolic "digital" representations of memories. This representation would be much more economical than the representation as sensory memories localizable at the level of the primary sensory organs.

Friday, June 15, 2007

Introns transcribed to RNA inside cell nuclei

The last issue of New Scientist contains an article about the discovery that only roughly one half of DNA expresses itself as aminoacid sequences. The article is published in Nature (thanks to Doug for the link). The Encyclopedia of DNA Elements (ENCODE) project has quantified RNA transcription patterns and found that while the "standard" RNA copy of a gene gets translated into a protein as expected, for each copy of a gene cells also make RNA copies of many other sections of DNA. In particular, the intron portions ("junk DNA", the portion of which increases as one climbs up the evolutionary hierarchy) are transcribed to RNA in large amounts. What is also interesting is that the RNA fragments correspond to pieces from several genes, which raises the question whether there is some fundamental unit smaller than the gene.

None of the extra RNA fragments gets translated into proteins, so the race is on to discover just what their function is. The TGD proposal is that this RNA gets braided and performs a lot of topological quantum computation (see this). Topologically quantum computing RNA fits nicely with the replicating number theoretic braids associated with the light-like orbits of partonic 2-surfaces and with their spatial "printed text" representations as linked and knotted partonic 2-surfaces giving braids as a special case (see this). An interesting question is how the printing and reading could take place. Is it something comparable to what occurs when we read consciously? Is the biological portion of our conscious life identifiable with this reading process accompanied by copying by cell replication and by secondary printing using aminoacid sequences?

This picture conforms with the TGD view about pre-biotic evolution. Plasmoids [1], which are known to share many basic characteristics assigned with life, came first: high temperatures are not a problem in the TGD Universe, since a given frequency corresponds to an energy above the thermal energy for a large enough value of hbar. Plasmoids were followed by RNA, and DNA and aminoacid sequences emerged only after the fusion of the 1- and 2-letter codes into the recent 3-letter code. The cross like structure of tRNA molecules carries clear signatures supporting this vision. RNA would still be responsible for roughly half of intracellular life and perhaps for the core of "intelligent life".

I have also proposed that this expression uses the memetic code, which would correspond to the Mersenne prime M_127 = 2^127 - 1 with 2^126 codons, whereas the ordinary genetic code would correspond to M_7 = 2^7 - 1 with 2^6 codons. Memetic codons in DNA representations would consist of sequences of 21 ordinary codons. Also representations in terms of field patterns with a duration of 0.1 seconds (the secondary p-adic time scale associated with M_127, defining a fundamental biorhythm) can be considered.

A hypothesis worth killing would be that the DNA coding for RNA has memetic codons scattered around the genome as basic units. It is interesting to see whether the structure of DNA could give any hints that the memetic codon appears as a basic unit.

1. In a "relaxed" double-helical segment of DNA, the two strands twist around the helical axis once every 10.4 base pairs of sequence.
21 genetic codons correspond to 63 base pairs, whereas 6 full twists would correspond to 62.4 base pairs.

2. Nucleosomes are fundamental repeating units in eukaryotic chromatin possessing what is known as the 10 nm beads-on-string structure. They repeat roughly every 200 base pairs: an integer number of genetic codons would suggest 201 base pairs. 3 memetic codons make 189 base pairs. Could this mean that only a fraction p ≈ 12/201, which happens to be of the same order of magnitude as the portion of introns in the human genome, consists of ordinary codons?

Inside nucleosomes the distance between neighboring contacts between histone and DNA is about 10 nm, the p-adic length scale L(151) associated with the Gaussian Mersenne (1+i)^151 - 1 characterizing also the cell membrane thickness and the size of nucleosomes. This length corresponds to 10 codons, so that there would be two contacts per single memetic codon in a reasonable approximation. In the example of Wikipedia the nucleosome corresponds to about 146 = 126 + 20 base pairs: 147 base pairs would make 2 memetic codons and 7 genetic codons. The remaining 54 base pairs between histone units plus 3 ordinary codons from the histone unit would make a single memetic codon. That only a single memetic codon lies between histone units and that part of the memetic codon overlaps with the histone containing unit conforms with the finding that chromatin accessibility and histone modification patterns are highly predictive of both the presence and activity of transcription start sites. This would leave 4 genetic codons, and the 201 base pairs could decompose as memetic codon + 2 genetic codons + memetic codon + 2 genetic codons + memetic codon. The simplest possibility is however that the memetic codons are between histone units and the histone units consist of genetic codons. Note that memetic codons could be transcribed without the straightening of the histone unit occurring during the transcription leading to protein coding. Note also that the prokaryote genome lacks the histone units, so that the transition from prokaryotes to eukaryotes would mean the emergence of the memetic code.

For background see the chapter Topological Quantum Computation in TGD Universe of "TGD as a Generalized Number Theory" and the chapter Pre-biotic Evolution in Many-Sheeted Space-Time of "Genes and Memes".

Wednesday, June 13, 2007

In what sense tangles are realized in TGD Universe?

Kea gave a link to a highly interesting article by Kauffman and Lambropoulou about rational 2-tangles having a commutative sum and product allowing one to map them to rationals. The illustrations of the article are beautiful and make it easy to get the gist of the various ideas. The theorem of the article states that equivalent rational tangles giving the trivial tangle in the product correspond to subsequent Farey numbers a/b and c/d satisfying ad - bc = ±1, so that the pair defines an element of the modular group SL(2,Z).

1. The basic observation is that 2-tangles are 2-tangles in both "s- and t-channels". Product and sum can be defined for all tangles, but only in the case of 2-tangles does the sum, which in this case reduces to the product in the t-channel obtained by putting the tangles in series, give a 2-tangle. The sum of M- and N-tangles is an (M+N-2)-tangle and combines the various N-tangles into a monoidal structure. The tensor product like operation giving an (M+N)-tangle looks to me physically more natural than the sum.
2. The reason why general 2-tangles are non-commutative, although 2-braids obviously commute, is that 2-tangles can be regarded as sequences of N-tangles with 2-tangles appearing only as the initial and final states: N is actually even for the intermediate states. Since N > 2 braid groups are non-commutative, non-commutativity results. It would be interesting to know whether braid group representations have been used to construct representations of N-tangles.

The article stimulated the question in what senses N-tangles could be obtained in the TGD framework.

1. Tangles as number theoretic braids?

The strands of number theoretical N-braids correspond to the roots of an N:th order polynomial, and if one allows time evolutions of the partonic 2-surface leading to the disappearance or appearance of real roots, N-tangles become possible. This however means a continuous evolution of the roots, so that the coefficients of the polynomials defining the partonic 2-surface can be rational only in the initial and final states but not in all intermediate "virtual" states.

2. Tangles as tangled partonic 2-surfaces?

Tangles could appear in TGD also in a second manner.

• Partonic 2-surfaces are sub-manifolds of a 3-D section of the space-time surface. If the partonic 2-surfaces have genus g > 0, the handles can become knotted and linked, and one obtains besides ordinary knots and links more general knots and links in which the circle is replaced by a figure eight and its generalizations obtained by adding more circles (eyeglasses for N-eyed creatures;-)).

• Since these 2-surfaces are space-like, the resulting structures are indeed tangles rather than only braids. Tangles made of strands with fixed ends would result by allowing spherical partons to elongate into long strands with fixed ends. DNA tangles would be the basic example, and they are discussed also in the article. DNA sequences, to which I have speculatively assigned invisible (dark) braid structures, might be seen in this context as space-like "written language representations" of genetic programs represented as number theoretic braids.

For details see the chapter Hyper-Finite Factors and Construction of S-Matrix of "Towards S-Matrix".

Farey sequences, Riemann Hypothesis, and Platonia as the best possible world

Kea has mentioned Farey sequences in her blog a couple of times (see this and this). Some basic facts about Farey sequences demonstrate that they are very interesting also from the TGD point of view.

1. The Farey sequence F_N is defined as the set of rationals 0 < q = m/n ≤ 1 satisfying the condition n ≤ N, ordered into an increasing sequence.

2. Two subsequent terms a/b and c/d in F_N satisfy the condition |ad - bc| = 1 and thus define an element of the modular group SL(2,Z).

3. The number of terms in the Farey sequence satisfies |F_N| = |F_{N-1}| + φ(N). Here φ(n) is Euler's totient function giving the number of integers 0 < m ≤ n coprime to n. For a prime p one has φ(p) = p-1, so that in the transition from N = p-1 to N = p the length of the Farey sequence increases by p-1 units through the addition of the fractions m/p.

The members of the Farey sequence F_N are in one-one correspondence with the set of quantum phases q_n = exp(i2π/n), 0 < n ≤ N. This suggests a close connection with the hierarchy of Jones inclusions, quantum groups, and, in the TGD context, with quantum measurement theory with finite measurement resolution and the hierarchy of Planck constants involving the generalization of the imbedding space.
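A quick numerical illustration of these basic facts (my own sketch; it also includes the Franel-Landau discrepancy sum that appears in the RH formulation of the next section):

```python
from fractions import Fraction
from math import gcd

def farey(N):
    """Farey sequence F_N: rationals 0 < m/n <= 1 with n <= N, increasing."""
    return sorted({Fraction(m, n) for n in range(1, N + 1)
                   for m in range(1, n + 1)})

def phi(n):
    """Euler's totient function: the number of 0 < m <= n coprime to n."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

F5 = farey(5)
print([str(q) for q in F5])
# ['1/5', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '4/5', '1']

# Subsequent terms a/b < c/d satisfy bc - ad = 1: an element of SL(2,Z).
for x, y in zip(F5, F5[1:]):
    assert x.denominator * y.numerator - x.numerator * y.denominator == 1

# The length grows by phi(N) at each step: |F_N| = |F_{N-1}| + phi(N).
for N in range(2, 12):
    assert len(farey(N)) == len(farey(N - 1)) + phi(N)

def franel_landau_sum(N):
    """Sum of |d_{n,N}| with d_{n,N} = a_{n,N} - n/m_N (see the next section)."""
    terms = farey(N)
    m = len(terms)
    return float(sum(abs(a - Fraction(n, m)) for n, a in enumerate(terms, 1)))

print([round(franel_landau_sum(N), 3) for N in (10, 50, 100)])
```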
Also the recent TGD inspired ideas about the hierarchy of subgroups of the rational modular group, with subgroups labelled by N and in direct correspondence with the hierarchy of quantum critical phases, would naturally relate to the Farey sequences.

1. Riemann Hypothesis and Farey sequences

Farey sequences are used in two equivalent formulations of the Riemann hypothesis. Suppose the terms of F_N are a_{n,N}, 0 < n ≤ m_N. Define d_{n,N} = a_{n,N} - n/m_N: in other words, d_{n,N} is the difference between the n:th term of the N:th Farey sequence and the n:th member of a set of the same number of points distributed evenly on the unit interval. Franel and Landau proved that both of the two statements

• Σ_{n=1,...,m_N} |d_{n,N}| = O(N^r) for any r > 1/2,

• Σ_{n=1,...,m_N} d_{n,N}^2 = O(N^r) for any r > -1,

are equivalent with the Riemann hypothesis. One can say that RH would guarantee that the members of the Farey sequence provide the best possible approximate representation for the evenly distributed rational numbers n/m_N.

2. Farey sequences and TGD

Farey sequences seem to relate very closely to TGD.

1. The rationals in the Farey sequence can be mapped to roots of unity by the map q → exp(i2πq). The numbers n/m_N are in turn mapped to the numbers exp(i2πn/m_N), which are also roots of unity. The statement would be that the algebraic phases defined by the Farey sequence give the best possible approximate representation for the phases exp(i2πn/m_N) with evenly distributed phase angles.

2. In the TGD framework the phase factors defined by F_N correspond to the set of quantum phases corresponding to Jones inclusions labelled by q = exp(i2π/n), n ≤ N, and thus to the N lowest levels of the dark matter hierarchy. There are actually two hierarchies corresponding to M^4 and CP_2 degrees of freedom, and the Planck constant appearing in the Schrödinger equation corresponds to the ratio n_a/n_b defining the quantum phases in these degrees of freedom. Z_{n_a} × Z_{n_b} appears as a conformal symmetry of "dark" partonic 2-surfaces, and with very general assumptions this implies that there are only three fermion families in the TGD Universe.

3. The fusion of the physics associated with various number fields into a single coherent whole requires algebraic universality. In particular, the roots of unity, which are complex algebraic numbers, should define approximations to the continuum of phase factors. At least the S-matrix associated with p-adic-to-real transitions, and more generally with p_1 → p_2 transitions between states for which the partonic space-time sheets are p_1- resp. p_2-adic, can involve only this kind of algebraic phases. One can also say that cognitive representations can involve only algebraic phases and algebraic numbers in general. For real-to-real and real-to-p-adic transitions the U-matrix might be non-algebraic or obtained by analytic continuation of an algebraic U-matrix. The S-matrix is by definition diagonal with respect to the number field, and a similar continuation principle might apply also in this case.

4. The subgroups of the hierarchy of subgroups of the modular group with rational matrix elements are labelled by an integer N and relate naturally to the hierarchy of Farey sequences. The hierarchy of quantum critical phases is labelled by integers N, with quantum phase transitions occurring only between phases for which the smaller integer divides the larger one.

5. The 2-tangles known as rational tangles are characterized by a rational number a/b (for detailed definitions see the article of Kauffman and Lambropoulou).
According to the result of the same article, two rational tangles labelled by a/b and c/d and possessing commutative sum and product combine to form an unknot if and only if a/b and c/d are two subsequent Farey numbers and therefore satisfy ad - bc = ±1. An interesting question is whether the result somehow generalizes to the case of N-tangles and whether this generalization relates to the hierarchy of subgroups of the rational modular group obtained by replacing the generator τ → τ+1 with τ → τ + 1/N.

3. Interpretation of RH in TGD framework

Number theoretic universality of physics suggests an interpretation for the Riemann hypothesis in the TGD framework. RH would be equivalent to the statement that the Farey numbers provide the best possible approximation to the set of rationals k/m_N, or to the statement that the roots of unity contained in F_N define the best possible approximation for the roots of unity exp(i2πk/m_N) with evenly spaced phase angles. The roots of unity allowed by the lowest N levels of the hierarchy of Jones inclusions would allow the best possible approximate representation for the algebraic phases represented exactly at the m_N:th level of the hierarchy.

A stronger statement would be that the Platonia where RH holds true would be the best possible world in the sense that the algebraic physics behind the cognitive representations would allow the best possible approximation hierarchy for the continuum physics (both for the numbers in the unit interval and for the phases on the unit circle). Platonia with RH would be a cognitive paradise;-).

One could see this also from a different viewpoint. "Platonia as the cognitively best possible world" could be taken as the "axiom of all axioms": a kind of fundamental variational principle of mathematics. Among other things it would allow one to conclude that RH is true: RH must hold true either as a theorem following from some axiomatics or as an axiom in itself.

Tuesday, June 12, 2007

Second law and cosmology

In his blog Lubos criticizes Sean Carroll's recent idea that cosmology explains the second law.

Carroll is not completely wrong

Basically I agree with the criticism of Lubos. Carroll fails to realize that the second law applies in all length scales whereas cosmology is only about the largest scales.
These representations make possible a continual evolution of new reflective levels of consciousness, via a feedback analogous to the formulas written by a mathematician inducing further ideas.

The basic fallacies

It is useful to list the fallacies in the thinking of Lubos and Carroll, since these fallacies help one to understand the recent dead-end situation in theoretical physics.

• Both Lubos and Carroll and the entire string community stubbornly continue to miss the fractality of the Universe, reflected as hierarchies of p-adic length scales and Planck constants in the TGD framework. The p-adic hierarchy makes itself especially visible in the widely different mass scales of elementary particles. Recall that the ratio of the mass scales of the top quark and the neutrino is around 10^11: despite this, the proponents of the strings → GUT-at-low-energies vision try to force these particles inside the same multiplet of the unifying gauge group!

• A lot of muddy thinking results from the refusal to reconsider quantum measurement theory. Just the introduction of a finite quantum measurement resolution into the theory would help considerably and would give a connection with quantum groups and hyperfinite factors.

• I mentioned in the beginning that quantum-classical correspondence provides space-time correlates for the sequences of quantum jumps in the TGD Universe. One could also ask what the classical space-time correlates for the choice of quantization axes are, and end up with the hierarchy of Planck constants. M-theorists are ready to consider incredibly weird brane-world scenarios but refuse to consider this kind of idea, inspired by the need to go forward from the position where the fathers of quantum theory left us.

• Lubos, with his ultraconservative hbar=1 vision of the Universe, refuses to seriously consider the connection between consciousness and quantum theory. I believe that the quantum is absolutely crucial for consciousness, as the understanding of consciousness also is for quantum physics. Of course, I do not believe that simple wave mechanics could resolve the riddle of consciousness: something much more general is needed.

• A vision about the origin of the second law (most naturally quantum jumps) is missing. The erroneous identification of geometric time with experienced time, irrespective of the fact that these times are quite different (reversibility/irreversibility, etc.), relates closely to this. Some amount of quantum consciousness theorizing accepting Boolean logic as a starting point would help to get rid of obvious logical paradoxes, but what can one do if someone has decided that the idea of a connection between quantum and consciousness is dull - as Lubos states it.

• Penrose's conjecture about the special role of gravitation for consciousness corresponds in the TGD Universe to immense values of the gravitational Planck constant, implying macroscopic quantum coherence in astrophysical length scales. The gravitational interaction becomes fundamental for consciousness and life in the TGD Universe.

From the above it should be clear that the basic problems are conceptual, and that developing mathematical methods instead of humbly returning to the roots is not the way to make progress.

Second law in standard Big Bang cosmology and TGD inspired cosmology

The considerations of Carroll have their roots in the problem of understanding the second law in Big Bang cosmology. The necessity of low entropy during primordial times does not quite fit with the standard big bang picture.
In TGD, long string-like objects represent the primordial state, and this phase has low entropy. Entropy is produced when these objects decay to elementary particles. The dominance of string-like objects also means that the gravitational mass per comoving volume vanishes linearly as a function of cosmic time (call it t) rather than diverging as 1/t as in radiation-dominated cosmology. The Big Bang is transformed into a silent whisper gradually amplified to a big roar.

Thursday, June 07, 2007

The recent situation in Stanford

Wonderful weather, and we are still able to enjoy it! It is a real experience to sit outside with eyes closed and mind wandering freely. Add to this a garden of apple trees in full blossom, and flowers. In the middle of all this one should not go to the web, but it is difficult to avoid the usual blog rounds.

The recent situation in Stanford had inspired Peter Woit to develop an interesting indirect argument against string models. A homeless woman had found a roof over her head at Stanford University, in the same building as Leonard Susskind and the Stanford string theory group. This had lasted for as long as four years. As a non-professional I failed to understand the details, or even the gist, of the argument leading to the conclusion that this poor woman was actually a string theorist and that something very strange was also going on in the Stanford theory group, which might relate to the recent situation in string theory. What would have come into my uneducated mind first is that it is nice that even string theorists can feel compassion towards suffering humankind. Lubos also commented but did not point out any correlations with the recent situation in M-theory.

Scott Aaronson also commented on the situation in Stanford, but from a pragmatic viewpoint, and crystallized his view as: "When we discover a stowaway on the great Ship of Science, why throw her overboard when we could make her swab the decks?"!

The crisis in Stanford inspired amazingly lively discussions about the groupies of science, as Scott Aaronson called them. The discussion left the overall impression that the ability to feel compassion towards those who suffer is not something I would use to recognize a physicist in a big crowd of people. The topic also seemed to create group feeling: "we scientists in our departments, in contrast to those non-scientists outside the academy". This group feeling materialized in several serious questions. Should we tolerate among us also individuals who cannot do calculus? Could it be possible to teach the most intelligent individuals of the groupie species to perform simple but useful activities, such as writing popular scientific articles (rather than only swabbing the decks)?

Wednesday, June 06, 2007

What it is to be a theoretical physicist in Finland

I thought I would tell something about my life as a theoretical physicist in Finland. Today I visited the state employment agency, which formally tries to get me a job. Actually, getting a job outside the academic world would be a catastrophe, but the criteria for what jobs I can be forced to accept have allowed me to continue serious work as a physicist. Not because I would not like to have a job and enjoy a salary and all that it brings with it. The problem is that I simply have so much to give to this stupid humankind that I would regard myself as a criminal if I started to do something just for money and left my mission.
Since the academic powerholders of Finland long ago made the firm decision that they will not under any imaginable circumstances provide the ridiculously small funding that I would need to continue my work, I am in a difficult situation. Admittedly there are also some comic aspects involved, since there are two Finnish physicists in the category "Physicists" with Einstein's picture as the basic icon: one of them is Nordström - a friend and competitor of Einstein - and the other one is me. Finnish academic decision makers do not let this disturb them and do their best to look determined and intelligent.

In the employment agency there has however been continually increasing pressure from the top to improve the unemployment statistics by kicking persons like me out, and although the individuals I have met understand the silliness of the situation perfectly, they cannot continue this formal employment procedure endlessly. This statistical tidying-up procedure would mean getting money for food and rent from the social office and living on the lowest step of the social ladder. The right wing won the parliamentary election at the beginning of the year, and I guessed correctly that the situation would worsen as a consequence. The unofficial term for the politics now applied is, in Finnish, "panna köyhät kyykkyyn": the free translation is "to put poor people on their knees". This policy means that I am forced to choose between the following options.

• I will receive my living expenses from the social office, just like people having very bad personal problems. This might also mean a kind of Star of David type humiliation: I must show a card telling that the social office pays for my living when I go to buy my daily food.

• I can also try to get some 1/2 period of unemployment work funded by the government and hope that I could continue my own work at least part of the time. I should find an intelligent employer who perhaps realizes that he has the opportunity to get into the history of science in the country which can be proud of having the silliest scientific decision makers in the known Universe.

• Then there is the possibility that I get a position as a trainee for some employer. These positions are usually meant for 18-year-olds having problems with addictions, suffering from ADHD, or something like that, but since I am a so-called hard-to-employ kind of person I might get one although I am 56 years old. I would not receive any salary except for the minimum unemployment money, as hitherto. The purpose of this job would be to teach me the basic skills needed in working life, such as the ability to concentrate for 5 minutes on one and the same thing. Having worked on a unified theory for 28 years and produced 15 books, I am optimistic about achieving the required skills during the trainee period. I am however afraid that I cannot be a trainee for the rest of my life, so the lowest step is still waiting there.

I must say that the last option - and why not also the Star of David option - looks like a fascinating possibility for making my memoirs a best seller. It would be even more wonderful to tell this in Stockholm and conclude with warm thanks to all Finnish colleagues, without whose generous help this great human comedy would not have been possible!

More seriously: I can blame only myself. I should have chosen politics as a tool to make the world better. People suffering political discrimination have Amnesty, but there is nothing like it for scientists who happen to have brains in the wrong country.
Monday, September 4, 2017

Speakable and Unspeakable in Quantum Mechanics by J.S. Bell

John S. Bell is well known because of his development of what is known as Bell's theorem – a proof showing that quantum entanglement means that local causality does not exist. This book, Speakable and Unspeakable in Quantum Mechanics, is a collection of 24 technical and semi-technical papers written by Bell on that topic. Bell's outlook is partially physical and partially philosophical, making these papers quite interesting reading. At this point I would say it's incredibly well-written and accessible, but I remember trying to read this as an undergraduate in the 90's (when there were only 22 papers; I picked this one up because I lost the old one in a postdoc-postdoc transition) and having quite a lot of trouble with it.

Many of the papers seem to be addressed to philosophers, whereas others are standard physics papers. But most of them lie in the no man's land between theoretical physics and the philosophy of science. Many of Bell's concerns run throughout the book, with slight variations from paper to paper. One of them is the incoherence of quantum mechanics:

So long as wave packet reduction is an essential component [of quantum mechanics], so long as we don't know exactly how and when it takes over for the Schrödinger equation, we do not have an exact and unambiguous formulation of our most fundamental theory. And that cannot stand.

In order to have a reasonably scientific quantum theory, you should be able to express exactly when the wavefunction collapses. This is for several reasons, but what Bell really wants to know is this: if I measure the magnetic moment of an electron in a magnetic field, when does the electron decide which S_z state it is in (up or down)? Here are some options, which aren't all of them:

• Does it do so when I turn on the static magnetic field?
• Does it do so when the microwave detection field reaches it?
• Does it do so when the response is felt by the field?
• Does it do so when the inductive current is generated in the pick-up coil?
• Does it do so when the microwave current passes through the diode detector?
• Does it do so when the detector is read by the multimeter?
• Does it do so after the multimeter output is analyzed by the computer?
• Does it do so when the analysis is displayed on the screen?
• Does it do so when the graduate student saves the data?
• Does it do so when the Ph.D. looks at the charts?
• Does it do so when the paper is submitted or accepted?
• Does it do so when the paper is printed or earns an award?

The Ph.D. gag was Bell's favorite sarcastic line in these papers (judging by the number of re-uses), which were drawn from publications like Reviews of Modern Physics, Foundations of Physics, and so on, as well as invited lectures and symposia and book chapters. The important thing is that "measurement," resulting in the collapse of the wavefunction, is an essential part of quantum theory, but it is not well defined theoretically.
In Bell’s words: The Landau-Lifshitz formulation…when applied with good taste and discretion is adequate for all practical purposes,” but it is “still ambiguous in principle about exactly when and exactly how the collapse occurs…”  This is the same problem that led Schrödinger to torture analogical cats late at night in obscure journals.* Furthermore, Bell feels that “highly idealized ‘measurements’ should be replaced by an interaction of continuous, if variable, character.” This is essentially the thing that Aharonov explores in the book that started PhysicsFM off, Quantum Paradoxes. Bell returns again and again to the Einstein-Poldosky-Rosen paradox (EPR, in case I use it again), its reformulation by Bohm into a more physical experiment, and finally, the Aspect Experiment which was the first practical test of the EPR paradox (the introduction to the new edition was written by Alain Aspect himself). The Aspect Experiment really turned Bell’s Theorem into an experiment, but Bell’s theorem was one that elucidated the true importance of what had been an almost forgotten result by Bohm – for the practical reason that no one could figure out how to do the experiment with 1950’s technology. The experiment took entangled photons (rather than electrons in Bohm’s experiment) and looked at their correlations. If you are looking at just up vs. down, clockwise vs. counterclockwise, and so on, then the correlations are fairly simple and come directly from conservation laws. However, when you tilt the detectors with respect to each other, the classical and quantum predictions diverge in such a way that an inspired and talented experimental physicist can tickle out the subtle differences. And when he did that experiment, Alain Aspect fount that quantum mechanics won and Bell’s theorem implied that local causality** was lost. And at that point, “the concept of ‘reality’ [became] an embarrassing one for many physicists,” according to Bell. Much of the book also discusses the interpretation of quantum mechanics. Bell looks at interpretations differently than most. In “Six Possible Worlds of Quantum Mechanics,” Bell categorizes theories into a 3 x 2 matrix. Bell’s three main categories are a no-nonsense measurement-based approach that doesn’t attempt to understand what is happening between measurements, that the wavefunction collapse is a real thing that happens to the quantum system and changes it, and that there are two or more subsystems in any quantum system that account for wave-particle duality (hidden variables). The “x2” breaks three interpretation into unromantic and romantic pairs. The romantic dual makes the interpretation interesting without adding any true meaning. Thus, you have this practical approach being paired with the Bohrian Copenhagen interpretation where the universe holds complimentary views, macroscopic and microscopic, simultaneously. The collapse interpretation is paired with a Wignerian dualistic interpretation where it is the act of intelligent observation that collapses the wave function. The de Broglie-Bohm hidden variable interpretation is paired with Everett’s multiversal interpretation where each possible way in which something can happen does happen – just in another universe. This is a very different view of Everett. Specifically, Bell’s interpretation of the many-worlds interpretation is to say that the many-worlds part is inessential. 
It is comforting, he says, to cosmologists (because it allows them to ignore the collapse of the universal wavefunction), but the additional "worlds" don't add any new physics or understanding of what is happening. So, he says, if you strip the romantic multiverse from Everett, you have a nonlinear hidden variable theory, possibly different from the one conjured by Bohm. I've never seen anyone else say that. To everyone else the many worlds of the many-worlds interpretation are the point.

The most annoying gripe Bell makes is to continually harp on his theory of "Be-ables," which would be a subset of quantum mechanical observables with certain properties that make things less weird. I don't think it helps as much as he thinks it does, and it certainly wasn't clear what the difference was, other than terminology, in the first half-dozen papers he mentioned them in.

In sum, I very much like this book. It is wonderfully written, physically insightful, and historically important. Many of the points, especially those from lectures, are very much Bell's own thoughts, sometimes just his own thoughts that no one else thinks (beables), but even there he is trying to make points about the unsuitability of a quantum theory without refinements that tell us what several of these mathematical objects that we use refer to in the physical world.

* Well, not really obscure. But still.

** "Local causality" might seem to be a strange combination of words, but it is what we normally think of as causality. First, if P causes Q, then P occurs before Q. Second, if P causes Q, it should be close enough to affect Q by special relativity. That is, P is close enough to Q that light can travel from P to Q. It really is what you'd think of as causality in relativity theory.
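To make the tilted-detector point above concrete, here is a minimal numerical sketch (my own illustration, not from the book or the review). It assumes the standard quantum correlation E(a, b) = cos 2(a - b) for polarization-entangled photon pairs and the canonical CHSH angle settings; any locally causal (local hidden-variable) model must satisfy |S| <= 2.

```python
import math

def E(a, b):
    """Quantum polarization correlation for an entangled photon pair,
    with analyser angles a and b in radians: E = cos 2(a - b)."""
    return math.cos(2 * (a - b))

# Aligned analysers give the 'simple' perfect correlation from conservation laws.
assert abs(E(0.0, 0.0) - 1.0) < 1e-12

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models obey |S| <= 2 (the Bell/CHSH inequality).
a, a2 = 0.0, math.pi / 4              # Alice's settings: 0 and 45 degrees
b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's settings: 22.5 and 67.5 degrees
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S, 2 * math.sqrt(2))  # both ~2.828: the quantum value violates the bound 2
```

The violation by a factor of sqrt(2) at these tilted settings is exactly the kind of subtle difference that Aspect's experiment teased out.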
How Mathematicians Think by William Byers

To many outsiders, mathematicians appear to think like computers, grimly grinding away with a strict formal logic and moving methodically--even algorithmically--from one black-and-white deduction to another. Yet mathematicians often describe their most important breakthroughs as creative, intuitive responses to ambiguity, contradiction, and paradox. A unique examination of this less-familiar aspect of mathematics, How Mathematicians Think reveals that mathematics is a profoundly creative activity and not just a body of formalized rules and results. Nonlogical qualities, William Byers shows, play an essential role in mathematics. Ambiguities, contradictions, and paradoxes can arise when ideas developed in different contexts come into contact. Uncertainties and conflicts do not impede but rather spur the development of mathematics. Creativity often means bringing apparently incompatible perspectives together as complementary aspects of a new, more subtle theory. The secret of mathematics is not to be found only in its logical structure. The creative dimensions of mathematical work have great implications for our notions of mathematical and scientific truth, and How Mathematicians Think provides a novel approach to many fundamental questions. Is mathematics objectively true? Is it discovered or invented? And is there such a thing as a "final" scientific theory? Ultimately, How Mathematicians Think shows that the nature of mathematical thinking can teach us a great deal about the human condition itself.

Published by Princeton University Press. ISBN: 9781400833955.

Book Preview

Turning on the Light

A FEW YEARS AGO the PBS program Nova featured an interview with Andrew Wiles. Wiles is the Princeton mathematician who gave the final resolution to what was perhaps the most famous mathematical problem of all time—the Fermat conjecture. The solution to Fermat was Wiles's life ambition. When he revealed a proof in that summer of 1993, it came at the end of seven years of dedicated work on the problem, a degree of focus and determination that is hard to imagine.¹ He said of this period in his life, "I carried this thought in my head basically the whole time. I would wake up with it first thing in the morning, I would be thinking about it all day, and I would be thinking about it when I went to sleep. Without distraction I would have the same thing going round and round in my mind."²

In the Nova interview, Wiles reflects on the process of doing mathematical research, comparing it to stumbling around a dark, unexplored mansion: you bump into the furniture until you gradually learn where each piece is, and then one day you find the light switch, turn it on, and suddenly everything is illuminated. This is the way it is! This is what it means to do mathematics at the highest level, yet when people talk about mathematics, the elements that make up Wiles's description are missing. What is missing is the creativity of mathematics—the essential dimension without which there is no mathematics.

Ask people about mathematics and they will talk about arithmetic, geometry, or statistics, about mathematical techniques or theorems they have learned. They may talk about the logical structure of mathematics, the nature of mathematical arguments. They may be impressed with the precision of mathematics, the way in which things in mathematics are either right or wrong.
They may even feel that mathematics captures the truth, a truth that goes beyond individual bias or superstition, that is the same for all people at all times. Rarely, however, do most people mention the doing of mathematics when they talk about mathematics. Unfortunately, many people talk about and use mathematics despite the fact that the light switch has never been turned on for them. They are in a position of knowing where the furniture is, to use Wiles’s metaphor, but they are still in the dark. Most books about mathematics are written with the aim of showing the reader where the furniture is located. They are written from the point of view of someone for whom the light switch has been turned on, but they rarely acknowledge that without turning on the switch the reader will forever remain in the dark. It is indeed possible to know where the furniture is located without the light switch having ever been turned on. Locating the furniture is a relatively straightforward, mechanical task, but turning on the light is of another order entirely. One can prepare for it, can set the stage, so to speak, but one can neither predict nor program the magical moment when things click into place. This book is written in the conviction that we need to talk about mathematics in a way that has a place for the darkness as well as the light and, especially, a place for the mysterious process whereby the light switch gets turned on. Almost everyone uses mathematics of some kind almost every day, and yet, for most people, their experience of mathematics is the experience of driving a car—you know that if you press on the gas the car will go forward, but you don’t have any idea why. Thus, most people are in the dark with respect to the mathematics that they use. This group includes untold numbers of students in mathematics classrooms in elementary schools, high schools, and universities around the world. It even includes intelligent people who use fairly sophisticated mathematics in their personal or professional lives. Their position, with respect to the mathematics they use every day, is like that of the person in the dark room. They may know where certain pieces of furniture are located, but the light switch has not been turned on. They may not even know about the existence of light switches. Turning on the light switch, the aha! experience, is not something that is restricted to the creative research mathematician. Every act of understanding involves the turning on of a light switch. Conversely, if the light has not gone on, then one can be pretty certain that there is no understanding. If we wish to talk about mathematics in a way that includes acts of creativity and understanding, then we must be prepared to adopt a different point of view from the one in most books about mathematics and science. This new point of view will examine the processes through which new mathematics is created and old mathematics is understood. When mathematics is identified with its content, it appears to be timeless. The new viewpoint will emphasize the dynamic character of mathematics—how it is created and how it evolves over time. In order to arrive at this viewpoint, it will be necessary to reexamine the role of logic and rigor in mathematics. This is because it is the formal dimension of mathematics that gives it its timeless quality. The other dimension—the developmental—will emerge from an examination of situations that have spawned the great creative advances in mathematics. 
What are the mechanisms that underlie these advances? Do they arise out of the formal structure or is there some other factor at work? This new point of view will turn our attention away from the content of the great mathematical theories and toward questions that are unresolved, that are in flux and problematic. The problematic is the context within which mathematical creativity is born. People are so motivated to find answers that they sometimes neglect the boundaries of the known, where matters have not settled down, where questions are more meaningful than answers. This book turns matters around; that is, the problematic is regarded as the essence of what is going on. The consequence is that much of the traditional way of looking at mathematics is radically changed. Actual mathematical content does not change, but the point of view that is developed with respect to that content changes a great deal. And the implications of this change of viewpoint will be enormous for mathematics, for science, and for all the cultural projects that get their worldview, directly or indirectly, from mathematics. Now, not everyone thinks that such a change in viewpoint is necessary or even desirable. There are eminent spokespeople for an opposing point of view, one that maintains that The ultimate goal of mathematics is to eliminate all need for intelligent thought.³ This viewpoint, one that is very influential in the artificial intelligence community, is that progress is achieved by turning creative thought into algorithms that make further creativity unnecessary. What is an algorithm? Webster’s New Collegiate Dictionary defines it to be a procedure for solving a mathematical problem in a finite number of steps that frequently involves the repetition of an operation. So an algorithm breaks down some complex mathematical task into a sequence of more elementary tasks, some of which may be repeated many times, and applies these more elementary operations in a step-by-step manner. We are all familiar with the simple mathematical algorithms for addition or multiplication that we learned in elementary school. But algorithms are basic to all of computer programming, from Google’s search procedures to Amazon’s customer recommendations. Today the creation of algorithms to solve problems is extremely popular in fields as diverse as finance, communications, and molecular biology. Thus the people I quoted in the above paragraph believe that the essence of what is going on in mathematics is the creation of algorithms that make it unnecessary to turn on the light switch. There is no question that some of the greatest advances in our culture involve the creation of algorithms that make calculations into mechanical acts. Because the notion of an algorithm underlies all of computer programming, algorithms are given a physical presence in computers and other computational devices. The evident successes of the computer revolution have moved most people’s attention from the creative breakthroughs of the computer scientists and others who create the algorithms to the results of these breakthroughs. We lose track of the how and the why of information technology because we are so entranced with what these new technologies can do for us. We lose track of the human dimension of these accomplishments and imagine that they have a life independent of human creativity and understanding. The point of view taken in what follows is that the experience Wiles describes is the essence of mathematics. 
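To make the dictionary definition quoted above concrete, here is a minimal sketch (the example is mine, not Byers's): Euclid's procedure for the greatest common divisor, perhaps the oldest nontrivial algorithm, which solves a mathematical problem in a finite number of steps by repeating one elementary operation.

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
    until the remainder is zero.  One elementary operation, repeated a finite
    number of times - exactly the dictionary's sense of 'algorithm'."""
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(1071, 462))  # 21, reached after just a few repetitions
```

Running it requires no insight; the insight went into devising it, which is just the contrast between algorithmic work and turning on the light that Byers is drawing.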
It is of the utmost importance for mathematics, for science, and beyond that for our understanding of human beings, to develop a way of talking about mathematics that contains the entire mathematical experience, not just some formalized version of the results of that experience. It is not possible to do justice to mathematics, or to explain its importance in human culture, by separating the content of mathematical theory from the process through which that theory is developed and understood. Mathematics has something to teach us, all of us, whether or not we like mathematics or use it very much. This lesson has to do with thinking, the way we use our minds to draw conclusions about the world around us. When most people think about mathematics they think about the logic of mathematics. They think that mathematics is characterized by a certain mode of using the mind, a mode I shall henceforth refer to as algorithmic. By this I mean a step-by-step, rule-based procedure for going from old truths to new ones through a process of logical reasoning. But is this really the only way that we think in mathematics? Is this the way that new mathematical truths are brought into being? Most people are not aware that there are, in fact, other ways of using the mind that are at play in mathematics. After all, where do the new ideas come from? Do they come from logic or from algorithmic processes? In mathematical research, logic is used in a most complex way, as a constraint on what is possible, as a goad to creativity, or as a kind of verification device, a way of checking whether some conjecture is valid. Nevertheless, the creativity of mathematics—the turning on of the light switch—cannot be reduced to its logical structure. Where does mathematical creativity come from? This book will point toward a certain kind of situation that produces creative insights. This situation, which I call ambiguity, also provides a mechanism for acts of creativity. The ambiguous could be contrasted to the deductive, yet the two are not mutually exclusive. Strictly speaking, the logical should be contrasted to the intuitive. The ambiguous situation may contain elements of the logical and the intuitive, but it is not restricted to such elements. An ambiguous situation may even involve the contradictory, but it would be wrong to say that the ambiguous is necessarily illogical. Of course, it is not my intention to produce some sort of recipe for creativity. On the contrary, my argument is precisely that such a recipe cannot exist. This book directs our attention toward the problematic and the ambiguous because these situations so often form the contexts that produce creative insights. Normally, the development of mathematics is reconstructed as a rational flow from assumptions to conclusions. In this reconstruction, the problematic is avoided, deleted, or at best minimized. What is radical about the approach in this book is the assertion that creativity and understanding arise out of the problematic, out of situations I am calling ambiguous. Logic abhors the ambiguous, the paradoxical, and especially the contradictory, but the creative mathematician welcomes such problematic situations because they raise the question, What is going on here? Thus the problematic signals a situation that is worth investigating. The problematic is a potential source of new mathematics. How a person responds to the problematic tells you a great deal about them. Does the problematic pose a challenge or is it a threat to be avoided? 
It is the answer to this question, not raw intelligence, that determines who will become the successful researcher or, for that matter, the successful student. In preparing to write this introduction, I went back to reread the introductory remarks in that wonderful and influential book, The Mathematical Experience. I was struck by the following paragraph: "I started to talk to other mathematicians about proof, knowledge, and reality in mathematics and I found that my situation of confused uncertainty was typical. But I also found a remarkable thirst for conversation and discussion about our private experiences and inner beliefs." I've had the same experience. People want to talk about mathematics but they don't. They don't know how. Perhaps they don't have the language, perhaps there are other reasons. Many mathematicians usually don't talk about mathematics because talking is not their thing—their thing is the doing of mathematics. Educators talk about teaching mathematics but rarely about mathematics itself. Some educators, like scientists, engineers, and many other professionals who use mathematics, don't talk about mathematics because they feel that they don't possess the expertise that would be required to speak intelligently about mathematics. Thus, there is very little discussion about mathematics going on. Yet, as I shall argue below, there is a great need to think about the nature of mathematics. What is the audience for a book that unifies the content with the doing of mathematics? Is it restricted to a few interested mathematicians and philosophers of science? This book is written in the conviction that what is going on in mathematics is important to a much larger group of people, in fact to everyone who is touched one way or another by the mathematization of modern culture. Mathematics is one of the primary ways in which modern technologically based culture understands itself and the world around it. One need only point to the digital revolution and the advent of the computer. Not only are these new technologies reshaping the world, but they are also reshaping the way in which we understand the world. And all these new technologies stand on a mathematical foundation. Of course the mathematization of culture has been going on for thousands of years, at least from the times of the ancient Greeks. Mathematization involves more than just the practical uses of arithmetic, geometry, statistics, and so on. It involves what can only be called a culture, a way of looking at the world. Mathematics has had a major influence on what is meant by truth, for example, or on the question, What is thought? Mathematics provides a good part of the cultural context for the worlds of science and technology. Much of that context lies not only in the explicit mathematics that is used, but also in the assumptions and worldview that mathematics brings along with it. The importance of finding a way of talking about mathematics that is not obscured by the technical difficulty of the subject is perhaps best explained by an analogy with a similar discussion for physics and biology. Why should nonphysicists know something about quantum mechanics? The obvious reason is that this theory stands behind so much modern technology. However, there is another reason: quantum mechanics contains an implicit view of reality that is so strange, so at variance with the classical notions that have molded our intuition, that it forces us to reexamine our preconceptions.
It forces us to look at the world with new eyes, so to speak, and that is always important. As we shall see, the way in which quantum mechanics makes us look at the world—a phenomenon called complementarity—has a great deal in common with the view of mathematics that is being proposed in these pages. Similarly, it behooves the educated person to attempt to understand a little of modern genetics not only because it provides the basis for the biotechnology that is transforming the world, but also because it is based on a certain way of looking at human nature. This could be summarized by the phrase, You are your DNA or, more explicitly, DNA is nothing less than a blueprint—or, more accurately, an algorithm or instruction manual—for building a living, breathing, thinking human being.⁴ Molecular biology carries with it huge implications for our understanding of human nature. To what extent are human beings biological machines that carry their own genetic blueprints? It is vital that thoughtful people, scientists and nonscientists alike, find a way to address the metascientific questions that are implicit in these new scientific and technological advances. Otherwise society risks being carried mindlessly along on the accelerating tide of technological innovations. The question about whether a human being is mechanically determined by their blueprint of DNA has much in common with the question raised by our approach to mathematics, namely, Is mathematical thought algorithmic? or Can a computer do mathematics? The same argument that can be made for the necessity to closely examine the assumptions of physics and molecular biology can be made for mathematics. Mathematics has given us the notion of proof and algorithm. These abstract ideas have, in our age, been given a concrete technological embodiment in the form of the computer and the wave of information technology that is inundating our society today. These technological devices are having a significant impact on society at all levels. As in the case of quantum mechanics or molecular biology, it is not just the direct impact of information technology that is at issue, but also the impact of this technological revolution on our conception of human nature. How are we to think about consciousness, about creativity, about thought? Are we all biological computers with the brain as hardware and the mind defined to be software? Reflecting on the nature of mathematics will have a great deal to contribute to this crucial discussion. The three areas of modern science that have been referred to above all raise questions that are interrelated. These questions involve, in one way or another, the intellectual models—metaphors if you will—that are implicit in the culture of modern science. These metaphors are at work today molding human beings’ conceptions of certain fundamental human attributes. It is important to bring to conscious awareness the metascientific assumptions that are built into these models, so that people can make a reasonable assessment of these assumptions. Is a machine, even a sophisticated machine like a computer, a reasonable model for thinking about human beings? Most intelligent people hesitate even to consider these questions because they feel that the barrier of scientific expertise is too high. 
Thus, the argument is left to the experts, but the fact is that the experts do not often stop to consider such questions for two reasons: first, they are too busy keeping up with the accelerating rate of scientific development in their field to consider philosophical questions; second, they are insiders to their fields and so have little inclination to look at their fields from the outside. In order to have a reasonable discussion about the worldview implicit in certain scientific disciplines, it would therefore be necessary to carry a dual perspective; to be inside and outside simultaneously. In the case of mathematics, this would involve assuming a perspective that arises from mathematical practice—from the actual doing of mathematics—as well as looking at mathematics as a whole as opposed to discussing some specific mathematical theory. What is it that makes mathematics mathematics? What are the precise characteristics that make mathematics into a discipline that is so central to every advanced civilization, especially our own? Many explanations have been attempted. One of these sees mathematics as the ultimate in rational expression; in fact, the expression the light of reason could be used to refer to mathematics. From this point of view, the distinguishing aspect of mathematics would be the precision of its ideas and its systematic use of the most stringent logical criteria. In this view, mathematics offers a vision of a purely logical world. One way of expressing this view is by saying that the natural world obeys the rules of logic and, since mathematics is the most perfectly logical of disciplines, it is not surprising that mathematics provides such a faithful description of reality. This view, that the deepest truth of mathematics is encoded in its formal, deductive structure, is definitely not the point of view that this book assumes. On the contrary, the book takes the position that the logical structure, while important, is insufficient even to begin to account for what is really going on in mathematical practice, much less to account for the enormously successful applications of mathematics to almost all fields of human thought. This book offers another vision of mathematics, a vision in which the logical is merely one dimension of a larger picture. This larger picture has room for a number of factors that have traditionally been omitted from a description of mathematics and are translogical—that is, beyond logic—though not illogical. Thus, there is a discussion of things like ambiguity, contradiction, and paradox that, surprisingly, also have an essential role to play in mathematical practice. This flies in the face of conventional wisdom that would see the role of mathematics as eliminating such things as ambiguity from a legitimate description of the worlds of thought and nature. As opposed to the formal structure, what is proposed is to focus on the central ideas of mathematics, to take ideas—instead of axioms, definitions, and proofs—as the basic building blocks of the subject and see what mathematics looks like when viewed from that perspective. The phenomenon of ambiguity is central to the description of mathematics that is developed in this book. In his description of his own personal development, Alan Lightman says, Mathematics contrasted strongly with the ambiguities and contradictions in people. The world of people had no certainty or logic.⁵ For him, mathematics is the domain of certainty and logic. 
On the other hand, he is also a novelist who realized that the ambiguities and complexities of the human mind are what give fiction and perhaps all art its power. This is the usual way that people divide up the arts from the sciences: ambiguity in one, certainty in the other. I suggest that mathematics is also a human, creative activity. As such, ambiguity plays a role in mathematics that is analogous to the role it plays in art—it imbues mathematics with depth and power. Ambiguity is intrinsically connected to creativity. In order to make this point, I propose a definition of ambiguity that is derived from a study of creativity.⁶ The description of mathematics that is to be sketched in this book will be a description that is grounded in mathematical practice—what mathematicians actually do—and, therefore, must include an account of the great creativity of mathematics. We shall see that many creative insights of mathematics arise out of ambiguity, that in a sense the deepest and most revolutionary ideas come out of the most profound ambiguities. Mathematical ideas may even arise out of contradiction and paradox. Thus, eliminating the ambiguous from mathematics by focusing exclusively on its logical structure has the unwanted effect of making it impossible to describe the creative side of mathematics. When the creative, open-ended dimension is lost sight of, and, therefore, mathematics becomes identified with its logical structure, there develops a view of mathematics as rigid, inflexible, and unchanging. The truth of mathematics is mistakenly seen to come exclusively from a rigid, deductive structure. This rigidity is then transferred to the domains to which mathematics is applied and to the way mathematics is taught, with unfortunate consequences for all concerned. Thus, there are two visions of mathematics that seem to be diametrically opposed to one another. These could be characterized by emphasizing the light of reason, the primacy of the logical structure, on the one hand, and the light that Wiles spoke of, a creative light that I maintain often emerges out of ambiguity, on the other (this is itself an ambiguity!). My job is to demonstrate how mathematics transcends these two opposing views: to develop a picture of mathematics that includes the logical and the ambiguous, that situates itself equally in the development of vast deductive systems of the most intricate order and in the birth of the extraordinary leaps of creativity that have changed the world and our understanding of the world. This is a book about mathematics, yet it is not your average mathematics book. Even though the book contains a great deal of mathematics, it does not systematically develop any particular mathematical subject. The subject is mathematics as a whole—its methodology and conclusions, but also its culture. The book puts forward a new vision of what mathematics is all about. It concerns itself not only with the culture of mathematics in its own right, but also with the place of mathematics in the larger scientific and general culture. The perspective that is being developed here depends on finding the right way to think about mathematical rigor, that is, logical, deductive thought. Why is this way of thinking so attractive? In our response to reason, we are the true descendants of the Greek mathematicians and philosophers. For us, as for them, rational thought stands in contrast to a world that is all too often beset with chaos, confusion, and superstition.
The dream of reason is the dream of order and predictability and, therefore, of the power to control the natural world. The means through which we attempt to implement that dream are mathematics, science, and technology. The desired end is the emergence of clarity and reason as organizational principles of the entire cosmos, a cosmos that of course includes the human mind. People who subscribe to this view of the world might think that it is the role of mathematics to eliminate ambiguity, contradiction, and paradox as impediments to the success of rationality. Such a view might well equate mathematics with its formal, deductive structure. This viewpoint is incomplete and simplistic. When applied to the world in general, it is mistaken and even dangerous. It is dangerous because it ignores one of the most basic aspects of human nature—in mathematics or elsewhere—our aesthetic dimension, our originality and ability to innovate. In this regard let us take note of what the famous musician, Leonard Bernstein, had to say: ambiguity . . . is one of art’s most potent aesthetic functions. The more ambiguous, the more expressive.⁷ His words apply not only to music and art, but surprisingly also to science and mathematics. In mathematics, we could amend his remarks by saying, the more ambiguous, the more potentially original and creative. If one wishes to understand mathematics and plumb its depths, one must reevaluate one’s position toward the ambiguous (as I shall define it in Chapter 1) and even the paradoxical. Understanding ambiguity and its role in mathematics will hint at a new kind of organizational principle for mathematics and science, a principle that includes classical logic but goes beyond it. This new principle will be generative—it will allow for the dynamic development of mathematics. As opposed to the static nature of logic with its absolute dichotomies, a generative principle will allow for the existence of mathematical creativity, be it in research or in individual acts of understanding. Thus ambiguity will force a reevaluation of the essence of mathematics. Why is it important to reconsider mathematics? The reasons vary from those that are internal to the discipline itself to those that are external and affect the applications of mathematics to other fields. The internal reasons include developing a description of mathematics, a philosophy of mathematics if you will, that is consistent with mathematical practice and is not merely a set of a priori beliefs. Mathematics is a human activity; this is a triumph, not a constraint. As such, it is potentially accessible to just about everyone. Just as most people have the capacity to enjoy music, everyone has some capacity for mathematics appreciation. Yet most people are fearful and intimidated by mathematics. Why is that? Is it the mathematics itself that is so frightening? Or is it rather the way in which mathematics is viewed that is the problem? Beyond the valid internal reasons to reconsider the nature of mathematics, even more compelling are the external reasons—the impact that mathematics has, one way or another, on just about every aspect of the modern world. Since mathematics is such a central discipline for our entire culture, reevaluating what mathematics is all about will have many implications for science and beyond, for example, for our conception of the nature of the human mind itself. 
Mathematics provided humanity with the ideal of reason and, therefore, a certain model of what thinking is or should be, even what a human being should be. Thus, we shall see that a close investigation of the history and practice of mathematics can tell us a great deal about issues that arise in philosophy, in education, in cognitive science, and in the sciences in general. Though I shall endeavor to remain within the boundaries of mathematics, the larger implications of what is being said will not be ignored. Mathematics is one of the most profound creations of the human mind. For thousands of years, the content of mathematical theories seemed to tell us something profound about the nature of the natural world—something that could not be expressed in any way other than the mathematical. How many of the greatest minds in history, from Pythagoras to Galileo to Gauss to Einstein, have held that God is a mathematician. This attitude reveals a reverence for mathematics that is occasioned by the sense that nature has a secret code that reveals her hidden order. The immediate evidence from the natural world may seem to be chaotic and without any inner regularity, but mathematics reveals that under the surface the world of nature has an unexpected simplicity—an extraordinary beauty and order. There is a mystery here that many of the great scientists have appreciated. How does mathematics, a product of the human intellect, manage to correspond so precisely to the intricacies of the natural world? What accounts for the extraordinary effectiveness of mathematics? Beyond the content of mathematics, there is the fact of mathematics. What is mathematics? More than anything else, mathematics is a way of approaching the world that is absolutely unique. It cannot be reduced to some other subject that is more elementary in the way that it is claimed that chemistry can be reduced to physics. Mathematics is irreducible. Other subjects may use mathematics, may even be expressed in a totally mathematical form, but mathematics has no other subject that stands in relation to it in the way that it stands in relation to other subjects. Mathematics is a way of knowing—a unique way of knowing. When I wrote these words I intended to say "a unique human way of knowing." However, it now appears that human beings share a certain propensity for number with various animals.⁸ One could make an argument that a tendency to see the world in a mathematical way is built into our developmental structure, hard-wired into our brains, perhaps implicit in elements of the DNA structure of our genes. Thus mathematics is one of the most basic elements of the natural world. From its roots in our biology, human beings have developed mathematics as a vast cultural project that spans the ages and all civilizations. The nature of mathematics gives us a great deal of information, both direct and indirect, on what it means to be human. Considering mathematics in this way means looking not merely at the content of individual mathematical theories, but at mathematics as a whole. What does the nature of mathematics, viewed globally, tell us about human beings, the way they think, and the nature of the cultures they create? Of course, the latter, global point of view can only be seen clearly by virtue of the former. You can only speak about mathematics with reference to actual mathematical topics. Thus, this book contains a fair amount of actual mathematical content, some very elementary and some less so. 
The reader who finds some topic obscure is advised to skip it and continue reading. Every effort has been made to make this account self-contained, yet this is not a mathematics textbook—there is no systematic development of any large area of mathematics. The mathematics that is discussed is there for two reasons: first, because it is intrinsically interesting, and second, because it contributes to the discussion of the nature of mathematics in general. Thus, a subject may be introduced in one chapter and returned to in subsequent chapters. It is not always appreciated that the story of mathematics is also a story about what it means to be human—the story of beings blessed (some might say cursed) with self-consciousness and, therefore, with the need to understand the natural world and themselves. Many people feel that such a human perspective on mathematics would demean it in some way, diminish its claim to be revealing absolute, objective truth. To anticipate the discussion in Chapter 8, I shall claim that mathematical truth exists, but is not to be found in the content of any particular theorem or set of theorems. The intuition that mathematics accesses the truth is correct, but not in the manner that it is usually understood. The truth is to be found more in the fact than in the content of mathematics. Thus it is consistent, in my view, to talk simultaneously about the truth of mathematics and about its contingency. The truth of mathematics is to be found in its human dimension, not by avoiding this dimension. This human story involves people who find a way to transcend their limitations, about people who dare to do what appears to be impossible and is impossible by any reasonable standard. The impossible is rendered possible through acts of genius—this is the very definition of an act of genius, and mathematics boasts genius in abundance. In the aftermath of these acts of genius, what was once considered impossible is now so simple and obvious that we teach it to children in school. In this manner, and in many others, mathematics is a window on the human condition. As such, it is not reserved for the initiated, but is accessible to all those who have a fascination with exploring the common human potential. We do not have to look very far to see the importance of mathematics in practically every aspect of contemporary life. To begin with, mathematics is the language of much of science. This statement has a double meaning. The normal meaning is that the natural world contains patterns or regularities that we call scientific laws and mathematics is a convenient language in which to express these laws. This would give mathematics a descriptive and predictive role. And yet, to many, there seems to be something deeper going on with respect to what has been called the unreasonable effectiveness of mathematics in the natural sciences.⁹ Certain of the basic constructs of science cannot, in principle, be separated from their mathematical formulation. An electron is its mathematical description via the Schrödinger equation. In this sense, we cannot see any deeper than the mathematics. This latter view is close to the one that holds that there exists a mathematical, Platonic substratum to the real world. We cannot get closer to reality than mathematics because the mathematical level is the deepest level of the real. It is this deeper level that has been alluded to by the brilliant thinkers that were mentioned above. This deeper level was also what I meant by calling mathematics irreducible. 
Our contemporary civilization has been built upon a mathematical foundation. Computers, the Internet, CDs, and DVDs are all aspects of a digital revolution that is reshaping the world. All these technologies involve representing the things we see and hear, our knowledge, and the contents of our communications in digital form, that is, reducing these aspects of our lives to a common numerical basis. Medicine, politics, and social policy are all increasingly expressed in the language of the mathematical and statistical sciences. No area of modern life can escape from this mathematization of the world. If the modern world stands on a mathematical foundation, it behooves every thoughtful, educated person to attempt to gain some familiarity with the world of mathematics. Not only with some particular subject, but with the culture of mathematics, with the manner in which mathematicians think and the manner in which they see this world of their own creation. What is my purpose in writing this book? Where do the ideas come from? Obviously, I think that the ideas are important because the point of view from which they are written is unusual. But putting aside the content of the book for a moment, there is also an important personal reason for me. This book weaves together two of the most important strands in my life. One strand is mathematics: I have spent a good part of the last forty years doing the various things that a university mathematician does—teaching, research, and administration. When I look back at my motivation for going into mathematics, what appealed to me was the clarity and precision of the kind of thinking that doing mathematics called for. However, clarity was not a sufficient condition for doing research. Research required something else—a need to understand. This need to understand often took me into realms of the obscure and the problematic. How, I asked myself, can one find one description of mathematics that unifies the logical clarity of formal mathematics with the sense of obscurity and flux that figures so prominently in the doing and learning of mathematics? The second strand in my life was and is a strenuous practice of Zen Buddhism. Zen helped me confront aspects of my life that went beyond the logical and the mathematical. Zen has a reputation for being antilogical, but that is not my experience. My experience is that Zen is not confined to logic; it does not see logic as having the final word. Zen demonstrates that there is a way to work with situations of conflict, situations that are problematic from a normal, rational point of view. The rational, for Zen, is just another point of view. Paradox, in Zen, is used constructively as a way to direct the mind to subverbal levels out of which acts of creativity arise. I don’t think that Zen has anything to say about mathematics per se, but Zen contains a viewpoint that is interesting when applied to mathematics. It is a viewpoint that resonates with many interesting things that are happening in our culture. They all involve moving away from an absolutist position, whether this means a distrust of all ideologies or a rejection of absolute, objective, and timeless truth. For me, this means that ambiguity, contradiction, and paradox are an essential part of mathematics—they are the things that keep it changing and developing. They are the motor of its endless creativity. In the end, I found that these two strands in my life—mathematics and Zen—fit together very well indeed.
I expect that you, the reader, will find this voyage beyond the boundaries of the rational to be challenging because it requires a change in perspective; but for that very reason I hope that you will find it exciting. Ambiguity opens up a world that is never boring because it is a world of continual change and creativity. The book is divided into three sections. The first, The Light of Ambiguity, begins by introducing the central notion of ambiguity. Actually one could look at the entire book as an exploration of the role of ambiguity in mathematics, as an attempt to come to grips with the elusive notion of ambiguity. In order to highlight my contention that the ambiguous always has a component of the problematic about it, I spend a couple of chapters talking about contradiction and paradox in mathematics. These chapters also enable me to build up a certain body of mathematical results, so that even readers who are a little out of touch with mathematics can get up to speed, so to speak. The second section is called The Light as Idea. It discusses the nature of ideas in mathematics—especially those ideas that arise out of situations of ambiguity. Of course the creative process is intimately tied to the birth and the processing of mathematical ideas. Thus thinking about ideas as the fundamental building blocks of mathematics (as opposed to the logical structure, for example) pushes us toward a reevaluation of just what mathematics is all about. This section demonstrates that even something as problematic as a paradox can be the source of a productive idea. Furthermore, I go on to claim that some of the most profound ideas in mathematics arise out of situations that are characterized not by logical harmony but by a form of extreme conflict. I call the ideas that emerge out of these extreme situations great ideas, and a good deal of the book involves a discussion of such seminal ideas. The third section, The Light and the Eye of the Beholder, considers the implications of the point of view that has been built up in the first two sections. One chapter is devoted to a discussion of the nature of mathematical truth. Is mathematics absolutely true in some objective sense? For that matter, what do we mean by objectivity in mathematics? Thinkers of every age have attested to the mystery that lies at the heart of the relationship between mathematics and truth. My ambiguous approach leads me to look at this mystery from a perspective that is a little unusual. Finally, I spend a concluding chapter discussing the fascinating and essential question of whether the computer is a reasonable model for the kind of mathematical activity that I have discussed in the book. Is mathematical thought algorithmic in nature? Is the mind of the mathematician a kind of software that is implemented on the hardware that we call the brain? Or is mathematical activity built on a fundamental and irreducible human creativity—a creativity that comes
Deep Blue

This text is a common-sense introduction to the key concepts in quantum physics. It recycles what I consider to be my more interesting posts, but combines them in a comprehensive structure. For those who’d like to read it in an e-book format, I also published it on Amazon/Kindle, and summarized it online on another site. In fact, I recommend reading online, because the e-books do not have the animations: click the link for the shorter version, or continue here. [Note that the shorter version is more recent and has an added chapter on the physical dimensions of the real and imaginary component of the wavefunction, which I think is quite innovative – but I will let you judge that.] What I write – here and on the other site – is a bit of a philosophical analysis of quantum mechanics as well, as I will – hopefully – do a better job than others in distinguishing the mathematical concepts from what they are supposed to describe, i.e. physical reality. In the end, you’ll still be left with lots of unanswered questions. However, that’s quite OK, as the late Richard Feynman—who surely knew quantum physics as few others did—was of the opinion that he himself did not understand the topic the way he would like to understand it. That’s what draws all of us to quantum physics: a common search for understanding, rather than knowledge alone. So let’s now get on with it. Please note that, while everything I write is common sense, I am not saying this is going to be easy reading. I’ve written much easier posts than this—treating only aspects of the whole theory. But this is the whole thing, and it’s not easy to swallow. In fact, it may well be too big to swallow as a whole. 🙂 But please do give it a try. I wanted this to be an intuitive but formally correct introduction to quantum math. However, when everything is said and done, you are the only one who can judge if I reached that goal.

I. The scene: spacetime

Any discussion of physics – including quantum physics – should start with a discussion of our concepts of space and time, I think. So that’s what I’ll talk about first.

Space and time versus spacetime

Because of Einstein, we now know that our time and distance measurements are relative. Think of time dilation and relativistic length contraction here. Minkowski’s famous introduction to his equally famous 1908 lecture – in which he explained his ideas on four-dimensional spacetime, which enshrined Einstein’s special relativity theory in a coherent mathematical framework – sums it all up: “The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” Minkowski was not stating the obvious when he said this back in 1908: in his lecture, he talked about four-vectors and Lorentzian inner products. To be precise, he talked about his spacetime concept, which is… Well… About preserving the invariance of four-vectors and other rather non-intuitive stuff. If you want a summary: Minkowski’s spacetime framework is a mathematical structure which incorporates relativistic time dilation, length contraction and mass increase. Phew! That’s quite a mouthful, isn’t it? But we’ll need that non-intuitive stuff. Minkowski did not merely say that space and time are related. He went way beyond. Because it’s obvious that space and time are related, somehow. That has always been obvious. The thinking of Einstein and Minkowski and their likes was radical because they told us we should think very differently about the way space and time are related: they’re related in a non-intuitive way! However, to set the scene, let’s first talk about the easy relations, i.e.
the Galilean or Newtonian concepts of time and space—which actually go much further back in time than Galileo or Newton, as evidenced by the earliest philosophical definitions we have of space and time. Think of Plato and Aristotle, for example. Plato set the stage, indeed, by associating both concepts with motion: we can only measure distance, or some time interval, with a reference to some velocity, and the concept of velocity combines both the time as well as the space dimension: v = Δx/Δt. Plato knew that ratio. He knew a lot about math, and he knew all about Zeno’s paradoxes which, from a mathematical point of view, can only really be refuted by introducing modern calculus, which includes the concepts of continuity, infinite series, limits and derivatives. In the limit, so for the time interval going to 0 (Δt → 0), the Δx/Δt ratio becomes a derivative, indeed, and the velocity becomes an instantaneous velocity: v = dx/dt. Sorry for introducing math here, but math is the language in which physics is expressed, so we can’t do without it. We will also need Pythagoras’ formula, which is shown below in a way which is somewhat less obvious than usual. [The diagram – an angle-addition construction – is not reproduced here; note that the length of the longest side of the upper triangle is one in that diagram.] So the Greek philosophers already knew that time and distance were only ‘mere shadows’ of something more fundamental—and they knew what: motion. All is motion. Force is motion. Energy is motion. Momentum is motion. Action is motion. To be precise: force is measured as a change in motion, and all of the other concepts I just mentioned – i.e. energy, momentum and action – just combine force with time, distance, or both. So they’re all about motion too! I’ll come back to that in the next section. Let’s first further explore the classical ideas. To help you – or, let me re-phrase that, to help you help yourself 🙂 – you should try to think of defining time in any other way—I mean in another way than referring to motion. You may want to start with the formal definition of time of the International Bureau of Weights and Measures here, which states that one second is “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the Caesium-133 atom at rest at a temperature of 0 K.” Where’s the reference to motion? Well… Radiation is the more scientific word for light, i.e. a wave with a propagation speed that’s equal to… Well… The speed of light. So that’s 299,792,458 meter per second, precisely. So… Yes. Time is defined by motion. Let’s pause here. 299,792,458 meter per second precisely? How do we know that? The answer is quite straightforward: because the meter, as the distance unit in the International System of Units (SI), is defined as the distance traveled by light in the vacuum in 1/299,792,458 of a second. So there you go: both our time as well as our distance unit are defined by referring to each other—with the concept of the velocity of some wave as an intermediary. Let me be precise here: the definitions of time and distance reflect each other, so to speak, as they both depend on the invariance of the speed of light, as postulated by Einstein: space and time may be relative, but the speed of light isn’t. We’ll always measure it as c = 299,792,458 m/s, and so that’s why it defines both our time as well as our distance unit. Let me insert that great animation here, which shows the relativistic transformation of spacetime.
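The animation itself cannot be reproduced in this version of the text, but a little numerical stand-in makes the same point. A minimal sketch in Python (assuming numpy is available – the boost velocity of 0.6c is an illustrative choice, not a number from the text): apply a Lorentz boost to a few events on the light-cone diagonals x = ±c·t and check that they stay on those diagonals.

    import numpy as np

    beta = 0.6                              # boost velocity as a fraction of c (illustrative choice)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)    # the Lorentz factor

    def boost(t, x):
        # Lorentz boost along x, in natural units (c = 1): (t, x) -> (t', x')
        return gamma * (t - beta * x), gamma * (x - beta * t)

    for t in [1.0, 2.0, 3.0]:
        for x in (t, -t):                   # events on the diagonals x = +t and x = -t
            t2, x2 = boost(t, x)
            print(f"({t}, {x}) -> ({t2:.2f}, {x2:.2f}), x'/t' = {x2/t2:+.0f}")

Every event gets squeezed along one diagonal and stretched along the other, but the x'/t' ratio comes out as +1 or −1 every time: the diagonals map onto themselves.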
I’ll come back to it later, but note how the central diagonals – which reflect the constant speed of light – are immovable: c is absolute. The speed of light is the same in whatever reference frame, inertial or moving. Hence, the point is: what matters is how an object (or a wave) moves (or propagates) in spacetime. Space and time on their own are just mathematical notions—constructions of the mind. The physics are in the movement, or in the action—as I will show. Immanuel Kant had already concluded that back in 1770, when he wrote the following: [Kant’s quote is not reproduced here.] He could have written the same in regard to time. [I know Kant actually did think of time in very much the same way, but I didn’t bother to look up the quote for time. You can google it yourself.]

God, Newton, Lorentz and the absolute speed of light

Let me, after all of the philosophy above, tell you a light-hearted – and hopefully not too sacrilegious – story about the absolute speed of light which, as you know, inspired Einstein and Minkowski to come up with their new theory—which, since Einstein published it more than a hundred years ago, has been validated in every possible way. It goes like this. If you were God, and you had to regulate the Universe by putting a cap on speed, how would you do that? First, you would probably want to benchmark speed against the fastest thing in the Universe, which is the photon. So you’d want to define speed as some ratio, v/c. And so that would be some ratio between 0 and 1. So that’s our definition of v now: it’s that ratio. And then you’d want to put a speed limiter on everything else, so you’d burden them with an intricate friction device, so as to make sure the friction goes up progressively as speed increases. So you would not want something linear. No. You want the friction to become infinite as v goes to 1 (i.e. c). So that’s one thing. You’d also want a device that can cope with everything: tiny protons, cars, spaceships, solar systems. Whatever. The speed limit applies to all. But you don’t need much force to accelerate a proton as compared to, say, a Mercedes. So now you go around and talk to your engineers. One of them, Newton, will tell you that, when applying a force to an object, the acceleration you get will be inversely proportional to its mass. He writes it down like this: F = m·a, and tells you to go to Lorentz. Lorentz listens to it and then shows you one of these online graphing tools and puts some formulas in. The graphs look like this. [The figure is not reproduced here, but the sketch below reproduces the three curves.] This is an easy formula that does the trick, Lorentz says. The red one is for m = 1/2, the blue one for m = 1, and the green one for m = 3. In the beginning, nothing much happens: you pick up speed but your mass doesn’t change. But then the friction kicks in, and it does so very progressively as the speed gets closer to the speed of light. Now, you look at this, and tell Lorentz you don’t want to discriminate, because it looks like you’re punishing the green thing more than the blue or the red thing. But Lorentz says that isn’t the case. His factor is the same per unit mass. So those graphs are the product of mass and his Lorentz factor, which is represented by the blue line—as that’s the one for m = 1. Now, you think that’s looking good, but then you hesitate. You’re having second thoughts, and you tell Lorentz you don’t want to change the Laws of the Universe, as that would be messy. More importantly, it would upset Newton, who’s pretty fussy about God tampering with stuff. But Lorentz tells you it’s not a problem: Newton’s F = m·a will still work, he says.
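Before Lorentz goes on, a quick aside: reproducing his three curves takes only a few lines. This is a sketch of mine, not the original figure, assuming numpy and matplotlib are available; the formula plotted is the one Lorentz is about to show Newton, mv = m0/√(1−v²):

    import numpy as np
    import matplotlib.pyplot as plt

    v = np.linspace(0.0, 0.999, 500)       # velocity as a fraction of c
    lorentz = 1.0 / np.sqrt(1.0 - v**2)    # the Lorentz factor

    for m, color in [(0.5, "red"), (1.0, "blue"), (3.0, "green")]:
        plt.plot(v, m * lorentz, color=color, label=f"m = {m}")
    plt.xlabel("v (as a fraction of c)")
    plt.ylabel("mv = m times the Lorentz factor")
    plt.legend()
    plt.show()

You’ll see exactly what the story describes: nothing much happens at low speeds, and then all three curves shoot up to infinity as v approaches 1. Back to the story.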
We just need to distinguish two mass concepts: the mass at rest, and the mass at velocity v. Just put a subscript: mv, and then you use that in Newton’s formula, so we write: F = mv·a. But so you don’t want to do things in a rush – fixing the Universe is not an easy job 🙂 – and so you go back to Newton and show him that graph and the F = mv·a. You expected him to shout at you, but… No. Something weird happens: Newton agrees! In principle, at least. He just wants to see a formula for mv. It’s funny, because he’s known for being skeptical and blocking new ideas, because he thinks the Universe is good as it is. And so now you’re really worried and ask Newton why he’d agree. Now Newton looks at you and starts a complicated story about relativity. He says he’s been watching some guys on Earth – Michelson and Morley – who’ve recently proved, experimentally, that his relativity theory – i.e. Newtonian relativity (OK—it’s actually a theory which Galileo had already formalized) – is wrong. Dead wrong. Newton looks really upset about it, so you try to console him. You say: how can it be wrong, if everything is working just fine? But then Newton starts rambling about limits and black bodies and other stuff you’ve never heard about. To make a long story short, you take Newton to Lorentz, and Lorentz shows his formula to Newton: mv = m0/√(1−v²). Newton looks at it intensely. You can almost hear him thinking. And then his face lights up. That’s it, he says. This fixes it. Go ahead. And so… Well… That’s it. That’s the story. It explains why the whole Universe, everything that’s got mass, has a Lorentz device now. 🙂 […] OK. Let’s get serious again. The point is: Einstein’s relativity theory is based on the experimentally established fact that the speed of light is absolute. The experiment is the 1887 Michelson-Morley experiment and, while it’s Einstein who explained the phenomenon in a more comprehensive theory (the special relativity theory) 18 years later (in 1905, to be precise), Hendrik Antoon Lorentz, a Dutch physicist, had already shown, in 1892, that all of the weird consequences of the Michelson-Morley experiment – i.e. relativistic time dilation, length contraction and mass increase – could be explained by that simple 1/√(1−v²) factor, which is why it’s referred to as the Lorentz factor, rather than the Einstein factor. 🙂 Mathematical note: The 1/√(1−v²) factor will – or should – remind you of the equation for a circle: y = √(1−x²), so you may think we could use that formula for an alternative Lorentz factor, which is shown below, first separately and then next to the actual Lorentz factor. The second graph shows the obvious differences between the two, and why the actual Lorentz factor works. [The two graphs are not reproduced here.] Indeed, the red graph, i.e. the 1−√(1−v²) formula (which is what it is because the origin of the whole circle that’s being described by this formula is the (0, 1) point, rather than the origin of our x and y axes), does not do the trick, because its value for v = 1 is not infinity (∞) but 1. And now that we’re talking math, I want you to think about something else: while we measure the speed of light as c = 299,792,458 m/s – always and everywhere – we just may want to think of that speed as being unimaginably big and, hence, we may want to equate it with infinity. So then we’d have a very different scale for our velocity. To be precise, we’d have a non-linear scale, as equating c with ∞ would amount to stretching the [0, 1] or [0, c] segment of our horizontal axis (i.e.
our velocity scale here) infinitely, so the interval now morphs into [0, ∞]. [I know the mathematicians will cry wolf here, and they should—because there’s a real wolf here.] So we would also have to stretch our curve—which can be done, obviously. There is no problem here, is there? It’s just an alternative scale, right? Well… Think about it. I’ll let you find the wolf yourself. 🙂 Hint: think about the [0, ∞] notation and Zeno’s paradoxes. 🙂 […] You probably skipped that little exercise above, didn’t you? You shouldn’t. Because there’s something very deep about the finite speed of light. It’s just like Nature challenges our mathematical concept of infinity. Likewise, the quantum of action challenges the mathematical notion of the infinitesimally small, i.e. the very notion of a differential. At the same time, we know that, without those notions, math becomes meaningless. Think about this. And don’t skip exercises. 🙂 Let me now talk about what I wanted to talk about here: dimensions. This term may well be one of the most ambiguous in all of the jargon. It’s a term like space: you should always ask: what space? There are so many definitions of a space—both physical as well as mathematical. So let me be clear: I am talking physical dimensions here, so that’s SI units like second, or joule, or kilogram. So I am not talking the x, y and z dimensions of Cartesian coordinate space here: those are mathematical dimensions. I must assume you are familiar with physical dimensions. I must assume, for example, that you know that energy is force over some distance, so the joule (i.e. the energy unit) is the newton·meter (so we write: 1 J = 1 N·m). Likewise, I must assume you know that (linear) momentum is force over some time: 1 N·s = 1 kg·m/s. [I know you’re used to thinking of momentum as mass times velocity, but just play a bit with Newton’s F = m·a (force = mass times acceleration) and you’ll see it all works out: 1 N = 1 kg·m/s², so 1 kg = 1 N·s²/m.] Energy and momentum are concepts that get most of the attention because they are used in conservation laws, which we can use to analyze and solve actual physical problems. The less well known concept of action is actually more intuitive but it is associated with a physical principle that your high-school teacher did not teach you: the principle of least action. I’ll talk about action a lot, and you know that Planck’s constant is the quantum of action. Not the quantum of energy, or of momentum. The dimension of action is newton·meter·second (N·m·s). I’ll also talk about angular momentum and the associated concept of spin—later—but you should note its dimension is newton·meter·second (N·m·s), so that’s the same dimension as action. It’s somewhat strange no one thought of associating the name of a scientist with the unit of momentum (linear or angular), or with the unit of action. I mean: we have joules, watts, coulombs, pascals, volts, lamberts, einsteins (check it on the Web) and many more, but so we do not have a shorthand for the N·s or N·m·s unit. Strange but… Well… It sure makes it easier to understand things, as you immediately see what’s what now. It is these physical dimensions (as opposed to mathematical dimensions) that make physical equations very different from mathematical equations, even if they look the same. Think about Einstein’s E = m·c² equation, for example. If, in math, we write b = m·n, then we mean b is equal to m·n, because we’re just talking numbers here. And when two numbers are equal, then they are really equal. Why?
Because their ‘numericality’ is the only quality they have. That’s not the case for physical equations. For example, Einstein’s mass-energy equivalence relationship does not mean that mass is equal to energy. It implies energy has an equivalent mass which, in turn, means that some of the mass of a non-elementary particle (like an atom or – since we know protons consist of quarks – a proton) is explained by the moving bits inside. The mass-energy equivalence relationship implies a lot of things. For example, it implies we can measure mass in J·s²/m² units rather than in kg, which is nice. However, it does not imply that, ontologically, mass is energy. In fact, look at the dimensions here: we have joule on one side, and J·s²/m² on the other. So E = m·c² is definitely not like writing b = m·n. If mass were equal to energy, we’d just have a scaling factor between the two, i.e. a relation like E = m·c or something. We would not have a squared factor in it: we’d have a simple proportionality relation, and so we do not have that, and that’s why we say it’s an equivalence relation, not an identity. Let me make another quick note here. It’s a misconception to think that, when an atomic bomb explodes, it somehow converts the mass of some elementary particle into energy, because it doesn’t: the energy that’s released is the binding energy that keeps quarks together, but the quarks survive the blast. I know you’ll doubt that statement, but it’s true: even the concept of pair-wise annihilation of matter and anti-matter doesn’t apply to quarks—or not in the way we can observe electron-positron annihilation. But then I said I wouldn’t talk about quarks, and so I won’t. The point is: the (physical) dimensions give physical equations some meaning. Think of re-writing the E = m·c² equation as E/c = m·c, for example. If you know anything about physics, this will immediately remind you that this equation represents the momentum of a photon: E/c = p = m·c, from which we get the equivalent mass of a photon: m = p/c = E/c². So the energy concept in that E = m·c² equation is quite specific because we can now relate it to the mass of a photon even if a photon is a zero-mass particle. Of course, that means it has zero rest mass (or zero rest energy), and so it’s only movement, and that movement represents energy which, in turn, gives it an equivalent mass, which is why a large star (our Sun, for example) bends light as it passes. But, again, I need to move on. Just think when you see a physical equation, OK? Treat it with some more respect than a mathematical equation. The math will help you to see things, but don’t confuse the physics with the math. Let me inject something else here. You may or may not know that the physical dimension of action and angular momentum is the same: it’s newton·meter·second. However, it’s obvious that the two concepts are quite different, as action may or may not be linear, while angular momentum is definitely not a linear concept. OK. That’s enough. I don’t want this to become a book. 🙂 Onwards! So I made it clear you should know the basic concepts – like energy and momentum, and the related conservation laws in physics – before you continue reading. Having said that, you may not be as familiar with that concept I mentioned above: the concept of (physical) action. Of course, you’ll say that you know what action is, because you do a lot of exercise. 🙂 But… Well… No. I am talking something else here. Or… Well… Maybe not.
You can actually relate it to what you’re doing in the gym: pushing weights along a certain distance during a certain time. And, no, it’s not your wattage: watt is energy per second, so that’s N·m/s, not N·m·s. I was actually very happy to hear about the concept when I first stumbled on it, because I always felt energy, and momentum, were both lacking some aspect of reality. Energy is force times distance, and momentum is force times time—so it’s only logical to combine all three: force, distance, and time. So that’s what the concept of action is all about: action = force × distance × time. As mentioned above, it’s weird no one seems to have thought of a shorthand for this unit which, as mentioned above, is expressed in N·m·s. I’d call it the Planck because it’s the dimension of Planck’s constant, i.e. the quantum of action—even if that may lead to confusion because you’ve surely heard about Planck units, which are something else. Well… Maybe not. Let me write it out. Planck’s constant is the product of the Planck force, the Planck distance (or Planck length), and the Planck time: ħ = FP·lP·tP ≈ (1.21×10⁴⁴ N)·(1.62×10⁻³⁵ m)·(5.39×10⁻⁴⁴ s) ≈ 1.0545718×10⁻³⁴ N·m·s. [By the way, note how huge the Planck force is—in contrast to the unimaginably small size of the Planck distance and time units. We’ll talk about these units later.] The N·m·s unit is particularly useful when thinking about complicated trajectories in spacetime, like the one below. [The illustration – a meandering trajectory – is not reproduced here.] Just imagine the forces on it as this object decelerates, changes direction, accelerates again, etcetera. Imagine it’s a spaceship. To trace its journey in spacetime, you would keep track of the distance it travels along this weird trajectory, and the time, as it moves back and forth between here and there. That would actually enable you to calculate the forces on it, that make it move as it does. So, yes, force, distance and time. 🙂 In mathematical terms, this implies you’d be calculating the value of a line integral, i.e. an infinite sum over differentials along the trajectory. I won’t say too much about it, as you should already have some kind of feel for the basics here. Let me, to conclude this section, note that the ħ = FP·lP·tP equation implies two other relations: 1. ħ = EP·tP, i.e. the product of the Planck energy and the Planck time; 2. ħ = pP·lP, i.e. the product of the Planck momentum and the Planck length. In fact, we might be tempted to define the Planck units like this, but then we get the Planck constant from experiment, and you should never ever forget that—just like you should never ever forget that the invariable (or absolute) speed of light is an experimental fact as well. In fact, it took a whole generation of physicists to accept what came out of these two experiments, and so don’t think they wanted it to be this way. In fact, Planck hated his own equation initially: he just knew it had to be true and, hence, he ended up accepting it. 🙂 Note how it all makes sense. We can now, of course, take the ratio of the two equations and we get: ħ/ħ = 1 = (EP·tP)/(pP·lP) ⇔ EP/pP = lP/tP = vP = c = 1. So here we define the Planck velocity as EP/pP = lP/tP = vP and, of course, it’s just the speed of light. 🙂 Now, as we’re talking units here, I should make a small digression on these so-called natural units we’ll use so often.

Natural units

You probably already know what natural units are—or, at least, you may think you know.
There are various sets of natural units, but the best known – and most widely used – is the set of Planck units, which we introduced above, and it’s probably those you’ve heard about already. We’re going to use them a lot in this document. In fact, I’ll write things like: “We’re using natural units here and so c = ħ = 1.” Or something like: “We’ll measure mass (or energy, or momentum) in units of ħ.” Now that is highly confusing. How can a velocity – which is expressed in m/s – be equal to some amount of action – which is expressed in N·m·s? As for the second statement, does that mean we’re thinking of mass (or energy, or momentum) as some countable variable, like m = 1, 2, 3, etcetera? Let me take the first question first. The answer to that one is the one I gave above: they are not equal. Relations like this reflect some kind of equivalence, not some equality. I once wrote the following: Space and time appear as separate dimensions to us, but they’re intimately connected through c, ħ and the other fundamental physical constants. Likewise, the real and imaginary part of the wavefunction appear as separate dimensions too, but they’re also intimately connected through π and Euler’s number, i.e. through mathematical constants. That statement says it all but it is not very precise. Expressing our physical laws and principles using variables measured in natural units makes us see things we wouldn’t see otherwise, so they help us in getting some better understanding. They show us proportionality relations and equivalence relations which are difficult to see otherwise. However, we have to use them carefully, and we must always remember that Nature doesn’t care about our units, so whatever units we use for our expressions, they describe the same physics. Let’s give an example. When writing that c = 1, we can also write the following: E = m·c² = m·c = m = p for v = c = 1. In fact, that’s an expression I’ll use a lot. However, this expression is nonsensical if you don’t think about the dimensions: m·c² is expressed in kg·(m/s)², m·c in kg·m/s, and m in kg only. Hence, the relation above tells us the values of our E and m variables are numerically equal, but it does not tell us that energy, mass and momentum are the same, because they obviously aren’t. They’re different physical concepts. So what does it mean when we say we measure our variables in units of ħ? To explain that, I must explain how we get those natural units. Think about it for yourself: how would you go about it? The first thing you might say is that the absolute speed of light implies some kind of metaphysical proportionality relation between time and distance, and so we would want to measure time and distance in so-called equivalent units, ensuring the velocity of any object is always measured as a v/c ratio. Equating c with 1 will then ensure this ratio is always measured as some number between 0 and 1. That makes sense but the problem is we can do this in many ways. For example, we could measure distance in light-seconds, i.e. the distance traveled by light in a second, i.e. 299,792,458 meter, exactly. Let’s denote that unit by lc. Now we keep the second as our time unit but we’ll just denote it as tc so as to signal we’ve switched from SI units to… Well… Light units. 🙂 It’s easy to check it works: c = (299,792,458 meter)/(1 second) = (1 lc)/(1 tc) = 1 vc. Huh? One vc? Yep.
I could have put 1, but I just wanted to remind you the physical dimension of our physical equations is always there, regardless of the mathematical manipulations we let loose on them. OK. So far so good. The problem is we can define an infinite number of sets of light units. The Planck units we mentioned are just one of them. They make that c = 1 equation work too: c = (1 lP)/(1 tP) = 1 vP = (1.62×10⁻³⁵ m)/(5.39×10⁻⁴⁴ s) ≈ 299,792,458 m/s. [If you do the calculation using the numbers above, you’ll get a slightly different number but that’s because these numbers are not exact: I rounded the values for the Planck time and distance. You can google the exact values and you’ll see it all works out.] So we need some more constraints on the system to get a unique set of units. How many constraints do we need? Now that is a complicated story, which I won’t tell you here. What are the ‘most fundamental constants in Nature’? To calculate Planck units, we use five constraints, and they’re all written as c = 1 or ħ = 1 equations. To be precise, we get the Planck units by equating the following fundamental constants in Nature with 1, and then we just solve that set of equations to get our Planck units in terms of our old and trusted SI units:
• c: the speed of light (299,792,458 m/s);
• ħ: the reduced Planck constant, which we use when we switch from hertz (the number of cycles per second) as a measure of frequency (like in E = h·f) to so-called angular frequencies (like in E = ħ·ω), which are much more convenient to work with from a math point of view: ħ = h/2π;
• G: the universal gravitational constant (6.67384×10⁻¹¹ N·(m/kg)²);
• ke: Coulomb’s constant (ke = 1/4πε0); and, finally,
• kB: the Boltzmann constant, which you may not have heard of, but it’s as fundamental a constant as all the others.
[When seeing Boltzmann’s name, I always think about his suicide. I can’t help thinking he would not have done that if he had known that Planck would include his constant as part of this select Club of Five. He was, without any doubt, much ahead of his time but, unfortunately, few recognized that. His tombstone bears the inscription of the entropy formula: S = kB·log W. It’s one of these magnificent formulas—as crucial as Einstein’s E = m·c² formula. But… Well… I can’t dwell on it here, as I need to move on.] Note that, when we equate these five constants with 1, we’re re-scaling both unimaginably large numbers (like the speed of light) as well as incredibly small numbers (like h, or G and kB). But then what’s large and small? That’s relative, because large and small are defined here using our SI units, some of which we may judge to be large or small as well, depending on our perspective. In any case, the point is: after solving that set of five equations, we get the so-called ‘natural units’: the Planck length, the Planck time, the Planck energy (and mass), the Planck charge, and the Planck unit of temperature:
• 1 Planck time unit (tP) ≈ 5.4×10⁻⁴⁴ s
• 1 Planck length unit (lP) ≈ 1.6×10⁻³⁵ m
• 1 Planck energy unit (EP) ≈ 1.22×10²⁸ eV = 1.22×10¹⁹ GeV (giga-electronvolt) ≈ 2×10⁹ J
• 1 Planck unit of electric charge (qP) ≈ 1.87555×10⁻¹⁸ C (Coulomb)
• 1 Planck unit of temperature (TP) ≈ 1.416834×10³² K (Kelvin)
Have a look at the values. The Planck time and length units are really unimaginably small—literally! For example, the wavelength of visible light ranges from 380 to 750 nanometer: a nanometer is a billionth of a meter, so that’s 10⁻⁹ m.
Also, hard gamma rays have wavelengths measured in picometer, so that’s 10⁻¹² m. Again, don’t even pretend you can imagine how small 10⁻³⁵ m is, because you can’t: 10⁻¹² and 10⁻³⁵ differ by a factor 10⁻²³. That’s something we cannot imagine. We just can’t. The same reasoning is valid for the Planck time unit (5.4×10⁻⁴⁴ s), which has a (negative) exponent that’s even larger in absolute value. In contrast, we’ve got Planck energy and temperature units that are enormous—especially the temperature unit! Just compare: the temperature of the core of our Sun is 15 to 16 million degrees Kelvin only, so that’s about 1.5×10⁷ K only: that’s 10,000,000,000,000,000,000,000,000 times smaller than the Planck unit of temperature. Strange, isn’t it? Planck’s energy unit is somewhat more comprehensible because, while it’s huge at the atomic or sub-atomic scale, we can actually relate it to our daily life by doing yet another conversion: 2×10⁹ J (i.e. 2 giga-joule) corresponds to 0.5433 MWh (megawatt-hour), i.e. 543 kilowatt-hours! I could give you a lot of examples of how much energy that is but one illustration I particularly like is that 0.5 MWh is equivalent to the electricity consumption of a typical American home over a month or so. So, yes, that’s huge… :-) What about the Planck unit for electric charge? Well… The charge of an electron expressed in Coulomb is about −1.6×10⁻¹⁹ C, so that’s pretty close to 1.87555×10⁻¹⁸ C, isn’t it? To be precise, the Planck charge is approximately 11.7 times the electron charge. So… Well… The Planck charge seems to be something we can imagine at least. What about the Planck mass? Well… Energy and mass are related through the mass-energy equivalence relationship (E = m·c²) and, when you take care of the units, you should find that 2 giga-joule (i.e. the Planck energy unit) corresponds to a Planck mass unit (mP) equal to 2.1765×10⁻⁸ kg. Again, that’s huge (at the atomic scale, at least): it’s like the mass of an eyebrow hair, or a flea egg. But so it’s something we can imagine at least. Let’s quickly do the calculations for the energy and mass of an electron, just to see what we get. The electron mass expressed in Planck units is meP = me/mP = (9.1×10⁻³¹ kg)/(2.1765×10⁻⁸ kg) = 4.181×10⁻²³, which is a very tiny fraction as you can see (just write it out: it’s something with 22 zeroes after the decimal point). Now, when we calculate the (equivalent) energy of an electron, we get the same number. Indeed, from the E = m·c² relation, we know the mass of an electron can also be written as 0.511 MeV/c². Hence, the equivalent energy is 0.511 MeV (in case you wonder, that’s just the same number but without the 1/c² factor). Now, the Planck energy EP (in eV) is 1.22×10²⁸ eV, so we get EeP = Ee/EP = (0.511×10⁶ eV)/(1.22×10²⁸ eV) = 4.181×10⁻²³. So it’s exactly the same number as the electron mass expressed in Planck units. That’s nice, but not all that spectacular either because, when we equate c with 1, then E = m·c² simplifies to E = m, so we don’t need Planck units for that equality. So that’s the real meaning of “measuring energy (or mass) in units of ħ.” What we’re saying is that we’re using a new gauge: Planck units.
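If you want to check all of these numbers, you can just solve that set of five equations yourself. Here’s a minimal sketch in Python – the library and the rounded values of the constants are my own inputs, not something from the original text, so expect small rounding differences:

    import math

    c    = 299_792_458.0       # speed of light (m/s)
    hbar = 1.054_571_8e-34     # reduced Planck constant (N·m·s)
    G    = 6.674_30e-11        # gravitational constant (N·(m/kg)²), rounded
    eps0 = 8.854_187_8e-12     # vacuum permittivity, for ke = 1/(4πε0)
    kB   = 1.380_649e-23       # Boltzmann constant (J/K)

    t_P = math.sqrt(hbar * G / c**5)                 # Planck time   ≈ 5.4×10⁻⁴⁴ s
    l_P = math.sqrt(hbar * G / c**3)                 # Planck length ≈ 1.6×10⁻³⁵ m
    m_P = math.sqrt(hbar * c / G)                    # Planck mass   ≈ 2.18×10⁻⁸ kg
    E_P = m_P * c**2                                 # Planck energy ≈ 2×10⁹ J
    q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)   # Planck charge ≈ 1.88×10⁻¹⁸ C
    T_P = E_P / kB                                   # Planck temperature ≈ 1.4×10³² K
    print(t_P, l_P, m_P, E_P, q_P, T_P)

    # The electron, measured in this new gauge: mass and (equivalent) energy
    # give the same number, as explained above.
    m_e = 9.109_383_7e-31                            # electron rest mass (kg)
    print(m_e / m_P)                                 # ≈ 4.18×10⁻²³
    print((m_e * c**2) / E_P)                        # the same ≈ 4.18×10⁻²³

The last two lines redo the electron calculation: mass and energy, measured in Planck units, print the same 4.18×10⁻²³ — which is the whole point of this new gauge.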
It ensures that, when we measure the energy and the mass of some object, we get the same numerical value, but their dimensions are still very different, as evidenced by the numbers we get when we write it all out:
• meP = me/mP = (9.1×10⁻³¹ kg)/(2.1765×10⁻⁸ kg) = 4.181×10⁻²³
• EeP = Ee/EP = (0.511×10⁶ eV)/(1.22×10²⁸ eV) = 4.181×10⁻²³
You can check it for some other object, like a proton, for example:
• mpP = mp/mP = (1.672622×10⁻²⁷ kg)/(2.1765×10⁻⁸ kg) = 0.7685×10⁻¹⁹ = 7,685×10⁻²³
• EpP = Ep/EP = (938.272×10⁶ eV)/(1.22×10²⁸ eV) = 768.5×10⁻²² = 7,685×10⁻²³
Interesting, isn’t it? If we measure stuff in Planck units, then we know that whenever we measure the energy and/or the mass of whatever object, we’ll always get the same numerical value. Hence, yes, we do confidently write that E = m. But so don’t forget this does not mean we’re talking the same thing: the dimensions are what they are. Measuring stuff in natural units is kinda special, but it’s still physics. Energy is energy. Action is action. Mass is mass. Time is time. Etcetera. 🙂 To conclude this section, I’ll also quickly remind you of what I derived above: ħ = EP·tP and ħ = pP·lP ⇔ EP/pP = lP/tP = vP = c = 1. Of course, this reminds us of the famous E/p = c equation for photons. This, and what I wrote above, may lead you to confidently state that – when using natural units – we should always find the following: E = p. However, that E = p equation is not always true. We only have that E = p relation for particles with zero rest mass. In that case, all of their energy will be kinetic (as they have no rest energy, there is no potential energy in their rest mass), and their velocity will be equal to c = 1 because… Well… The slightest force accelerates them infinitely. So, yes, the E = p relation makes sense for so-called zero-mass particles—by which we mean: zero rest mass, as their momentum translates into some equivalent energy and, hence, some equivalent mass. In case of doubt, just go back to the old-fashioned formulas in SI units. Then that E = p relation becomes mv·c² = mv·v·c, and so now you’re reminded of the key assumption − besides the use of natural units − when you see that E = p thing, which is v = c. To sum it all up, we write: E = p ⇔ v = c = 1. What about the E = m relation? That relation is valid whenever we use natural units, isn’t it? Well… Yes and no. More yes than no, of course—because we don’t get E = m when reverting to SI units and writing it all out: E = m ⇔ mv·c² = mv ⇔ c² = 1, and the latter condition is the case whenever we use natural units, because then c = 1 and, hence, c² will also be equal to one—numerically at least! At this point, I should just give you the relativistically correct equation for relating mass, energy, velocity and momentum. It is the following one: p = (E/c²)·v. Using natural units (so c = 1 and v becomes a relative velocity now, i.e. v/c), this simplifies to: p = v·E. This implies what we wrote above: m·v = v·E ⇔ E = m and E = p if and only if v = c = 1. So just memorize that relativistic formula (using natural units, it’s just p = v·E, and so it’s easy) and the two consequences: E = m (always) ⇐ p = v·E ⇒ E = p if and only if v = c = 1. However, as I will show in a later section (to be precise, as I will show when discussing the wavefunction of an actual electron), we must watch out with our mass concepts—and, consequently, with the energy concepts we use. As I’ve said a couple of times already, mass captures something else than energy, even if we tend to forget that when using that E = m equation too much.
Energy is energy, and mass… Well… Mass is a measure of inertia, and things can become quite complicated here. Let me give an example involving spin. The p = mv·v equation captures linear momentum, but we may imagine some particle – at rest or not – which has angular momentum as well. Think of it as spinning around its own axis at some incredible velocity: its mass will effectively increase, because the energy in its angular momentum will have some equivalent mass. I know what you’ll say: that shouldn’t affect the p = mv·v equation, as our mass factor will incorporate the energy that’s related to the angular momentum. Well… Yes. You’re right, but so that’s why you’ll sometimes see funny stuff, like E = 2m, for example. 🙂 If you see stuff like that, don’t panic: just think! Always ask yourself: whose velocity? What mass? Whose energy? Remember: all is relative, except the speed of light! Another example of how tricky things can be is the following. In the context of Schrödinger’s equation for electrons, I’ll introduce the concept of the effective mass, which, using natural units once more (so v is the relative velocity, as measured against the (absolute) speed of light), I’ll write as meff = m·v², so the effective mass is some fraction (between 0 and 1) of the usual mass factor here. Huh? Yes. Again, you should always think twice when seeing a variable or some equation. In this case, the question is: what mass are we talking about? [I know this is a very nasty example, as the concept of the effective mass pops up only when delving really deep into quantum math, but… Well… I told you I’d give you a formally correct account of it.] Let me give one more example. A really fine paradox, really. When playing with the famous de Broglie relations – a.k.a. the matter-wave equations – you may be tempted to derive the following energy concept: E = m·v². If you want, you can use the ω = E/ħ and k = p/ħ equations. You’ll find the same nonsensical energy formula. Nonsensical? Yes. Think of it. The energy concept in the ω = E/ħ relation is the total energy, so that’s E = m∙c², and m∙c² is equal to m·v² if, and only if, v = c, which is usually not the case because the wavefunction is supposed to describe real-life particles that are not traveling at the speed of light (although we actually will talk first about theoretical zero-mass particles when introducing the topic). So how do we solve this paradox? It’s simple. We’re confusing two different velocity concepts here: the phase velocity of our wavefunction, versus the classical velocity of our particle, which is actually equal to the group velocity of the wavefunction. I know you may not be familiar with these concepts, so just look at the animation below (credit for which must go to Wikipedia): the green dot moves (rather slowly) with the group, so that’s the group velocity. The phase velocity is the velocity of the wave itself, so that’s the red dot. [Wikipedia’s Wave_group.gif animation is not reproduced here, but the sketch below makes the same point with numbers.] So the equation we get out of the two de Broglie equations is E/p = vp, with vp the phase velocity. So the energy concept here is E = vp∙p = vp∙m∙v = m·vp∙v. [Note we’re consistently using natural units, so all velocities are relative velocities measured against c.] Now, when presenting the Schrödinger equation, we’ll show that vp is equal to the reciprocal of v, so vp = 1/v and the energy formula makes total sense: E = m·vp·v = m·(1/v)·v = m. In any case, don’t worry about it now, as we’ll tackle it later.
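Still, if you want a quick numerical version of the paradox and its resolution right away, here is a sketch of mine. It assumes the relativistic dispersion relation ω = √(m² + k²) in natural units (ħ = c = 1, so E = ω and p = k), and the values m = 1 and v = 0.5 are purely illustrative:

    import math

    m = 1.0                         # rest mass (natural units, illustrative)
    v = 0.5                         # classical (group) velocity, as a fraction of c

    E = m / math.sqrt(1.0 - v**2)   # total energy
    p = E * v                       # momentum: the p = v·E relation from above

    omega, k = E, p                 # de Broglie: omega = E/ħ, k = p/ħ, with ħ = 1

    v_phase = omega / k             # phase velocity = E/p = 1/v
    v_group = k / omega             # group velocity dω/dk = k/ω = p/E = v

    print(v_phase, v_group, v_phase * v_group)   # 2.0  0.5  1.0

The phase velocity comes out as 1/v = 2 (faster than light, but nothing physical travels at that speed), the group velocity as v = 0.5, and their product as 1 – which is exactly the vp = 1/v resolution described above.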
Just make a mental note: the use of natural units is usually enlightening, but it does lead to complications from time to time—as we tend to forget what’s behind those simple E = p or E = m relations. In case of trouble or doubt, just revert to SI units and do a dimensional analysis of your equation, or think about that relativistic relation between energy and momentum, and re-derive the consequences. OK. I should really stop here and start the next piece but, as an exercise, think once more about what those Planck units really do. Think about the proton-electron example, for which we found that their mass and energy – as measured in Planck units – were equal to 4.181×10⁻²³ units (for the electron) and 7,685×10⁻²³ units (for the proton) respectively. That just establishes a proportionality relation between the energy and the mass of whatever objects we’d be looking at. Indeed, it implies the following: Ee/me = Ep/mp (= c², which is just 1 when using natural units). This proportionality relation is a very deep fact, and it led many respected physicists to say that mass actually is energy—that they are fundamentally, ontologically, or philosophically the same. Don’t buy that: energy is energy, and mass is mass. Never forget the c² factor, as its physical dimension is still there when using natural units: the c = 1 identity does not make it disappear! Now, to conclude, I should answer the question I started out with. What do we mean when saying something: “We’ll measure mass (or energy, or momentum) in units of ħ.” Unlike what you might think, it does not mean we’re thinking of mass (or energy, or momentum) as some countable variable. No. That’s not what we mean. I am underlining this because I know some of my blog posts seem to suggest that. It’s a complicated matter, because I do like to think that, at the Planck scale, time and distance actually do become discrete (i.e. countable variables), so I do believe that the Planck time and distance units (i.e. tP ≈ 5.4×10⁻⁴⁴ s and lP ≈ 1.6×10⁻³⁵ m) are the smallest time and distance units that make sense. The argument is rather complicated but it’s related to the existence of the quantum of action itself: if ħ is what it is – i.e. some amount of energy that’s being delivered over some time, or some momentum over some distance – then there’s a limit to how small that time and/or distance can be: you can’t increase the energy (and/or momentum) and, simultaneously, decrease the space in which you’re packing all that energy (and/or momentum) indefinitely, as the equivalent mass density will turn your unimaginably tiny little space into a black hole, out of which that energy can’t escape anymore and, hence, it’s no longer consistent with your idea of some energy or some mass moving through spacetime because… Well… It can’t move anymore. It’s captured in the black hole. 🙂 In any case, that’s not what I want to talk about here. The point is: it’s not because we’d treat time and distance as countable variables, that energy and momentum and mass should be treated the same. Let me give an example to show it’s not so simple. We know – from the black-body radiation problem – that the energy of the photons will always be an integer multiple of ħ·ω – so we’ll have E1 = ħ·ω, E2 = 2·ħ·ω,… En = n·ħ·ω = n·h·f. Now, you may think that I’ve just given you an argument as to why energy should be a countable variable as well, but… No. The frequency of the light (f = ω/2π) can take on any value, so ħ·ω = h·f is not something like 1, 2, 3, etcetera. If you want a simpler example, just think of the value we found for the electron mass and/or its energy.
Expressed in Planck units, it’s equal to something like 0.0000000000000000000000418531… and I am not sure what follows. That doesn’t look like something that’s countable, does it? 🙂 However, I am not excluding that, at some very basic level, energy (and momentum) and, hence, mass, might be countable. Why? I am not sure but, in physics, we have this magical number—referred to as alpha (α), but better known as the fine-structure constant for reasons I explain in my posts on it (in which I actually take some of the ‘magic’ out!). It always pops up when discussing physical scales so… Well… Let’s quickly introduce it, although we’re not really going to use it much later.

The Magical Number

Let me first give its value and definition—or definitions (plural), I should say. The most common definition is: α = e²/(4πε0·ħ·c). For the definition of all those physical constants, you can check the Wikipedia article on the fine-structure constant, and then you’ll see that these definitions are essentially all the same. 🙂 It’s easy to see that, using natural units so ε0 = ħ = c = 1, we can write the first equation just as: α = e²/4π. Don’t break your head over it. In fact, in a first reading, you may just want to skip this section and move on to the next. However, let’s go through the motions here and so let me tell you whatever can be told about it. It’s a dimensionless number whose value is equal to something like 7.297352566×10⁻³ ≈ 1/137.0359991… In fact, you’ve probably seen the 1/137 number somewhere, but that’s just an approximation. In any case, it turns out that this fine-structure constant relates all of the fundamental properties of the electron, thereby revealing a unity that, admittedly, we struggle to understand. In that post of mine, I prove the following relationships: (1) α is the square of the electron charge expressed in Planck units: α = eP². Now, this is essentially the same formula as the one above, as it turns out that the electron charge expressed in Planck units is equal to e/qP, with e the elementary charge and qP the Planck charge. You can double-check this yourself, noting that the Planck charge is approximately 11.7 times the electron charge. To be precise, from this equation, it’s easy to see that the factor is 1/√α ≈ 11.706… You can then quickly check the relationship: qP = e/√α. In fact, as you play with these numbers, you’ll quickly notice most of the wonderful relations are just tautologies. [Having said that, when everything is said and done, α is and remains a very remarkable number.] (2) α is also the square root of the ratio of (a) the classical electron radius and (b) the Bohr radius: α = √(re /r). Note that this equation does not depend on the units, in contrast to equation 1 (above), and 4 and 5 (below), which require you to switch to Planck units. It’s the square root of a ratio of two lengths and, hence, the units don’t matter. They fall away. (3) Thirdly, α is the (relative) speed of an electron (in the ground state of a hydrogen atom, in the Bohr model of it): α = v/c. The relative speed is, as usual, the speed as measured against the speed of light. Note that this is also an equation that does not depend on the units, just like (2): we can express v and c in whatever unit we want, as long we’re consistent and express both in the same units. (4) α is also equal to the product of (a) the electron mass and (b) the classical electron radius re (if both are expressed in Planck units, that is): α = me·re. And, of course, because the electron mass and energy are the same when measured in Planck units, we can also write: α = EeP·re.
Now, from (2) and (4), we also get: (5) me = 1/(α·r), i.e. the electron mass is the reciprocal of the product of α and the Bohr radius (everything expressed in Planck units, again). Finally, we can substitute (1) in (5) to get: (6) me = 1/(eP²·r). These relationships are truly amazing and, as mentioned, reveal an underlying unity at the atomic/sub-atomic level that we’re struggling to understand. However, at this point, I need to get back to the lesson. Indeed, I just wanted to jot down some ‘essentials’ here. Sorry I got distracted. :-) As mentioned above, while the number does incorporate a lot of what’s so fascinating about physics, the number is somewhat less magical than most popular writers would want you to believe. As said, you can check out my posts on it for further clues. Because… Well… I really need to move on—otherwise we’ll never get to the truly exciting bits—and to fully solve the riddle of God’s number, you’ll need to understand those more exciting bits, like how we get electron orbitals out of Schrödinger’s equation. 🙂 So… Well… We had better move on so we can get there. 🙂

The Principle of Least Action

The concept of action is related to what might well be the most interesting law in physics—even if it’s possible you’ve never heard about it before: the Principle of Least Action. Let me copy the illustrations from the Master from his introduction to the topic. [The two illustrations – from Feynman’s Lectures – are not reproduced here: one shows a contrived path between two points, the other the actual path.] What’s illustrated above is that the actual motion of some object in a force field (a gravitational field, for example) will follow a path for which the action is least. So that’s the graph on the right-hand side. In contrast, if we add up all the action along the curve on the left-hand side (we do that with a line integral), we’ll get a much higher figure. From what we see, it’s obvious Nature tries to minimize the figures. 🙂 The math behind is not so easy, but one can show that the line integral I talked about is the following: S = ∫ (KE − PE)·dt, i.e. the integral, over time, of the difference between the kinetic and the potential energy. You can check its dimension: the integrand is expressed in energy units, so that’s N·m, while the dt differential is expressed in seconds, so we do get some amount expressed in N·m·s. If you’re interested, I highly recommend reading Feynman’s chapter on it, although it’s not an easy one, as it involves a branch of mathematics you may not be familiar with: the calculus of variations. :-/ In any case, the point to note is the following: if space and time are only ‘shadows’ of something more fundamental – ‘some kind of union of the two’ – then energy and momentum are surely just shadows too: shadows of the more fundamental concept of action. Indeed, we can look at the dimension of Planck’s constant, or at the concept of action in general, in two different ways: [action] = [energy]·[time] = N·m·s, or [action] = [momentum]·[distance] = N·m·s. The bracket symbols [ and ] mean: ‘the dimension of what’s between the brackets’. Now, this may look like kids stuff, but the idea is quite fundamental: we’re thinking here of some amount of action expressing itself in time or, alternatively, expressing itself in space. In the former case, some amount of energy (E) is expended during some time. In the latter case, some momentum (p) is expended over some distance. So we can now think of action in three different ways: 1. Action is force times distance times time; 2. Action is energy times time; 3. Action is momentum times distance. Now, you’ve surely seen the argument of the quantum-mechanical wavefunction: θ = (E/ħ)·t – (p/ħ)·x = (E·t)/ħ – (p·x)/ħ. The E·t factor is energy times time, while the p·x factor is momentum times distance. Hence, the dimension of both is the same: it’s the dimension of physical action (N∙m∙s), but then we divide it by ħ, so we express both factors using a natural unit: Planck’s quantum of action.
So θ becomes some dimensionless number. To be precise, it becomes a number that expresses an angle, i.e. a radian. However, remember how we got it, and also note that, if we'd look at ħ as just some numerical value – a scaling factor with no physical dimension – then θ would keep its dimension: it would be some number expressed in N·m·s. It's an important note in light of what follows: you'll see θ will come to represent the proper time of the object that's being described by the wavefunction and… Well… Just read on. Sorry for jumping the gun all of the time. 🙂

What about the minus sign in-between the (E/ħ)·t and (p/ħ)·x? It reminds us of many things. One is the general shape of the argument of any wavefunction, which we can always write as k·x − ω·t (for a wave traveling in the positive x-direction, that is). In case you don't know where this comes from, check my post on the math of waves. Also, if you're worried about the fact that it's k·x − ω·t rather than ω·t − k·x, remember we have a minus sign in front of θ in the wavefunction itself, which we'll write as a·e^(−iθ) in a moment. Don't worry about it now: just note we've got a proper wavefunction with this minus sign. An argument like this usually represents something periodic, i.e. a proper wave indeed. However, it reminds me also of other things. More difficult things. The minus sign in-between E·t and p·x also reminds me of the so-called variational principle in calculus, which is used to find a function (or a path) that maximizes or, in this case, minimizes the value of some quantity that depends on the function or the path. Think of the Lagrangian: L = T − V. [T represents the kinetic energy of the system, while V is its potential energy. Deep concepts. I'll come back to them.] But let's not digress too much. At this point, you should just note there is a so-called path integral formulation of quantum mechanics, which is based on the least action principle. It's not an easy thing to understand intuitively because it is not based on the classical notion of a single, unique trajectory for some object (or some system). Instead, it allows for an infinity of possible trajectories. However, the principle behind it is as intuitive as the minimum energy principle in classical mechanics.

I am not going to say much about uncertainty at this point, but it's a good place to talk about some basics here. Note that the definition of the second as the duration of 9,192,631,770 periods of the light that's emitted by a Caesium-133 atom going from one energy state to another (because that's what transitioning means) assumes we can effectively measure one period of that radiation. As such, it should make us think: shouldn't we think of time as a countable variable, rather than as a continuous variable? Huh? Yes. I mentioned the question before, when I discussed Planck units. But so here I am hinting at some other reason why we might want to think of time and distance as countable variables. Think of it: perhaps we should just think of time as any other scale: if our scale only allows measurements in centimeters (cm), we'll say that this person is, say, 178 cm ± 1 cm tall, or – assuming we can confidently determine the measurement is closer to 178 cm than to 179 or 177 cm – that his or her height is 178 cm ± 0.5 cm. Anyone who's done surveys or studied a bit of statistics knows it's a complicated matter: we're talking confidence intervals, cut-off values, and so much other stuff. 🙂 But… Yes.
I know what you'll say now: this is not the fundamental Uncertainty – with capital U – that comes with Planck's quantum of action, which we introduced already above. This is something that has to do with our limited powers of observation and all that. Well… Yes. You're right. I was talking inaccuracy above—not uncertainty. But… Well… I have to warn you: at some fundamental level, the inaccuracy in our measurements can actually be related to the Uncertainty Principle, as I explain in my post on diffraction and the Uncertainty Principle. But… Well… Let's first think a bit about inaccuracy then. You'll agree that the definition of the second of the International Bureau of Weights and Measures assumes we can – theoretically, at least – measure whether or not some event lasted as long as 1/9,192,631,770 seconds or, in contrast, if its duration was only 1/9,192,631,769 seconds, or 1/9,192,631,771 seconds. So the inaccuracy that's implied is of the order of 1/9,192,631,770 s ≈ 1/(9.2×10⁹) s ≈ 0.1×10⁻⁹ s, so that's a tenth of a nano-second. That's small but – believe it or not – we can accurately measure even smaller time units, as evidenced by the fact that, for example, scientists are able to confidently state that the mean lifetime of a neutral pion (π⁰) is (8.2±0.24)×10⁻¹⁷ seconds. So the precision here is like an atto-second (10⁻¹⁸ s). To put things into perspective: that's the time it takes for light to travel the length of two hydrogen atoms. Obvious question: can we actually do such measurements? The answer is: yes, of course! Otherwise we wouldn't have measurements like the one above, would we? 🙂 But so how do we measure stuff like that? Simple: we measure it by analyzing the distance over which these pions disintegrate after appearing in some collision, and – importantly – because we know, approximately, their velocity, which we know because we know their mass and their momentum, and because we know momentum is conserved. So… Well… There you go! 🙂

In any case, an atto-second (1×10⁻¹⁸ s) – so that's a billionth of a billionth of a second – is still huge as compared with the Planck time unit, which is equal to tP ≈ 5.4×10⁻⁴⁴ seconds. It's so small that we don't even have a name for such small numbers. Now, I'll talk about the physical significance of the Planck time and distance later, as it's not an easy topic. It's associated with equally difficult concepts, such as the concept of the quantum vacuum, which most writers throw around easily, but which few, if any, bother to accurately define. However, we can already say something about it. The key point to note is that all of our complicated reflections lead us to think that the Planck time unit may be a theoretical limit to how small a time unit can be. You'll find many opinions on this topic, but I do find it very sensible to sort of accept that both time and distance become countable variables at the Planck scale. In other words, spacetime itself becomes discrete at that scale: all points in time, and all points in space, are separated by the Planck time and the Planck length respectively, i.e. tP ≈ 5.4×10⁻⁴⁴ s and lP ≈ 1.6×10⁻³⁵ m. Separated? Separated by what? That's a good question! It shows that our concepts of continuous and discrete spacetime – or the concept of the vacuum versus the quantum vacuum – are not very obvious: if there is such a thing as a fundamental unit of time and distance – which we think is the case – then our mathematical concepts are just what they are: mathematical concepts.
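[Another aside—a sketch of mine, once again: you can compute the Planck time and length yourself from ħ, G and c, just to check the orders of magnitude quoted above.]

```python
# Computing the Planck time and length from first principles, to check the
# orders of magnitude quoted above.
from scipy.constants import hbar, G, c

t_planck = (hbar * G / c**5) ** 0.5   # ~5.39e-44 s
l_planck = (hbar * G / c**3) ** 0.5   # ~1.62e-35 m
print(t_planck, l_planck)
print(l_planck / t_planck)            # = c, of course

# One atto-second, i.e. the precision of that pion lifetime, is still some
# 25 orders of magnitude above the Planck time:
print(1e-18 / t_planck)               # ~1.9e25
```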
So the answer to your question is: if spacetime is discrete, then these discrete points would still be separated by mathematical space. In other words, they would be separated by nothing. 🙂 The next step is to assume that the quantum of action may express itself in time only or, alternatively, in space only. To be clear, what I am saying here is that, at the Planck scale, we may think of a pointlike particle moving in space only, or in time only. I know what you'll say: that's not possible. Everything needs some time to move from here to there. Well… […] Maybe. But maybe not. When you're reading this, you've probably done some reading on quantum-mechanical systems already—like the textbook example of the ammonia two-state system. Think of that model: the nitrogen atom is once here, then there, and then it goes back again. With no travel time. So… Well… If you accepted those models, you've accepted what I wrote above. 🙂 I think it is rather obvious that mathematical space is continuous, because it's an idealization – a creation of our mind – of physical space, which looks like it's fine-grained, i.e. discrete, but at the smallest scale only. In fact, quantum physics tells us the physical space must be fine-grained. Hence, in my view, there's no contradiction here as long as we are clear the language we use to describe reality (i.e. math) is different from reality itself. 🙂

But, if time and distance become discrete or countable variables, and Planck's quantum of action is what it is: a quantum, so action comes in integer multiples of it only, then energy and momentum must come in discrete units as well, right? Maybe. It's not so simple. As I mentioned above, we know, for example, from the black-body radiation problem, that the energy of the photons will always be an integer multiple of ħ·ω – so we'll have E1 = ħ·ω, E2 = 2·ħ·ω,… En = n·ħ·ω = n·h·f. Now, you may think that I've just given you an argument as to why energy should be a countable variable as well, but… No. The frequency of the light (f = ω/2π) can take on any value, so ħ·ω = h·f is not something like 1, 2, 3, etcetera. At this point, we may want to look at the Uncertainty Principle for guidance. What happens if the E and p in our θ = (E/ħ)·t – (p/ħ)·x argument take on very small values, and x and t are measured in Planck units? Does our θ = (E/ħ)·t – (p/ħ)·x argument become a discrete variable? Some countable variable, like 1, 2, 3, etcetera? Let's think of it. We know we should think of the Uncertainty Relations as a pair:

Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2

For example, if x and t are measured in Planck units, then we could imagine that both Δt and Δx will be positive integers, so they can only take on values like 1, 2, 3, etcetera. Let us then, just for argument's sake, equate Δx and Δt with one. Now, let's also assume we measure everything in natural units, so we measure E and p – and, therefore, Δp and ΔE as well – in Planck units, so our Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2 relations now become fascinatingly simple:

Δp ≥ 1/2 and ΔE ≥ 1/2

What does this imply? Does it imply that E and p themselves are discrete variables, to be measured in units of ħ/2 or… Well… 1/2? The answer is simple: No. It doesn't. The Δp ≥ 1/2 equation just implies that the uncertainty about the momentum will be larger than the Planck momentum divided by two. Now, the Planck unit for the momentum is just as phenomenal – from an atomic or sub-atomic perspective, that is – as the Planck energy and mass. To be precise, pP ≈ 6.52485 kg·m/s.
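[If you don't want to google it, here's the one-line calculation—ħ divided by the Planck length—plus Wikipedia's baseball example. A sketch of mine, obviously.]

```python
# The Planck momentum is just hbar divided by the Planck length:
from scipy.constants import hbar, G, c

l_planck = (hbar * G / c**3) ** 0.5   # Planck length, ~1.62e-35 m
print(hbar / l_planck)                # ~6.52485 kg·m/s

# Wikipedia's example: a 145-gram baseball at 45 m/s
print(0.145 * 45)                     # 6.525 kg·m/s — same ballpark, literally
```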
[You can calculate that yourself using the values for the Planck force and Planck time unit above, and then you should just convert the N·s dimension to kg·m/s using Newton's Law. Just do it: this is a useful exercise which will give you some better feel for these quantities.] Now, you can google some examples of what this pP ≈ 6.52485 kg·m/s value corresponds to, but Wikipedia gives a nice example: it corresponds, among other things, to the momentum of a baseball with a mass m of about 145 grams travelling at 45 m/s, or 160 km/h. Now, that's quite something, wouldn't you agree? Let me quickly make a methodological note here: I'll often write things like ΔE ≥ ħ/2 and/or Δp ≥ ħ/2, but you should note that what we mean by this is the following:

• ΔE ≥ (ħ/tP)/2 = EP/2
• Δp ≥ (ħ/lP)/2 = pP/2

Now, hang in there and think about the following. If we would not have any uncertainty, what would our wavefunction look like in our discrete spacetime? That's actually quite simple. We just need to make an assumption in regard to E and p. Let's assume E = EP and p = pP, i.e. our E and p are equal to the Planck energy and the Planck momentum respectively. And please note that's a huge value, as the energy of an electron is only 0.0000000000000000000000418531… times that value! In any case, that assumption implies the argument of our wavefunction can be written as follows:

θ = (E/ħ)·t – (p/ħ)·x = (EP/ħ)·t – (pP/ħ)·x

Now we know that we're going to measure t and x as multiples of tP and lP. Hence, t and x become something like t = n·tP and x = m·lP. In these two formulas, n and m are integers, and they're independent of each other. We can now further simplify the argument of our wavefunction:

θ = (EP/ħ)·t – (pP/ħ)·x = (EP/ħ)·n·tP – (pP/ħ)·m·lP = (EP·tP/ħ)·n – (pP·lP/ħ)·m = (ħ/ħ)·n – (ħ/ħ)·m = n – m

Hence, our elementary wavefunction is now equal to e^(i·(n − m)). Now, that implies you may want to think of the wavefunction—and, yes, I know that I am getting way ahead of myself here because I still need to tell you what the wavefunction actually is—as some infinite set of points like:

• e^(i·(0 − 0)) = e^(i·0) = cos(0) + i∙sin(0)
• e^(i·(1 − 0)) = e^(i·1) = cos(1) + i∙sin(1)
• e^(i·(0 − 1)) = e^(−i·1) = cos(−1) + i∙sin(−1)
• e^(i·(1 − 1)) = e^(i·0) = cos(0) + i∙sin(0)

The graphs hereunder show the results when we calculate the real and imaginary part of this wavefunction for n and m going from 0 to 14 (in steps of 1, of course). The graph on the right-hand side is the cosine value for all possible n = 0, 1, 2,… and m = 0, 1, 2,… combinations, and the left-hand graph depicts the sine values, so that's the imaginary part of our wavefunction. You may still wonder what it represents really. Well… If you wonder what the quantum vacuum could possibly look like, you should probably think of something like the images above. 🙂 Sorry for not being more explicit but I want you to think about these things yourself. 🙂 In any case, I am jumping the gun here, as I'll be introducing the wavefunction only later. Much later. So don't worry if you didn't understand anything here. I just wanted to flag some stuff. You should also note that what separates us from the Planck scale is a very Great Desert, and so I fully agree with the advice of a friend of mine: let's first focus on the things we know, rather than the things we don't know. 🙂 Before we move on, however, I need to note something else. The E and p in the argument of the wavefunction θ = (E/ħ)·t – (p/ħ)·x = (E·t)/ħ – (p·x)/ħ may – in fact, probably will – vary in space and time as well.
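[One more quick aside before I get to that warning: if you want to reproduce those cos(n − m) and sin(n − m) graphs yourself, a few lines of Python—my own sketch, nothing more—will do it.]

```python
# The real and imaginary part of e^(i·(n − m)) for n, m = 0, 1, ..., 14.
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(15)
delta = n[:, None] - n[None, :]             # (n − m) on a 15×15 grid

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.imshow(np.sin(delta), origin='lower')   # imaginary part (left-hand graph)
ax1.set(title='sin(n − m)', xlabel='m', ylabel='n')
ax2.imshow(np.cos(delta), origin='lower')   # real part (right-hand graph)
ax2.set(title='cos(n − m)', xlabel='m', ylabel='n')
plt.show()
```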
I am saying that here, because I want to warn you: some of what follows – in fact, a lot of what follows – assumes that E and p do not vary in space and time. So we often assume that we have no gravitational, electromagnetic or whatever other force fields causing accelerations or decelerations or changes in direction. In fact, I am going to simplify even more: I'll often assume we're looking at some hypothetical particle that has zero rest mass! Now that's a huge simplification—but a useful one.

Spacetime travel

Let's go back to basics and just look at some object traveling in spacetime. As we noted above, time and space are related, but they are still different concepts. So let's be precise and attempt to describe how they differ, and let's try to be exact. As mathematical concepts, we can represent them as coordinate axes, as shown below. Think of the spatial dimension (x) as combining all of classical Cartesian three-dimensional space or, if that's easier, just think of one-dimensional space, like an object moving along a line. I've also inserted a graph, a function, which relates x and t. We can think of it as representing some object moving in spacetime, indeed. Note that the graph doesn't tell us where the object is right now. It only shows us where it was. In fact, we might try to predict where it's going, but you'll agree that's rather difficult with this one. 🙂 Of course, you'll immediately say the trajectory above is not kosher, as our object is traveling back in time in not less than three sections of the graph. [Check this. Do you see where?] You're right. We should not allow that to happen. Not now, at least. 🙂 It's easy to see how we should correct it. We just need to ensure our graph is a well-defined function: for every value of t, we should have one, and only one, value of x. It's easy to see that our concept of time going in one direction, and in one direction only, implies that we should only allow well-behaved functions. [This is rather nice because we've got an explanation for the arrow of time here without us having to invoke concepts of entropy or other references to some physical reality. In case you'd be interested in the topic, please check one of my posts on time reversal and symmetries.] So let's replace our graph by something more kosher traveling in spacetime. Let's take that thing I showed you already:

trajectory 2

You'll say: "It's still a silly function. This thing accelerates, decelerates, and reverses direction all of the time. What is this thing?" I am not sure. It's true it's a weird thing: it only occupies a narrow band of space. But… Well… Why not? It's only as weird as the concept of an electron orbital. 🙂 Indeed, all of the wonderful quantum-mechanical formulas I'll give you can't hide the fact we're thinking of an electron in orbit in pretty much the same way as the object that's depicted in the graph above: as a point-like object that's zooming around and, hence, it's somewhere at some point in time, but we just don't know where it is exactly. 🙂 Of course, I agree we'd probably associate the path of our electron with something more regular, like a sine or cosine function. Here I need to note two things. First, the sine and cosine are essentially the same function: they only differ because of a phase shift: cosφ = sin(φ + π/2). Second, I need to show you Euler's formula. Feynman calls it the 'most remarkable formula in mathematics', and refers to it as 'our jewel'.
It's true, although we can take some magic out of it by constructing it algebraically—but I won't bother you with that here. Let me just show you the formula:

e^(iφ) = cosφ + i·sinφ

Now let me show you a nice animation that illustrates a fundamental idea that we'll exploit in a moment. Think of φ (i.e. the argument of our sine and cosine) as time, and think of the whole thing as a clock, like the rather fancy clock below. [Yes. I know it's turning counter-clockwise at the moment, but that's because of mathematical convention. Just put a minus sign in front of φ if you'd want to fix that.] Watch the red, blue and purple dots on the horizontal axis going back-and-forth. Watch their velocity: they oscillate between −1 and +1 (you can always re-scale), reach maximum velocity at the zero point (i.e. the center), and then decelerate to reverse direction, after which they accelerate in the other direction for another cycle. The points on the horizontal axis really behave like a mass on a spring. Note, for example, that the frequencies of the three waves are all the same: it's just the amplitude that's different (and then there's also a fixed phase difference, as the blue wave is leading the others). You probably saw the formula: x = a·cos(ω·t + Δ), but a and Δ just incorporate the starting conditions (i.e. the initial stretch and the position at t = 0), so let's simplify and just write: x = cos(ω·t). So, yes, φ is time, and the ω is just a scaling factor. What scaling factor? Well… I'll come back to that. For the moment, just note that, for a mass on a spring, it's the so-called natural frequency of the system, and it's equal to ω = ω0 = √(k/m). In this equation, k is the spring constant, which captures the strength of the elastic (restoring) force that's exerted by the spring, and m is the mass of the object that's attached to it. You may also know our simple cosine function solves a differential equation, i.e. an equation involving derivatives. To be precise, the a·cos(ω·t + Δ) solution solves the m·d²x/dt² = −k·x equation. Don't worry too much about it now. However, I do need to note something else we'll also want to think about later. The energy formula for a mass on a spring tells us that the total energy—kinetic (i.e. the energy related to the momentum of that mass, which we'll denote by T) plus potential (i.e. the energy stored in the spring, which we'll denote by U)—is equal to T + U = m·ω0²/2 (for an amplitude equal to one, that is). Just look at this and note that it looks exactly the same as another energy formula you'll probably remember: E = m·v²/2, which describes the kinetic energy of some object in linear motion. Now, the last thing I want to show you here, is that Euler's formula gives us one clock, but two springs, so to speak—as shown below. Wouldn't you agree that a system like this would permanently store an amount of energy that's equal to two times the above-mentioned amount, i.e. 2·m·ω0²/2 = m·ω0²? Now that is a very interesting idea! 🙂 Why? Think about the following. Remember the argument of the wavefunction:

θ = ω·t – k·x = (E/ħ)·t – (p/ħ)·x

Now, we know that the phase velocity of any wave is equal to vp = ω/k = (E/ħ)/(p/ħ) = E/p, so we find that the phase velocity of the amplitude wave (vp) is equal to the E/p ratio. Now, for particles with zero rest mass (like photons, or the theoretical zero-mass spin-0 and spin-1/2 particles I'll introduce shortly), we know that vp = c. Hence, for zero-mass particles we find that the classical velocity of the particle is equal to the speed of light, and that's also the phase velocity of the amplitude wave.
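[Quick numerical check before we move on—my own sketch: simulate the mass on the spring and verify that T + U is, indeed, constant and equal to m·ω0²/2 for a unit amplitude.]

```python
# Simulating the mass on the spring and checking that T + U stays constant
# and equal to m·ω0²/2 for an amplitude a = 1.
import numpy as np

m, k, a = 2.0, 8.0, 1.0
w0 = np.sqrt(k / m)                 # natural frequency, here 2 rad/s

t = np.linspace(0, 10, 5)           # a few sample points in time
x = a * np.cos(w0 * t)              # position
v = -a * w0 * np.sin(w0 * t)        # velocity

T = 0.5 * m * v**2                  # kinetic energy
U = 0.5 * k * x**2                  # potential energy
print(T + U)                        # constant: m·a²·ω0²/2 = 4.0 everywhere
```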
[As for the concept of group velocity, it just doesn't apply here.] Hence, we can write: vp = E/p = (m·c²)/(m·c) = c We just get a tautology. However, when discussing non-zero mass fermions (i.e. actual particles), I'll show that the phase velocity of the wavefunction is equal to c²/v, which simplifies to 1/β, with β = v/c, when using natural units (so c = 1)—but we don't want to use natural units right now:

vp = E/p ⇔ E = vp·p = (c²/v)·(mv·v) = mv·c²

You'll say: so what? E = mv·c²? We know that already, so I am just proving the obvious here, am I not? Well… Yes and no. The mv·c² formula looks just like m·ω0². So we can – and probably should – think of the real and imaginary part of our wavefunction as energy stores: both store half of the total energy of our particle. Isn't that interesting? Of course, the smarter ones amongst you will immediately say the formula doesn't make much sense, because mv·c² = m·ω0² implies that ω0 = c and, hence, we've got a constant angular velocity here, which is not what we should have. Hmm… What can I say? Many things. First, it's true that mv·c² and m·ω0² look similar but, when everything is said and done, the m in m·ω0² does represent an actual mass on an actual spring, doesn't it? And so the k in the ω0 = √(k/m) formula is a very different k than the k in the θ = ω·t − k∙x argument of our wavefunction. Hence, it's true that writing mv·c² = m·ω0² makes somewhat less sense than one would think at first. Secondly, you should also note that the m in the m·ω0² formula is a non-relativistic mass concept, so m would be equal to m0, not mv. Let me first tackle the last remark, which is easy, because it's really not to the point: for non-relativistic speeds, we'd have m0 ≈ mv, so they would not differ very much and, therefore, we should really think of the similarity—or, let's be bolder, the equivalence—between those mv·c² and m·ω0² equations. This brings me to the first remark. The smarter guys amongst you should be much bolder. In the next sections, I will show that we can re-write the argument of the wavefunction as θ = m0·t′. The mass factor is, of course, the inertial mass, so that's the mass of the object as measured in its own (inertial) reference frame, so it's not the mass factor as we see it (that's mv). Likewise, the time variable, which I denote with a prime here (so I write t′ rather than t), is the proper time of the object we're looking at. So… Well… The conclusion is, effectively, that the m·c² and m·ω0² formulas are fully equivalent. Indeed, as I will show in a moment, we can look at the wavefunction as a link function, which sort of projects what's going on in our spacetime to what's going on in the reference frame of the object that we're looking at, and we can, effectively, think of what's going on as some oscillation in two separate but similar energy spaces, which we can, effectively, represent by those two oscillations. The question now becomes a very different one: if ω0 = c, then what does the ω0 = √(k/m) equation correspond to? If we'd really be talking some mass on a spring, then we know that the period and frequency of the oscillation are determined by the size of the mass (m) on that spring and the force constant k, which captures the strength of the restoring force—which is assumed to be proportional to the extension (or compression) of the spring, so we write: F = −k·x, with x the distance from the zero point. However, here we should remind ourselves that we should not take the metaphor too far.
We should not really think we've got some spring in a real space and then a duplicate spring in an imaginary space and our object is not only traveling along some trajectory in our spacetime but – on top of that – also going up and down in that real and imaginary energy space. No. We may also use another metaphor: an electric circuit, for example, may also act as a harmonic oscillator and, in the process, store energy. In that case, the resonant frequency would be given by the ω0 = 1/√(L·C) formula, with L the inductance and C the capacitance of the circuit. In short, we should just think of the resonant frequency as some property of the system we're looking at. In this case, we just find that ω0 = c, which is great, because… Well… When everything is said and done, we can actually look at the constant c as just being some property of spacetime. I've done a few posts on that, notably one commenting on a rather poorly written article by a retired physics professor and so… Well… I won't dwell on it here. OK. Onwards! [Oh… Before I continue, let me give credit to Wikipedia for the animations above. They're simple but great—I think.]

Cartesian versus polar coordinates

The concept of direction is associated – in our mind, at least – with the idea of linearity. We associate the momentum of a particle, for example, with a linear trajectory in spacetime. But then Einstein told us spacetime is curved, and so what's 'linear' in curved spacetime? You'll agree we always struggle to represent – or imagine, if you prefer that term – curved spacetime, as evidenced by the fact that most illustrations of curved spacetime (like the one below, for example) represent a two-dimensional space in three-dimensional Cartesian space. I find such illustrations puzzling because they sort of mix the concept of physical space with that of mathematical space.

curved spacetime

Having said that, there's no alternative, obviously: we do need the idea of a mathematical space to represent the physical space. So what's mathematical space? Mathematical spaces can be defined in many ways: as mentioned above, the term has at least as many definitions as the term 'dimension'. However, the most obvious mathematical space – and the one we're usually referring to – is a coordinate space. Here I should note we're still in simple Galilean or Newtonian relativity theory, so that's pre-Einstein: when we're talking mathematical space, we should always wonder whose space. So the concept of the observer and the inertial frame of reference creeps in. Note that, in general, we'll want to look at things from our point of view. However, in what follows, I'll introduce the notion of the proper space and the proper time, which is the space and time of the object that we're looking at. Both were easy concepts before Einstein radically re-defined them. Before Einstein, the proper space was just the x = 0 space, and the proper time… Well… The proper time was just time: some universal clock that was the same for everyone and everything, moving or not. So relativity changed all of that, and we'll come back to it. To conclude this introduction to the more serious stuff, let me define the concept of a 'straight' line in curved spacetime for you: if no force, or force field, is acting on some object, then it will just move in a straight line. The corollary of this, of course, is that it is not going to move in a straight line when some force is acting on it.
The thing that you should note here is that, if you'd be the object, you'd feel the force – and the accelerations, decelerations or – quite simply – the change in direction it causes. Hence, you would know that you're moving away from your previous x = 0 point. Hence, you'd be picking up speed in your reference frame as well, and so you'd be able to calculate your acceleration and, hence, the force that's acting on you, using Newton's famous Law: F = m·a. So the straight line in your own space, i.e. your proper space, is the one for which x = 0, and t =… Well… Just time: a clock. It's an important point to note in light of what will follow. But we need to move on. We'll do some simple exercises with our mathematical space, i.e. our coordinate space. One such exercise is the transformation of a Cartesian coordinate space into a polar coordinate space, which is illustrated by the animation below. It's neat but weird. Just look at it a couple of times so as to understand what's going on. It looks weird because we're dealing with a non-linear transformation of space here – so it is not a simple rotation or reflection (even if the animation starts with one) – and, therefore, we're not familiar with it. I described how it works in detail in one of my blog posts, so I won't repeat myself here. Just note the results: the r = sin(6θ) + 2 function in the final graph (i.e. the curve that looks like a petaled flower) is the same as the y = sin(6x) + 2 curve we started out with, so y = r and x = θ. So it's the same function. It's just… Well… Two different spaces: one transforms into the other and we can, of course, reverse the operation. The transformation involves a reflection about the diagonal. In fact, this reflection can also be looked at as a rotation of all space – including the graph and the axes – by 180 degrees. The axis of rotation is, obviously, the same diagonal. [I like how the animation (for which the credit must go to one of the more creative contributors to Wikipedia) visualizes this.] Note how the axes get swapped, which includes a swap of the domain and the range of the function: the independent variable (x = θ) goes from −π to +π here, so that's one cycle (we could also let it range from 0 to 2π), and, hence, the dependent variable (y = r) ranges between 1 and 3. [Whatever its argument, the sine function always yields a value between −1 and +1, but we add 2 to every value it takes, so we get the [1, 3] interval now.] Of course, the term '(in)dependent' in '(in)dependent variable' has no real meaning, as all is related to all in physics, so the concept of causality is just another concept that exists only in our mind, and that we impose on reality. At least that's what philosophers like David Hume and Immanuel Kant were thinking—and modern physics does not disagree with it. 🙂 OK. That's clear enough. Let's move on. The operation that follows, after the reflection or rotation, is a much more complicated transformation of space and, therefore, much more interesting. Look what it does: it bends the graph around the origin so its head and tail meet. Note how this transformation wraps all of the vertical lines around a circle, and how the radius of those circles depends on the distance of those lines from the origin (as measured along the horizontal axis). What about the vertical axis itself? The animation is somewhat misleading here, as it gives the impression we're first making another circle out of it, which we then sort of shrink—all the way down to a circle with zero radius!
So the vertical axis becomes the origin of our new space. However, there's no shrinking really. What happens is that we also wrap it around a circle—but one with zero radius! So the vertical axis does become the origin – but not because of some shrinking operation: we only wrap stuff here—we're not shrinking anything. 🙂 Let's now think of wrapping our own crazy spacetime graph around some circle. We'd get something like below. [Don't worry about the precise shape of the graph in the polar coordinate space, as I made up a new one. I made the two graphs with PowerPoint, and that doesn't allow for bending graphs around a circle.] Note that the remark on the need for a well-behaved function – so time goes in one direction only – applies to our polar coordinate space too! Can you see how? We know that x and t were the space and time dimension respectively, so what's r and θ here? […] Hey! Are you still there? Try to find the answer yourself! 🙂 It's easy to see that the distance out (r) corresponds to x, but what about θ? The angle still measures time, right? Correct. But so we've got a weird thing here: our object just shakes around in some narrow band of space but so we made our polar graph start and stop at θ = 0 and θ = 2π respectively. This amounts to saying our graph covers one cycle. You'll agree that's kinda random. So we should do this wrapping exercise only when we're thinking of our function as a periodic function. Fair enough. You'll remember the relevant formulas here: if the period of our function is T, then its frequency is equal to f = 1/T. The so-called angular frequency will be equal to ω = ∂θ/∂t = 2π·f = 2π/T. [Usually, you'll just see something simple like θ = ω·t, so then it's obvious that ω = ∂θ/∂t. Also note that I write ω = ∂θ/∂t, rather than ω = dθ/dt, so I am taking a partial derivative. Why? Because we'll soon encounter another number, the wave number k = ∂θ/∂x, which we should think of as a frequency in space, rather than as a frequency in time. Also note the 2π factor when switching from Cartesian to polar coordinates: one cycle corresponds to 2π radians. Please check out my post on the math of waves if you're not familiar with basic concepts like this.] So we need something more regular. So let's re-set the discussion by representing something very regular now: an object that just moves away from the zero point at some constant speed (see the spacetime graph on the right-hand side). Such a trajectory becomes a spiral in the polar coordinate system (left-hand side). To be precise, we have a so-called Archimedean or arithmetic spiral here—as opposed to, let's say, a logarithmic spiral. [There are many other types of spirals, but I'll let you google that yourself.]

archi spiral

The arithmetic spiral is described by the following general equation: r = a + b·θ. Changing a turns the spiral, while b controls the distance between successive turnings. We just choose a to be zero here – because we want r to be zero for θ = 0 – but what about b? One possibility is shown below. We just equate b to 1/(2π) here, so the distance out (r) is just the angle (θ) divided by 2π. Huh? How do we know that? Relax. Let's calculate it.

spec spiral

The choice of the formula above assumes that one cycle (i.e. θ = 2π) corresponds to one distance unit, i.e. one meter in the SI system of units. So that's why we write what we write: r = 1 = b·θ = b·2π ⇔ b = 1/(2π). What formula do we get for θ? That's easy to calculate.
After one cycle, x = r = 1, but x = v·t, and so the time that corresponds to point x = 1 is equal to t = x/v = 1/v. Now, it's easy to see that θ is proportional to t, so we write θ = ω·t, knowing that θ = 2π at t = 1/v. Indeed, the angle still measures time, but we're looking for a scaling factor here. Hence, 2π = ω/v ⇔ ω = 2π·v. To sum it up, we get:

θ = 2π·v·t and r = θ/2π = v·t

Piece of cake! However, while logical, our choice (i.e. us equating one cycle with one meter) is and remains quite arbitrary. We could also say that one cycle should correspond to 1 second, or 2π seconds, rather than 1 meter. So then we'd take the time unit as our reference, rather than the distance unit. That's equally logical – if not more – because one cycle – in the polar representation – corresponds to 2π radians, so why wouldn't we define θ as θ = 2π·t, as it's the vertical axis – not the horizontal axis – that we are rolling up here? What do we get for r in that case? That's equally easy to calculate: r = b·θ = 2π·b·t, but r is also equal to r = x = v·t. Hence, 2π·b·t = v·t and, therefore, b must be equal to v/(2π). We get:

θ = 2π·t and r = v·t = θ/2π

Huh? We get the same thing! No, we don't. θ = 2π·v·t ≠ 2π·t. This is kinda deep. What's going on here? Think about it: when we are making those choices above, we are basically choosing our time unit only, even when we thought we were picking the distance unit as our reference. Think about the dimensions of the θ = 2π·v·t formula but, more importantly, also think of its form: it's still the same fundamental θ = ω·t formula. We just re-scale our time unit here, by multiplying it with the velocity of our object. Of course, the obvious question is: what's the natural time unit here? The second, a jiffy, the Planck time, a galactic year, or what? Hard to say. However, one obvious way to try to get somewhere would be to say that we should choose our time and distance unit simultaneously, and in such a way so as to ensure c = 1. Huh? Yes. Think about natural units: if we choose the second as our time unit, then our distance unit should be the distance that light travels in one second, i.e. 299,792,458 meter. The velocity of our object then becomes a relative velocity: a ratio between 0 and 1. This also brings in additional constraints on our graph in spacetime: the diagonal separates possible and impossible trajectories, as illustrated below. In jargon, we say that our spacetime intervals need to be time-like. You can look it up: two events that are separated by a time-like spacetime interval may be said to occur in each other's future or past. OK. Fine. However, inserting the c = 1 constraint doesn't solve our scaling problem. We need something else for that—and I'll tell you what in a moment. However, to understand what's going to follow, you should think about the following fundamental ideas:

1. If we'd refer to the horizontal and vertical axis in our circle as a so-called real (Re) and imaginary (Im) axis respectively, then each point on our spiral above becomes a so-called complex number, which we write as ψ = a + b·i = r·e^(iθ) = r·(cosθ + i·sinθ). The i is the imaginary unit—and it has all kinds of wonderful properties, which you may or may not remember from your high school math course. For example, i² = −1. Likewise, the r·e^(iθ) = r·(cosθ + i·sinθ) expression is an equally wonderful formula, which I explained previously, so I am sorry I can't dwell on it here.
2. Hence, we can now associate some complex-valued function ψ = r·e^(iθ) = r·(cosθ + i·sinθ) with some pointlike object traveling in spacetime. [If you don't like the idea of pointlike objects – which, frankly speaking, I would understand, because I don't like it either – then think of the point as the center of mass—for the moment, at least.]

3. The argument of our complex-valued function, i.e. θ, would be some linear function of time, but we're struggling to find the right scaling factor here. Hence, for the moment, we'll just write θ as θ = ω·t.

4. It's really annoying that the r in our ψ = r·e^(iθ) function just gets larger and larger. What we probably would want to wrap around our circle would be a rotated graph, as shown below. So we'd want to rotate our coordinate axes (i.e. the t- and x-axis) before we wrap our graph representing our moving object around our circle—or, to put it differently, before we represent our graph in complex space. Also note we probably don't want to wrap it around a circle of zero radius – so, in addition to the rotation, we'd need a shift along our new axis as well.

Hmm… That's getting complicated. How do we do that? I need to insert some more math here. If we'd not be talking t and x, but the ordinary Cartesian (x, y, z) space, and we'd do a rotation in the xy-plane over an angle that's equal to α, our coordinates would transform as follows:

t′ = t·cosα − x·sinα and x′ = x·cosα + t·sinα

These formulas are non-relativistic, though – so they are marked as 'not correct' in the first of the two illustrations below, which I took from Feynman. Look at those illustrations now. Think of what is shown here as a particle traveling in spacetime, and suddenly disintegrating at a certain spacetime point into two new ones which follow some new tracks, so we have an event in spacetime here. So now we want to switch to another coordinate space. One that's rotated, somehow. In the Newtonian world, we just turn the axes, indeed, so we get a new pair, with the 'primed' coordinates (t′ and x′) being some mixture of the old coordinates (t and x). [The c in c·t and c·t′ just tells us we're measuring time and distance in equivalent units. In this case, we do so by measuring time in meter. Just write it out: c times one second is equal to (299,792,458 m/s)·(1 s) = 299,792,458 meter, so our old time unit (one second) now corresponds to 299,792,458 'time-meter'. Note that, while we're measuring time in meter now, the new unit should not make you think that time and distance are the same. They are not: we just measure them in equivalent units so c = 1. That's all.] But, since Einstein, we know we do not live in the Newtonian world, so we need to apply a relativistically correct transformation. That transformation looks very different. It's illustrated in the second of the two graphs above, and I'll remind you of the formulas, i.e. the Lorentz transformation rules, in the next section. You'll see it solves our time scaling problem. 🙂 But let's not get ahead of ourselves here. Let's first do some more thinking here. From what preceded this discussion on polar and Cartesian coordinates, it's pretty obvious we want to associate some clock with our traveling object. So we want the φ or θ to represent some proper time. Now, the concept of 'proper time' is actually a relativistic one – which I'll talk about in a moment – but here we just want to do non-relativistic stuff. So what's the proper time, classically? Well… We need to do that rotation and then shift the origin, right? Well… No.
I led you astray—but for a good reason: I wanted you to think about what we're doing here. The transformation we need is simple and complicated at the same time. We want our clock to just tick along the straight blue line, so we can nicely wrap it around Euler's circle. So we do not want those weird graphs. We want to iron out all of the wrinkles, so to speak. So how do we do that? As I mentioned above, we'd feel it if we're shaking along some shaky path. So we know what our proper space is: it's the x = 0 space. So we know when we're traveling along a straight line – even in curved space 🙂 – and when we're not. Now let's look at one of those wrinkles, like that green curve below that is breaking away from the proper path. So we'd feel the force and, as mentioned above, we'd feel we're picking up speed. In other words: we'd be deviating from our straight line in space, and we could calculate everything. We'd calculate that we're moving away from the straight path with a velocity that we can calculate as v = Δx/Δt = tanα, so that's the slope (m = tanα) of the hypotenuse of the dotted triangle. Of course, you have to think differentials and derivatives here, so you should think of dx and dt, and the instantaneous velocity v = dx/dt here. But you see what I mean—I hope! The point is: once the force did what it had to do, we're back on a straight line, but moving in some other direction, or moving at a higher velocity as observed from our original (inertial) frame of reference. So… Well… It seems like we're making things way too complicated here. It's actually very easy to iron out all of the wrinkles in the Galilean or Newtonian world: time is time, and so the proper time is just t′ = t: it's the same in any frame of reference. So it doesn't matter whether we're moving in a straight line or along some complicated path: the proper time is just time, and we can just equate that θ in our sine or cosine function (i.e. the argument of our wavefunction) with t, so we write θ = t = t′. So it's simple: we literally stretch those weird graphs (or iron them out, if you prefer that term) and then we just measure time along them. So we just iron the graph and then wrap it around the unit circle. That's all. 🙂 Well… Sort of. 🙂 The thing is: if it's a complicated trajectory (i.e. anything that is not a straight line), the angular velocity will not be some constant: it will vary—and how exactly is captured by that action concept, as measured along the line. But I'll come back to that. What about the proper space of our object? That's easy: to ensure x′ = 0, we'll permanently correct for the displacement, and the new coordinate will be equal to x′ = x − v·t. So, yes, very easy. We're just doing simple Galilean (or Newtonian) transformations here: we're looking at some object that's traveling in spacetime, and we keep track of what space and time look like in its own reference frame, i.e. in its proper space and time. So it's classical relativity, which is usually represented by the following set of equations: x′ = x − v·t, y′ = y, z′ = z and t′ = t.

classical relativity

So… This all really looks like much ado about nothing, doesn't it? Well… Yes. I made things very complicated above and, yes, you're right: you don't need all these complicated graphs to just explain the concept of a clock that's traveling with some object, i.e. the concept of proper time. The concept of proper time, in the classical world, is just time: absolute time. The thing is: since Einstein, we know the classical world is not the real world. 🙂 Now, quantum theory – i.e.
the kind of wave mechanics that we will present below – was born 20 years after Einstein's publication of his (special) relativity theory (we only need the special relativity theory to understand what the wavefunction is all about). That's a long time, or a short time, depending on your perspective – another thing that's relative 🙂 – but so here it is: the concept of proper time in the quantum-mechanical wavefunction is not the classical concept: it's the relativistic one. Let me show you that now.

II. The wavefunction

You know the elementary wavefunction (if not, you'll need to go through the essentials page(s) of this blog):

ψ(x, t) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x]) = a·e^(−i·(ω·t − k∙x)) = a·e^(−iθ) = a·e^(iφ) = a·(cosφ + i·sinφ), with φ = −θ

The latter part of the formula above is just Euler's formula. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for the φ angle demands we measure that angle counter-clockwise. It's a minor detail but important to note.

The argument of the wavefunction

Let's have a closer look at the mathematical structure of the argument of the quantum-mechanical wavefunction, i.e. θ = ωt – kx = (E/ħ)·t – (p/ħ)·x. [We've simplified once again by assuming one-dimensional space only, so the bold-face x and p (i.e. the x and p vectors) are replaced by x and p. Likewise, the momentum vector p = m·v becomes just p = m·v.] The ω in the θ = ωt – kx argument is usually referred to as the frequency in time (i.e. the temporal frequency) of our wavefunction, while k is the so-called wave number, i.e. the frequency in space (or spatial frequency) of the wavefunction. So ω is expressed in radians per second, while k is expressed in radians per meter. However, as we'll see in a moment, the θ = ωt – kx expression is actually not very different from our previous θ = ω·t expression: the –kx term is like a relativistic correction because, in relativistic spacetime, you always have to wonder: whose time? However, before we get into that, let's first play a bit with one of these online graphing tools to see what that a·e^(i·(k∙x − ω·t)) = a·e^(iφ) = a·(cosφ + i·sinφ) formula actually represents. Compare the following two graphs, for example. Just imagine we either look at how the wavefunction behaves in space, with the time fixed at some point t = t0, or, alternatively, that we look at how the wavefunction behaves in time at some fixed point in space x = x0. As you can see, increasing k = p/ħ or increasing ω = E/ħ gives the wavefunction a higher 'density' in space or, alternatively, in time.

density 1 density 2

Relativistic spacetime transformations

Let's now look at the whole thing once more. At first, it looks like that argument θ = ωt – kx = (E/ħ)·t – (p/ħ)·x is a lot more complicated than the θ = ω·t argument we introduced when talking about polar coordinates. However, it actually is not very different, as I'll show below. Let's see what we can do with this thing when assuming there are no force fields. In other words, we're once again in the simplest of cases: we're looking at some object moving in a straight line at constant velocity. In that case, it has no potential energy, except for the equivalent energy that's associated with its rest mass m0. Of course, it also has kinetic energy, because of its velocity. Now, if mv is the total mass of our object, including the equivalent mass of the particle's kinetic energy, then its equivalent energy (potential and kinetic—all included!) is E = mv·c².
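[If you don't want to use an online graphing tool, here's a quick sketch—mine, and for the real part only—of how you'd make such 'density' graphs yourself.]

```python
# The real part of a·e^(i·(k·x − ω·t)) in space, at t = t0 = 0, for two
# values of k. The higher k packs more oscillations into the same region
# of space; the same happens in time if you increase ω instead.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 1000)
a = 1.0
for k in [1.0, 3.0]:
    phi = k * x                     # t is fixed at 0, so φ = k·x
    plt.plot(x, a * np.cos(phi), label=f'k = {k}')
plt.xlabel('x')
plt.ylabel('Re(ψ)')
plt.legend()
plt.show()
```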
Let's further simplify by assuming we measure everything in natural units, so c = 1. However, we'll go one step further. We'll also assume we measure stuff in such units so ħ is also equal to unity. [In case you wonder what units that could be, think of the Planck units above. You can quickly check that the speed of light comes out alright: (1.62×10⁻³⁵ m)/(5.39×10⁻⁴⁴ s) = 3×10⁸ m/s, so if 1.62×10⁻³⁵ m and 5.39×10⁻⁴⁴ s are the new units – let's denote them by 1 lP and 1 tP respectively – then c's value, as measured in the new units, will be one, indeed.] The velocity of any object, v, will now be measured as some fraction of c, i.e. a relative velocity. [You know that's just the logical consequence of relativistic mass increase, which is real! It requires tremendous energy to accelerate elementary particles beyond a certain point, because they become so heavy!] To make a long story (somewhat) shorter, our energy formula E = mv·c² reduces to E = mv. Finally, just like in our example with the Archimedean spiral, we will also choose the origin of our axes such that x = 0 when t = 0, so we write: x(t = 0) = x(0) = 0. That ensures x = v·t for every point on the trajectory of our object. Hence, taking into account that the numerical value of ħ is also equal to 1 – and substituting p for p = mv·v – we can re-write θ = (E/ħ)·t – (p/ħ)·x as:

θ = E·t – p·x = E·t − p·v·t = mv·t − mv·v·v·t = mv·(1 − v²)·t

So our ψ(x, t) function becomes ψ = a·e^(−i·mv·(1 − v²)·t). This is really exciting! Our formula for θ now has the same functional form as our θ = 2π·v·t above: we just have an mv·(1 − v²) factor here, times t, rather than a 2π·v factor times t. Note that we don't have the 2π factor in our θ = mv·(1 − v²)·t formula because we chose our units such that ħ = 1, so we equate the so-called reduced Planck constant with unity, rather than h = 2π·ħ. [In case you doubt: ħ is the real thing, as evidenced from the fact that we write the Uncertainty Principle as Δx·Δp ≥ ħ/2, not as Δx·Δp ≥ h/2.] But… Well… That mv·(1 − v²) factor is a very different thing, isn't it? Not at all like v, really. So what's going on here? What can we say about this? Before investigating this, let me first look at something else—and, no, I am not trying to change the subject. I'll answer the question above in a minute. However, let's first do some more thinking about the coefficient in front of our wavefunction. Remember that the formula for the distance out (r) in that r·e^(iφ) formula for that spiral was rather annoying: we didn't want it to depend on φ. Fortunately, the a in our ψ = a·e^(−iθ) will, effectively, not depend on θ. But so what is it then? To explain what it is, I must assume that you already know a thing or two about quantum mechanics. [If you don't, don't worry. I'll come back to everything I write here. Just go with the flow right now.] One of the things you probably know, is that we should take the absolute square of this wavefunction to get the probability of our particle being somewhere in space at some point in time. So we get the probability as a function of x and t. We write:

P(x, t) = |ψ(x, t)|² = |a·e^(−iθ)|² = a²

The result above makes use of the fact that |e^(iφ)|² = 1, always. That's mysterious, but it's actually just the old cos²φ + sin²φ = 1 rule you know from high school. In fact, the absolute square takes the time dependency out of the probability, so we can just write: P(x, t) = P(x), so the probability depends on x only. Interesting! But… Well… That's actually what we're trying to show here, so I still have to show that the a factor is not time-dependent.
So let me show that to you, in an intuitive way. You know that all probabilities have to add up to one. Now, let's assume, once again, we're looking at some narrow band in space. To be specific, let's assume our band is defined by Δx = x2 − x1. Also, as we have no information about the probability density function, we'll just assume it's a uniform distribution, as illustrated below. In that case, and because all probabilities have to add up to one, the following logic should hold:

P·Δx = a²·Δx = 1 ⇔ a = 1/√Δx

In short, the coefficient a in the ψ = a·e^(−i·mv·(1 − v²)·t) function is related to the normalization condition: all probabilities have to add up to one. Hence, the coefficient of the elementary wavefunction does not depend on time: it only depends on the size of our box in space, so to speak. 🙂 OK. Done! Let's now go back to our θ = mv·(1 − v²)·t formula. 🙂 Both mv and v vary, so that's a bit annoying. Let us, therefore, substitute mv for the relativistically correct formula: mv = m0/√(1−v²). So now we only have one variable: v, or parameter, I should say, because we assume v is some constant velocity here. [Sorry for simplifying: we'll make things more complicated again later.] Let's also go back to our original ψ(x, t) = a·e^(−i·[(E/ħ)·t − (p/ħ)∙x]) function, so as to include both the space as well as the time coordinates as the independent variables in our wavefunction. Using natural units once again, that's equivalent to:

ψ(x, t) = a·e^(−i·(mv·t − p∙x)) = a·e^(−i·[(m0/√(1−v²))·t − (m0·v/√(1−v²))∙x]) = a·e^(−i·[m0/√(1−v²)]·(t − v∙x))

Interesting! We've got a wavefunction that's a function of x and t, but with the rest mass (or rest energy) and the velocity of what we're looking at as parameters! But… Hey! Wait a minute! You know that formula, don't you? The (t − v∙x)/√(1−v²) factor in the argument should make you think of a very famous formula—one that I am sure you must have seen a dozen of times already! It's one of the formulas for the Lorentz transformation of spacetime. Let me quickly give you the formulas (in natural units, so c = 1):

t′ = (t − v∙x)/√(1−v²), x′ = (x − v∙t)/√(1−v²), y′ = y and z′ = z

Let me now remind you of what they mean. [I am sure you know it already, but… Well… Just in case.] The (x, y, z, t) coordinates are the position and time of an object as measured by the observer who's 'standing still', while the (x′, y′, z′, t′) coordinates are the position and time of an object as measured by the observer that's 'moving'. In most of the examples, that's the guy in the spaceship, who's often referred to as Moe. 🙂 The illustration below shows how it works: Joe is standing still, and Moe is moving.

relativity 3

The theory of relativity, as expressed in those transformation formulas above, shows us that the relationship between the position and time as measured in one coordinate system and another are not what we would have expected on the basis of our intuitive ideas. Indeed, the Galilean – or Newtonian – transformation of the coordinates as observed in Joe's and Moe's coordinate space would be given by the much simpler set of equations I already noted in the previous section, i.e.:

x′ = x − v·t, y′ = y, z′ = z and t′ = t

We also gave you the (Newtonian) formulas for a rotation in the previous section. For a rotation (in the Newtonian world) we also got 'primed' coordinates which were a mixture of the old coordinates (t and x). So we've got another mixture here. It's fundamentally different, however: that Lorentz transformation also involves mixing the old coordinates to get the new ones, but it's an entirely different formula. As you can see, it's an algebraic thing. No sines and cosines.
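To make this less abstract, here's a small numerical sketch of mine: the Lorentz transformation as a two-line function, plus a check of what we're about to conclude in the next section, i.e. that the argument of our wavefunction is just m0 times the proper time. [Yes, I am jumping the gun again. Sorry.]

```python
# The Lorentz transformation in natural units (c = 1), plus a check that
# the argument of our wavefunction is m0 times the proper time.
import numpy as np

def lorentz(t, x, v):
    """Transform (t, x) to the frame moving at (relative) velocity v."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)   # (t', x')

m0, v, t = 1.0, 0.8, 5.0
x = v * t                        # the object's own position at time t

t_prime, x_prime = lorentz(t, x, v)
print(x_prime)                   # 0: the object sits at the origin of its own frame

mv = m0 / np.sqrt(1 - v**2)      # relativistic mass
theta = mv * t - mv * v * x      # argument of the wavefunction (with ħ = 1)
print(theta, m0 * t_prime)       # the same number: θ = m0·t'
```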
The argument of the wavefunction as the proper time

The primed time coordinate (t′), i.e. time as measured in Moe's reference frame, is referred to as the proper time. Let me be somewhat more precise here, and just give you the more formal definition: the proper time is the time as measured by a clock along the path in four-dimensional spacetime. So what do we have here? A great discovery, really: we now know what time to use in our θ = ω·t formula. We need to use the proper time, so that's t′ rather than t! Bingo! Let's get the champagne out! Not yet. We shouldn't forget the second factor: we also have m0 in our m0·(t − v∙x)/√(1−v²) argument. But… Well… That's even better! So we also get the scaling factor here! The natural unit in which we should measure the proper time is given by the rest mass of the object that we're looking at. To sum it all up, the argument of our wavefunction reduces to:

θ = m0·t′ = m0·(t − v∙x)/√(1−v²)

In fact, when thinking about how the rest mass – through the energy and momentum factors in the argument of the wavefunction – affects its density, both in time as well as in space, I often think of an airplane propeller: as it spins, faster and faster (as shown below), it gives the propeller some 'density', in space as well as in time, as its blades cover more space in less time. It's an interesting analogy, and it helps—me, at least—to try to imagine what that wavefunction might actually represent. In fact, every time I think about it, I find it provides me with yet another viewpoint or nuance. 🙂 The basics of the analogy are clear enough: our pointlike object (you may want to think of our electron in some orbital once again) is whizzing around, in a very limited box in space, and so it is everywhere and nowhere at the same time. At the same time, we may – perhaps – catch it at some point—in space, and in time—and it's the density of its wavefunction that determines what the odds are for that. It may be useful for you to think of the following two numbers, so as to make this discussion somewhat more real:

1. The so-called Bohr radius is, roughly speaking, the size of our electron orbital—for a one-proton/one-electron hydrogen atom, at least. It's about 5.3×10⁻¹¹ m.

2. However, there is also a thing known as the classical electron radius (aka the Lorentz radius, or Thomson scattering length). It's a complicated concept, but it does give us an indication of the actual size of an electron, as measured by the probability of 'hitting' it with some ray (usually a hard X-ray). That radius is about 2.8×10⁻¹⁵ m. [Strictly speaking, what's measured in such scattering experiments is a cross-section, which is typically denoted by σ and measured in units of area, but the radius gives you the idea.]

Hence, the radius of our box is about 20,000 times larger than the radius of our electron. I know what you'll say: that's not a lot. But that's just because, by now, you're used to all those supersonic numbers. 🙂 Think of what it represents in terms of volume: the cube in the volume formula – V = (4/3)·π·r³ – ensures the magnitude of the volume ratio is of the tera-order. To be precise: the volume of Bohr's box is about 6,622,200,000,000 times larger than the Thomson box, plus or minus a few billion. Is that number supersonic enough? 🙂 So… Well… Yes. While, in quantum mechanics, we should think of an electron as not having any actual size, it does help our understanding to think of it as having some unimaginably small, but actual, size, and as something that's just whizzing around at some incredibly high speed. In short, think of the propeller picture.
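[You can check my supersonic number yourself—a short sketch, once again using the CODATA values in scipy.constants:]

```python
# The 'box' comparison, computed with CODATA values:
from scipy.constants import physical_constants

r_bohr = physical_constants['Bohr radius'][0]             # ~5.29e-11 m
r_e = physical_constants['classical electron radius'][0]  # ~2.82e-15 m

print(r_bohr / r_e)        # ~1.88×10⁴: the radius ratio
print((r_bohr / r_e)**3)   # ~6.62×10¹²: the volume ratio (the 4π/3 cancels)
```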
To conclude this section, I’ll quickly insert a graph from Wikipedia illustrating the concept of proper time. Unlike what you might think, E1 and E2 are just events in spacetime, i.e. points in spacetime that we can identify because something happens there, like someone arriving or leaving. 🙂 The graph would actually illustrate the twin paradox if the distance coordinate of E2 would have been the same as the distance coordinate of E1, and the t and τ are actually time intervals, which we’d usually denote by Δt and Δt′. In any case, you get the idea—I hope! 🙂

What does it all mean?

Let’s calculate some stuff to see what it all means. We’ve actually done the calculations already. Look at those cosine and sine functions above: a higher mass will give the wavefunction a higher density, both in space as well as in time, as the mass factor multiplies both t as well as x. Of course, that’s obvious just from looking at θ = m₀·t′. However, it’s interesting to stick to our x and t coordinates (rather than the proper time) and see what happens. Let’s make abstraction from the m₀ factor for a moment because, as mentioned above, that’s basically just a scaling factor for the proper time. So let’s just look at the 1/√(1−v²) factor in front of t, and the v/√(1−v²) factor in front of x in our θ = m₀·(t − v·x)/√(1−v²) = m₀·[t/√(1−v²) − v·x/√(1−v²)]. I’ve plotted them below. First look at the blue graph for that 1/√(1−v²) factor in front of t: it goes from one (1) to infinity (∞) as v goes from 0 to 1 (remember we ‘normalized’ v: it’s a ratio between 0 and 1 now). So that’s the factor that comes into play for time. For x, it’s the red graph, which has the same shape but goes from zero (0) to infinity (∞) as v goes from 0 to 1. Now that makes sense. Our time won’t differ from the proper time of our object if it’s standing still, and the v in the v·x/√(1−v²) term ensures that term disappears when v = 0. Just write it all out: θ = m₀·[t/√(1−v²) − v·x/√(1−v²)] = m₀·[t/√(1−0²) − 0·x/√(1−0²)] = m₀·t However, as the velocity goes up, the clock of our object – as we see it – will seem to be going slower. That’s the relativistic time dilation effect, with which most of us are more familiar than with the relativistic mass increase or length contraction effect. However, they’re all part of the same thing. You may wonder: how does it work exactly? Well… Let’s choose the origin of our axes such that x = 0 when t = 0, so we write: x(t = 0) = x(0) = 0. That ensures x = v·t for every point on the trajectory of our object. In fact, we’ve done this before. The argument of our wavefunction just reduces to: θ = m₀·[t/√(1−v²) − v·v·t/√(1−v²)] = m₀·(1 − v²)/√(1−v²)·t = m₀·√(1 − v²)·t I’ll let you draw the graph yourself: the √(1 − v²) factor goes from 1 to 0 as v goes from 0 to 1. OK. That’s obvious. But what happens with space? Here, the analysis becomes really interesting. The density of our wavefunction, as seen from our coordinate frame, also becomes larger in space, for any value of t or t′. However, note that all of our graphs above – weird or regular – assumed some fixed domain for our function and, hence, the number of oscillations is some fixed number. But if their density increases, that means we must pack them in a smaller space. In short, the increasing density of our wavefunction – as velocities increase – corresponds to the relativistic length contraction effect: it’s like space is contracting as the velocity increases. OK.
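If you want to reproduce those blue and red graphs – and the √(1 − v²) graph I asked you to draw yourself – here’s a small numeric sketch:

```python
import numpy as np

v = np.linspace(0.0, 0.999, 1000)       # v as a fraction of c

t_factor = 1.0 / np.sqrt(1.0 - v**2)    # blue graph: factor in front of t
x_factor = v / np.sqrt(1.0 - v**2)      # red graph: factor in front of x
theta_slope = np.sqrt(1.0 - v**2)       # slope of theta along the x = v*t trajectory

print(t_factor[0], x_factor[0])         # 1.0 and 0.0 at v = 0
print(t_factor[-1], theta_slope[-1])    # the factors blow up, the slope -> 0, as v -> 1
```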
All of the above was rather imprecise—an introduction only, meant to provide some more intuitive approach to the subject-matter. However, now it’s time for the real thing. 🙂 Unfortunately, that will involve a lot more math. And I mean: a lot more! 😦 However, before I move on, let me first answer a question you may have: is it important to include relativistic effects? The answer is: it depends on the (relative) velocity of what we’re looking at. For example, you may or may not know we do have some kind of classical idea of the velocity of an electron in orbit. It’s actually one of the many interpretations of what some physicists refer to as ‘God’s number’, i.e. the fine-structure constant (α). Indeed, among other things, this number may be interpreted as the ratio of the velocity of the electron in the first circular orbit of the Bohr model of an atom and the speed of light, so it’s our v. In fact, I mentioned it in that digression on α: one of the ways we can write it is α = v/c, indeed. Now, the numerical value of α is about 7.3×10⁻³ (for historical reasons, you’ll usually see it written as 1/α ≈ 137), so its (classical) velocity is just a mere 2,187 km per second. At that velocity, the 1/√(1−v²) factor in front of t is very near to 1 (the first non-zero digit behind the decimal point appears after four zeros only), while the v/√(1−v²) factor in front of x is (almost) equal to α ≈ 0.007297… [In fact, at first, I thought they were actually equal, but then α/√(1−α²) is obviously not equal to α.] Hence, when calculating electron orbitals (like I did in one of my posts on Schrödinger’s equation), one might just as well not bother about the relativistic correction and just equate the proper time (t′) with the time of the observer (t). In fact, that’s what the Master (whose Chapter on electron orbitals I summarized in that post of mine) does routinely, as most of the time he’s talking about rather heavy objects, like electrons, or nitrogen atoms. 🙂 To be specific, the solutions for Schrödinger’s equation for the electron in a hydrogen atom all share the following functional form: ψ(x, t) = ψ(x)·e^(−i·(E/ħ)·t) Hence, the position vector does not appear in the argument of the complex exponential: we only get the first term of the full (E/ħ)·t − (p/ħ)·x argument here. The position vector does appear in the coefficient in front of our exponential, however—which is why we get all these wonderful shapes, as illustrated below (credit for the illustration goes to Wikipedia). 🙂 Well… No. I should be respectful for the Master. Feynman does not write the wavefunction in the way he writes it – i.e. as ψ(x, t) = ψ(x)·e^(−i·(E/ħ)·t) – to get rid of the relativistic correction in the argument of the wavefunction. Think of it: the ψ(r, t) = e^(−(i/ħ)·E·t)·ψ(r) expression is not necessarily non-relativistic, because we can re-write the elementary a·e^(−i·[(E/ħ)·t − (p/ħ)·x]) function as a·e^(−(i/ħ)·E·t)·e^(i·(p/ħ)·x). Feynman just writes what he writes to ease the search for functional forms that satisfy Schrödinger’s equation. That’s all. [By the way, note that a coefficient in front of the complex exponential such as ψ(r) = a·e^(i·(p/ħ)·x) still does the trick we want it to do: we do not want that coefficient to depend on time: it should only depend on the size of our ‘box’ in space.] So what’s next? Well… The inevitable, I am afraid. After introducing the wavefunction, one has to introduce… Yep. The Other Big Thing. 🙂
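Here’s the quick check on those α numbers, by the way—just standard constants, nothing more:

```python
import math

alpha = 7.297e-3             # fine-structure constant (~1/137)
c = 299_792.458              # speed of light (km/s)

print(f"v = alpha*c ~ {alpha * c:,.0f} km/s")                      # ~ 2,188 km/s
print(f"1/sqrt(1 - v^2) = {1 / math.sqrt(1 - alpha**2):.9f}")      # 1.0000266...: four zeros indeed
print(f"v/sqrt(1 - v^2) = {alpha / math.sqrt(1 - alpha**2):.9f}")  # close to, but not equal to, alpha
```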
III. Schrödinger’s equation

Schrödinger’s equation as a diffusion equation

You’ve probably seen Schrödinger’s equation a hundred times, trying to understand what it means. Perhaps you were successful. Perhaps you were not. Its derivation is not very straightforward, and so I won’t give you that here. [If you want, you can check my post on it.] Let me first jot it down once more. In its simplest form – i.e. not including any potential, so then it’s an equation that’s valid for free space only—no force fields!—it reduces to: ∂ψ/∂t = i·(ħ/2m)·∇²ψ Put it next to the heat diffusion equation – k·∂T/∂t = κ·∇²T – and you can see the two look structurally similar. Moreover, I noted the similarity is not only structural. There is more to it: both equations model some flow in space and in time. Let me make the point once more by first explaining it for the heat diffusion equation. The time derivative on the left-hand side (∂T/∂t) is expressed in K/s (Kelvin per second). Weird, isn’t it? What’s a Kelvin per second? In fact, a Kelvin per second is actually a quite sensible and straightforward quantity, as I’ll explain in a minute. But I can understand you can’t make much sense of it now. So, fortunately, the constant in front (k) makes sense of it. That coefficient (k) is the (volume) heat capacity of the substance, which is expressed in J/(m³·K). So the dimension of the whole thing on the left-hand side (k·∂T/∂t) is J/(m³·s), so that’s energy (J) per cubic meter (m³) and per second (s). That sounds more or less OK, doesn’t it? 🙂 So what about the right-hand side? On the right-hand side we have the Laplacian operator – i.e. ∇² = ∇·∇, with ∇ = (∂/∂x, ∂/∂y, ∂/∂z) – operating on T. The Laplacian operator, when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it’s operating on T, so the dimension of ∇²T is K/m². Again, that doesn’t tell us very much: what’s the meaning of a Kelvin per square meter? However, we multiply it by the thermal conductivity (κ), whose dimension is W/(m·K) = J/(m·s·K). Hence, the dimension of the product is the same as the left-hand side: J/(m³·s). So that’s OK again, as energy (J) per cubic meter (m³) and per second (s) is definitely something we can associate with a flow. Hence, the diffusion constant does what it’s supposed to do: 1. As a constant of proportionality, it quantifies the relationship between both derivatives (i.e. the time derivative and the Laplacian); 2. It fixes the dimensions on both sides of the equation. In fact, we can now scrap one m on each side 🙂 so the dimension of both sides then becomes joule per second and per square meter, which makes a lot of sense too—as flows through two-dimensional surfaces can easily be related to flows through three-dimensional volumes. [The math buffs amongst you (unfortunately, I am not part of your crowd) can work that out.] In any case, it’s clear that the heat diffusion equation does represent the energy conservation law indeed! What about Schrödinger’s equation? Well… We can – and should – think of Schrödinger’s equation as a diffusion equation as well, but then one describing the diffusion of a probability amplitude. Huh? Yes. Let me show you how it works. The key difference is the imaginary unit (i) in front, and the fact that the wavefunction itself is complex-valued. That makes it clear that we get two diffusion equations for the price of one, as our wavefunction consists of a real part (the term without the imaginary unit, i.e. the cosine part) and an imaginary part (i.e. the term with the imaginary unit, i.e. the sine part). Just think of Euler’s formula once more.
To put it differently, Schrödinger’s equation packs two equations for the price of one: one in the real space and one in the imaginary space, so to speak—although that’s a rather ambiguous and, therefore, a rather confusing statement. But… Well… In any case… We wrote what we wrote. What about the dimensions? Let’s jot down the complete equation so to make sure we’re not doing anything stupid here by looking at one aspect of the equation only. The complete equation is: i·ħ·(∂ψ/∂t) = −(ħ²/2m)·∇²ψ + V·ψ Let me first remind you that ψ is a function of position in space and time, so we write: ψ = ψ(x, y, z, t) = ψ(r, t), with (x, y, z) = r. Let’s now look at the coefficients, and at that −ħ²/2m coefficient in particular. First its dimension. The ħ² factor is expressed in J²·s². Now that doesn’t make much sense, but then that mass factor in the denominator makes everything come out alright. Indeed, we can use the mass-energy equivalence relation to express m in J/(m/s)² units. Indeed, let me remind you here that the mass of an electron, for example, is usually expressed as being equal to 0.5109989461(31) MeV/c², so that unit uses the E = m·c² mass-equivalence formula. As for the eV, you know we can convert that into joule, which is a rather large unit—which is why we use the electronvolt as a measure of energy. In any case, to make a long story short, we’re OK: (J²·s²)·[(m/s)²/J] = J·m². But so we multiply that with some quantity (the Laplacian) that’s expressed per m². So −(ħ²/2m)·∇²ψ is something expressed in joule, so it’s some amount of energy! Interesting, isn’t it? Especially because it works out just fine with the additional V·ψ term, which is also expressed in joule. But why the 1/2 factor? Well… That’s a bit hard to explain, and I’ll come back to it. That 1/2 factor also pops up in the Uncertainty Relations: Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2. So we have ħ/2 here as well, not ħ. Why do we need to divide the quantum of action by 2 here? It’s a very deep thing. I’ll show why we need that 1/2 factor in the next sections, in which I’ll also calculate the phase and group velocities of the elementary wavefunction for spin-0, spin-1/2 and spin-1 particles. So… Well… Be patient, please! 🙂 Now, we didn’t say all that much about V, but then that’s easy enough. V is the potential energy of… Well… Just do an example. Think of the electron here: its potential energy depends on the distance (r) from the proton. We write: V = −e²/│r│ = −e²/r. Why the minus sign? Because we say the potential energy is zero at large distances (see my post on potential energy). So we’ve got another minus sign here, although you couldn’t see it in the equation itself. In any case, the whole V·ψ term is, obviously, expressed in joule too. So, to make a long story short, the right-hand side of Schrödinger’s equation is expressed in energy units. On the left-hand side, we have ħ, and its dimension is the action dimension: J·s, i.e. force times distance times time (N·m·s). So we multiply that with a time derivative and we get J once again, the unit of energy. So it works out: we have joule units both left and right. But what does it mean? The Laplacian on the right-hand side should work just the same as the Laplacian in our heat diffusion equation: it should give us a flux density, i.e. something expressed per square meter (1/m²). But so what is it that is flowing here? Well… Hard to say. In fact, the V·ψ term spoils our flow interpretation, because that term does not have the 1/s or 1/m² dimension. Well… It does and it doesn’t. Let me do something bold here.
Let me re-write Schrödinger’s equation as: ∂ψ/∂t + i·(V/ħ)·ψ = i·(ħ/2m)·∇²ψ Huh? Yes. All I did, was to move the i·ħ factor to the other side here (remember that 1/i is just −i), so our V·ψ term becomes −i·(V/ħ)·ψ, and then I move it to the left-hand side. What do we get now when doing a dimensional analysis?

• The ∂ψ/∂t term still gives us a flow expressed per second.

• The dimension of V/ħ is (N·m)/(N·m·s) = 1/s, so that’s nice, as it’s the same dimension as ∂ψ/∂t. So on the left-hand side, we have something per second.

• The ħ/2m factor gives us (N·m·s)/(N·s²/m) = m²/s. That’s fantastic as that’s what we’d expect from a diffusion constant: it fixes the dimensions on both sides, because that Laplacian gives us some quantity per m².

In short, the way I re-write Schrödinger’s equation gives a lot more meaning to it! 🙂 Frankly, I can’t believe no one else seems to have thought of this simple stuff. :-/ But we’re still left with the question: what’s flowing here? Feynman’s answer is simple: the probability amplitude, which – as he repeats several times – is a dimensionless number—a scalar, albeit a complex-valued scalar. Frankly, I love the Master, but I find that interpretation highly unsatisfactory. My answer is much bolder: it’s energy. The probability amplitude is like temperature: temperature is a measure for – and is actually defined as – the mean molecular kinetic energy. So it’s a measure of the (average) energy in an unimaginably small box in space. How small? As small as we can make it, taking into account that the notion of average must still make sense, of course! Now, you can easily sense we’ve got a statistical issue here that resembles the Uncertainty Principle: the standard error of the mean (average) energy will increase as the size of our box decreases. Interesting! Of course, you’ll think this is crazy. No one interprets probability amplitudes like this. This doesn’t make sense, does it? Well… I think it does, and I’ll give you some reasons why in a moment. 🙂 However, let me first wrap up this section by talking about the ħ/(2m) coefficient. That coefficient is the diffusion constant in our re-written Schrödinger equation, so it should do the two quintessential jobs: (1) fix dimensions, and (2) give us a sensible proportionality relation. So… Well… Let’s look at the dimensions first. We’ve talked about the dimension of the mass factor m, but so what is the m in the equation? It’s referred to as the effective mass of the elementary particle that we’re looking at—which, in most practical cases, is the electron (see our introduction to electron orbitals above, for example). I’ve talked about the subtleties of the concept of the effective mass of the electron in my post on the Schrödinger equation, so let’s not bother too much about its exact definition right here—just like you shouldn’t bother – for the moment, that is – about that 1/2 factor. 🙂 Just note that ħ/(2m) is the reciprocal of 2m/ħ. We should think of the 2m factor as an energy concept (I will later argue that we’ve got that factor 2 because the energy includes spin energy), and that’s why the 2m/ħ factor also makes sense as a proportionality factor. Before we go to the next section, I want you to consider something else. Think of the dimension of θ. We said it was the proper time of the thing we’re looking at, multiplied by the rest mass. At the same time, we said it was something that’s dimensionless. Some kind of pure number accompanying our object. To be precise: it became an angle, expressed in radians.
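As an aside, it’s easy to put actual numbers on those two coefficients—for an electron, say. The 13.6 eV value below is just a hydrogen-scale example I picked for V; the point is only that V/ħ comes out as a frequency (1/s) and ħ/2m as a diffusion constant (m²/s):

```python
hbar = 1.054571817e-34    # J*s
m_e = 9.1093837015e-31    # kg (electron mass)
eV = 1.602176634e-19      # J

V = 13.6 * eV             # a hydrogen-scale potential energy, as an example

print(f"V/hbar      ~ {V / hbar:.3e} 1/s")            # a frequency: same dimension as d(psi)/dt
print(f"hbar/(2m_e) ~ {hbar / (2 * m_e):.3e} m^2/s")  # a genuine diffusion-constant dimension
```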
But… Well… I want you to re-consider that dimensionless θ. What? Yes. Look at it: the E·t and p·x factors in θ = (E/ħ)·t − (p/ħ)·x both have the same dimension as Planck’s constant, i.e. the dimension of action (force times distance times time). [The first term (E·t divided by ħ) is energy times time, while the second (p·x divided by ħ) is momentum times distance. Both can be re-written as force times distance times time.] So θ becomes dimensionless just because we include ħ’s dimension when dividing E and p by it. But what if we’d say that ħ, in this particular case, is just a scaling factor, i.e. some numerical value without a dimension attached to it? In that case, θ would no longer be dimensionless: it would, effectively, have the action dimension: N·m·s. However, in that case, wouldn’t it be awkward to have a function relating some amount of action to something that’s dimensionless? I mean… Shouldn’t we then say that our wavefunction sort of projects something real – in this case, some amount of action – into some other real space? [In case you wonder: when I say a real space, I mean a physical space—i.e. a space which has physical dimensions (like time or distance, or energy, or action—or whatever physical dimensions) rather than just mathematical dimensions.] So let’s explore that idea now.

The wavefunction as a link function

The wavefunction acts as a link function between two spaces. If you’re not familiar with the concept of link functions, don’t worry. But it’s quite interesting. I stumbled upon it when co-studying non-linear regression with my daughter, as she was preparing for her first-year MD examinations. 🙂 Link functions link mathematical spaces. However, here I am thinking of linking physical spaces. The mechanism is something like this. Our physical space is an action space: some force moves something in spacetime. All the information is captured in the notion of action, i.e. force times distance times time. Now, the action here is the rest energy times the proper time – that’s our θ – and it’s the argument of the wavefunction, which acts as a link function between the action space and what I call the energy space, which is not our physical space—but it’s another physical space: it’s got physical dimensions. You’ll say: what physical dimensions? What makes it different from our physical space? Great question! 🙂 Not easy to answer. The philosophical problem here is that we should only have one physical space, right? Well… Maybe. I am thinking of any space whose dimensions are physical. So the dimensions we have here are time and energy. We don’t have x, though. So the spatial dimension got absorbed. But that’s it. And so, yes, our new energy space is a physical space. It just doesn’t have any spatial dimension: it just mirrors the energy in the system at any point in time, as measured by the proper time of the system itself. Does that make sense? Note, once again, the phase shift between the sine and the cosine: if one reaches the +1 or −1 value, then the other function reaches the zero point—and vice versa. It’s a beautiful structure. Of course, the million-dollar question is: is it a physical structure, or a mathematical structure? Does that energy space really have an energy dimension? In other words: is it an actual energy space? Is it real? I know what you think—because that’s what I thought too, initially. It’s just a figment of our imagination, isn’t it? It’s just some mathematical space, no? Nothing real: just a shadow of what’s going on in real spacetime, isn’t it?
I thought about this for a long time, and my answer is: it’s real! It’s not a shadow. That sine and cosine space is a very real space. It associates every point in spacetime – through the wavefunction, which acts as a link function here – with some real as well as some imaginary energy—and the imaginary energy is as real as the real energy. 🙂 It’s that energy that explains why amplitudes interfere—which, as you know, is what they do. So these amplitudes are something real, and as the dimensional analysis of Schrödinger’s equation reveals their dimension is expressed in joule, then… Well… Then these physical equations say what they say, don’t they? And what they say, is something like the diagram below. 🙂 [Diagram: the real and imaginary oscillations of the wavefunction, pictured as two springs] Unfortunately, it doesn’t show the phase difference between the two springs though (I should do an animation here), which… Well… That needs further analysis, especially in regard to that least action principle I mentioned: our particle – or whatever it is – will want to minimize the difference between kinetic and potential energy. 🙂 Contemplate that animation once again, and think of the energy formula for a harmonic oscillator, which tells us that the total energy – kinetic (i.e. the energy related to its momentum) plus potential (i.e. the energy stored in the spring) – is equal to T + U = m·ω₀²/2. The ω₀ here is the angular velocity. Now, the de Broglie relations tell us that the phase velocity of the wavefunction is equal to the vp factor in the E = m·vp² equation. Look at it: not m·vp²/2. No 1/2 factor. All makes sense, because we’ve got two springs, ensuring the difference between the kinetic energy (KE) and potential energy (PE) in the integrand of the action integral is not only minimized (in line with the least action principle) but is actually equal to zero! But then we haven’t introduced uncertainty here: we’re assuming some definite energy level. But I need to move on. We’ll talk about all of this later anyway. 🙂 Another reason why I think this energy space is not a figment of our mind, is the fact that we need to take the absolute square of the wavefunction to get the probability that our elementary particle is actually right there! Now that’s something real! Hence, let me say a few more things about that. The absolute square gets rid of the time factor. Just write it out to see what happens: |r·e^(iθ)|² = |r|²·|e^(iθ)|² = r²·[√(cos²θ + sin²θ)]² = r²·(√1)² = r² Now, the r gives us the maximum amplitude (sorry for the mix of terminology here: I am just talking the wave amplitude here – i.e. the classical concept of an amplitude – not the quantum-mechanical concept of a probability amplitude). Now, we know that the energy of a wave – any wave, really – is proportional to the square of its amplitude. It would also be logical to expect that the probability of finding our particle at some point x is proportional to the energy density there, isn’t it? [I know what you’ll say now: you’re squaring the amplitude, so if the dimension of its square is energy, then its own dimension must be the square root, right? No. Wrong. That’s why this confusion between amplitude and probability amplitude is so bad. Look at the formula: we’re squaring the sine and cosine, to then take the square root again, so the dimension doesn’t change: it’s √(J²) = J.] The third reason why I think the probability amplitude represents some energy is that its real and imaginary part also interfere with each other, as is evident when you take the ordinary square (i.e. not the absolute square).
Then the i² = −1 rule comes into play and, therefore, the square of the imaginary part starts messing with the square of the real part. Just write it out: (r·e^(iθ))² = r²·(cosθ + i·sinθ)² = r²·(cos²θ − sin²θ + 2·i·cosθ·sinθ) = r²·(1 − 2·sin²θ + 2·i·cosθ·sinθ) As mentioned above, if there’s interference, then something is happening, and so then we’re talking something real. Hence, the real and imaginary part of the wavefunction must have some dimension, and not just any dimension: it must be energy, as that’s the currency of the Universe, so to speak. Now, I should add a philosophical note here—or an ontological note, I should say. When you think we should only have one physical space, you’re right. This new physical space, in which we relate energy to the proper time of an object, is not our physical space. It’s not reality—as we know it, as we experience it. So, in that sense, you’re right. It’s not physical space. But then… Well… It’s a definitional matter. Any space whose dimensions are physical—and, importantly, in which things happen (which is surely the case here!)—is a physical space for me. But then I should probably be more careful. What we have here is some kind of projection of our physical space to a space that lacks… Well… It lacks the spatial dimension. 🙂 It’s just time – but a special kind of time: relativistic proper time – and energy—albeit energy in two dimensions, so to speak. So… What can I say? It’s some kind of mixture between a physical and mathematical space. But then… Well… Our own physical space – including the spatial dimension – is something like a mixture as well, isn’t it? 🙂 We can try to disentangle them – which is what I am trying to do here – but then we’ll probably never fully succeed. When everything is said and done, our description of the world (for which our language is math) and the world itself (which we refer to as the physical space), are part of one and the same reality.

Energy propagation mechanisms

One of my acquaintances is a retired nuclear physicist. A few years ago, when I was struggling a lot more with this stuff than I am now (although it never gets easy: it’s still tough!) – trying to find some kind of a wavefunction for photons – he bluntly told me photons don’t have a wavefunction—not in the sense I was talking at least. Photons are associated with a traveling electric and a magnetic field vector. That’s it. Full stop. Photons do not have a ψ or φ function. [I am using ψ and φ to refer to the position or momentum wavefunction. Both are related: if we have one, we have the other.] So I could have given up – but then I just couldn’t let go of the idea of a photon wavefunction. The structural similarity in the propagation mechanism of the electric and magnetic field vectors E and B just looks too much like the quantum-mechanical wavefunction. So I kept trying and, while I don’t think I fully solved the riddle, I feel I understand it much better now. Let me show you. I. An electromagnetic wave in free space is fully described by the following two equations:

1. ∂B/∂t = −∇×E

2. ∂E/∂t = c²·∇×B

We’re making abstraction here of stationary charges, and we also do not consider any currents here, so no moving charges either. So I am omitting the ∇·E = ρ/ε₀ equation (i.e. the first of Maxwell’s four equations), and I am also omitting the j/ε₀ term in the second equation. So, for all practical purposes (i.e. for the purpose of this discussion), you should think of a space with no charges: ρ = 0 and j = 0. It’s just a traveling electromagnetic wave.
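Before we simplify, here’s a quick symbolic check – a sketch using sympy, and jumping ahead a bit by already putting c = 1 – that a linearly polarized plane wave does solve these two curl equations. The component choice matches the B = i·E convention I’ll use below:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k = sp.symbols('k', positive=True)
w = k                                   # omega = k, i.e. c = 1

E = sp.Matrix([0, sp.cos(k*x - w*t), sp.sin(k*x - w*t)])
B = sp.Matrix([0, -sp.sin(k*x - w*t), sp.cos(k*x - w*t)])

def curl_x(F):
    # curl of a field that varies along x only: (0, -dFz/dx, dFy/dx)
    return sp.Matrix([0, -sp.diff(F[2], x), sp.diff(F[1], x)])

print(sp.simplify(sp.diff(B, t) + curl_x(E)))   # zero vector: dB/dt = -curl(E)
print(sp.simplify(sp.diff(E, t) - curl_x(B)))   # zero vector: dE/dt =  curl(B)
```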
To make things even simpler, we’ll assume our time and distance units are chosen such that c = 1, so the equations above reduce to:

1. ∂B/∂t = −∇×E

2. ∂E/∂t = ∇×B

Perfectly symmetrical, except for the minus sign in the first equation. As for the interpretation, I should refer you to one of my many posts but, briefly, the ∇× operator is the curl operator. It’s a vector operator: it describes the (infinitesimal) rotation of a (three-dimensional) vector field. We discussed heat flow a couple of times, or the flow of a moving liquid. So… Well… If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. The direction of the curl vector is the axis of rotation as determined by the ubiquitous right-hand rule, and its magnitude is the magnitude of rotation. OK. Next. II. For the wavefunction, we have Schrödinger’s equation, ∂ψ/∂t = i·(ħ/2m)·∇²ψ, which relates two complex-valued functions (∂ψ/∂t and ∇²ψ). [Note I am assuming we have no force fields (so no V), and also note I brought the i·ħ to the other side: −(ħ²/2m)/(i·ħ) = −(ħ/2m)/i = +i·(ħ/2m).] Now, complex-valued functions consist of a real and an imaginary part, and you should be able to verify the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation is equivalent to the following set of two equations:

1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ)

2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ)

Perfectly symmetrical as well, except for the minus sign in the first equation. 🙂 [In case you don’t immediately see what I am doing here, note that two complex numbers a + i·b and c + i·d are equal if, and only if, their real and imaginary parts are the same. However, here we have something like this: a + i·b = i·(c + i·d) = i·c + i²·d = −d + i·c (remember i² = −1).] Now, the energy E in the wave equation – i.e. the E in the ψ(θ) = ψ(x, t) = a·e^(iθ) = a·e^(−i·(E·t − p·x)/ħ) wavefunction – consists of:

1. The rest energy E₀ = m₀·c²;

2. The kinetic energy mᵥ·v²/2 = (mᵥ·v)·(mᵥ·v)/(2mᵥ) = p²/(2m);

3. The potential energy V.

[Note we’re using a non-relativistic formula for the kinetic energy here, but it doesn’t matter. It’s just to explain the various components of the total energy of the particle.] Now let’s assume our particle has zero rest mass, so E₀ = 0. By the way, note that the rest mass term is mathematically equivalent to the potential term in both the wavefunction as well as in Schrödinger’s equation: E₀·t + V·t = (E₀ + V)·t, and V·ψ + E₀·ψ = (V + E₀)·ψ. So… Yes. We can look at the rest mass as some kind of potential energy or – alternatively – add the equivalent mass of the potential energy to the rest mass term. Note that I am not saying it is a photon. I am just hypothesizing there is such thing as a zero-mass particle without any other qualifications or properties. In fact, it’s not a photon, as I’ll prove later. A photon packs more energy. 🙂 All it’s got in common with a photon is that all of its energy is kinetic, as both E₀ and V are zero. So our elementary wavefunction ψ(θ) = ψ(x, t) = e^(iθ) = e^(−i·[(E₀ + p²/(2m) + V)·t − p·x]/ħ) reduces to e^(−i·(p²/(2m)·t − p·x)/ħ). [Note I don’t include any coefficient (a) in front, as that’s just a matter of normalization.] So we’re looking at the wavefunction of a massless particle here. While I mentioned it’s not a photon – or, to be precise, it’s not necessarily a photon – it has energy and momentum, just like a photon. [In case you forgot, the energy and momentum of a photon are given by the E/p = c relation.]
Now, it’s only natural to assume our zero-mass particle will be traveling at the speed of light, because the slightest force will give it an infinite acceleration. Hence, its velocity is also equal to 1. Therefore, we can write its momentum as p = m·c = m, so we get: E = m = p Wow! What a weird combination, isn’t it? It is… But… Well… It’s OK. [You tell me why it wouldn’t be OK. It’s true we’re glossing over the dimensions here, but natural units are natural units, and so c = c² = 1. So… Well… No worries!] The point to note is that the E = m = p equality yields extremely simple but also very sensible results. For the group velocity of our e^(i(kx − ωt)) wavefunction, we get: vg = ∂ω/∂k = ∂E/∂p = ∂p/∂p = 1 So that’s the velocity of our zero-mass particle (c, i.e. the speed of light) expressed in natural units once more—just like what we found before. For the phase velocity, we get: vp = ω/k = E/p = p/p = 1 What’s the corresponding wavefunction? It’s a·e^(−i·[E·t − p·x]/ħ), of course. However, because of that E = m = p relation (and because we use Planck units), we can write it as a·e^(−i·(m·t − m·x)) = a·e^(−i·m·(t − x)). Let’s now calculate the time derivative and the Laplacian to see if it solves the Schrödinger equation, i.e. ∂ψ/∂t = i·(ħ/2m)·∇²ψ:

• ∂ψ/∂t = −i·a·m·e^(−i·m·(t − x))

• ∇²ψ = ∂²[a·e^(−i·m·(t − x))]/∂x² = ∂[a·e^(−i·m·(t − x))·(i·m)]/∂x = −a·m²·e^(−i·m·(t − x))

So the ∂ψ/∂t = i·(1/2m)·∇²ψ equation becomes: −i·a·m·e^(−i·m·(t − x)) = −i·a·(1/2m)·m²·e^(−i·m·(t − x)) ⇔ 1 = 1/2 !? The damn 1/2 factor. Schrödinger wants it in his wave equation, but it does not work here. We’re in trouble! So… Well… What’s the conclusion? Did Schrödinger get that 1/2 factor wrong? Yes. And no. His wave equation is the wave equation for electrons, or for spin-1/2 particles with a non-zero rest mass in general. So the wave equation for these zero-mass particles should not have that 1/2 factor in the diffusion constant. Of course, you may think those zero-mass particle wavefunctions make no sense because… Well… Their argument is zero, right? Think of it. When we – as an outside observer – look at the clock of an object traveling at the speed of light, its clock looks like it’s standing still. So if we assume t = 0 when x = 0, t will still be zero after it has traveled two or three light-seconds, or light-years! So its t – as we observe it from our inertial framework – is equal to zero forever! So both t and x are zero—forever! Well… Maybe. Maybe not. It’s sure we have some difficulty here, as evidenced also from the fact that, if m₀ = 0, then θ = m₀·(t − v·x)/√(1−v²) should be zero, right? Well… No. Look at it: we both multiply and divide by zero here: m₀ is zero, but √(1−v²) is zero too! So we can’t define the θ argument, and so we also can’t really define x and t here, it seems. The conclusion is simple: our zero-mass particle is nowhere and everywhere at the same time: it really just models the flow of energy in space! Let’s quickly do the derivations for E = m = p once more, but while not plugging in any specific values. However, we will assume all is measured in natural units so ħ = 1. So the wavefunction is just a·e^(−i·[E·t − p·x]). The derivatives now become:

• ∂ψ/∂t = −a·i·E·e^(−i·[E·t − p·x])

• ∇²ψ = ∂²[a·e^(−i·[E·t − p·x])]/∂x² = ∂[a·i·p·e^(−i·[E·t − p·x])]/∂x = −a·p²·e^(−i·[E·t − p·x])

So the ∂ψ/∂t = i·(1/m)·∇²ψ equation becomes: −a·i·E·e^(−i·[E·t − p·x]) = −i·(1/m)·a·p²·e^(−i·[E·t − p·x]) ⇔ E = p²/m It all works like a charm, as we assumed E = m = p. Note that the E = p²/m condition closely resembles the kinetic energy formula one often sees: K.E. = m·v²/2 = m·m·v²/(2m) = p²/(2m).
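If you don’t trust my derivatives, here’s a three-line symbolic check (sympy, natural units): the residue of the equation vanishes if, and only if, E = p²/m:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, E, p, m = sp.symbols('a E p m', positive=True)

psi = a * sp.exp(-sp.I * (E*t - p*x))        # natural units: hbar = 1

# the equation without the 1/2 factor: d(psi)/dt = i*(1/m)*d2(psi)/dx2
residue = sp.diff(psi, t) - sp.I * (1/m) * sp.diff(psi, x, 2)
print(sp.simplify(residue / psi))            # -> I*(p**2 - E*m)/m: zero iff E = p**2/m
```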
We just don’t have the 1/2 factor in our E = p²/m formula, which is great—because we don’t want it! 🙂 Just to make sure: let me add that, when we write that E = m = p, we mean their numerical values are the same. Their dimensions remain what they are, of course. Just to make sure you get this rather subtle point, we’ll do a quick dimensional analysis of that E = p²/m formula: [p²/m] = (N·s)²/(N·s²/m) = N·m = J, so we do get an energy dimension on both sides. To conclude this section, let’s now just calculate the derivatives in the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation (i.e. the equation without the 1/2 factor) without any special assumptions at all. So no E = m = p stuff, and we also will not assume we’re measuring stuff in natural units, so our elementary wavefunction is just what it is: a·e^(−i·[E·t − p·x]/ħ). The derivatives now become:

• ∂ψ/∂t = −a·i·(E/ħ)·e^(−i·[E·t − p·x]/ħ)

• ∇²ψ = ∂²[a·e^(−i·[E·t − p·x]/ħ)]/∂x² = ∂[a·i·(p/ħ)·e^(−i·[E·t − p·x]/ħ)]/∂x = −a·(p²/ħ²)·e^(−i·[E·t − p·x]/ħ)

So the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation now becomes: −a·i·(E/ħ)·e^(−i·[E·t − p·x]/ħ) = −i·(ħ/m)·a·(p²/ħ²)·e^(−i·[E·t − p·x]/ħ) ⇔ E = p²/m We get that E = p²/m formula again, so that’s twice the kinetic energy. Note that we do not assume stuff like E = m = p here. It’s all quite general. So… Well… It’s all perfect. 🙂 Well… No. We can write that E = p²/m as E = p²/m = m·v², and that condition is nonsense, because we know that E = m·c², so that E = p²/m is only fulfilled if m·c² = m·v², i.e. if v = c. So, again, we see this rather particular Schrödinger equation works only for zero-mass particles. In fact, what it describes is just a general propagation mechanism for energy. Fine. On to the next: the photon wavefunction. Indeed, the photon does have a wavefunction, and it’s different from the wavefunction of my hypothetical zero-mass particle. Let me show how it’s different. However, before we do, let me say something about the superposition principle—which I always think of as the ‘additionality’ principle, because that’s what we’re doing: we’re just adding waves.

The superposition principle

The superposition principle tells us that any linear combination of solutions to a (homogeneous linear) differential equation will also be a solution. You know that’s how we can localize our wavefunction: we just add a lot of them and get some bump. It also works the other way around: we can analyze any regular wave as a sum of elementary waves. You’ve heard of this: it’s referred to as a Fourier analysis, and you can find more detail on that in my posts on that topic. My favorite illustration is still the one illustrating the Fourier transform on Wikipedia. We can really get whatever weird shape we want. There is one catch, however: we need to combine waves with different frequencies, which… Well… How do we do that? For that, we need to introduce uncertainty, so we do not have one single definite value for E = p = m. This shows, once again, that we’re just analyzing energy here—not some real-life elementary particle. So… Well… We’ll come back to that. Now, before we look at the wavefunction for the photon, let me quickly add something on the energy concepts we are using here.

Relativistic and non-relativistic kinetic energy

You may have read that the Schrödinger equation is non-relativistic. That is correct, and not correct at the same time. The equation on his grave (below) is much more general, and encompasses both the relativistic as well as the non-relativistic case, depending on what you use for the operator (H) on the right-hand side of the equation: i·ħ·ψ̇ = H·ψ. The ‘over-dot’ is Newton’s notation for the time derivative.
In fact, if you click on the picture above (and zoom in a bit), then you’ll see that the craftsman who made the stone grave marker, mistakenly, also carved a dot above the psi (ψ) on the right-hand side of the equation—but then someone pointed out his mistake and so the dot on the right-hand side isn’t painted. 🙂 The thing I want to talk about here, however, is the H in that expression above. For the non-relativistic case, that operator is equal to: H = −(ħ²/2m)·∇² + V So that gives us the Schrödinger equation we started off with. It’s referred to as a non-relativistic equation because the mass concept is the m that appears in the classical kinetic energy formula: K.E. = m·v²/2. Now that’s a non-relativistic approximation. In relativity theory, the kinetic energy of an object is calculated as the difference between (1) the total energy, which is given by Einstein’s mass-energy equivalence relation: E = m·c² = mᵥ·c², and (2) the rest mass energy, which – as mentioned above – is like potential energy, and which is given by E₀ = m₀·c². So the relativistically correct formula for the kinetic energy is the following: K.E. = mᵥ·c² − m₀·c² = m₀·c²/√(1−v²/c²) − m₀·c² Now that looks very different, doesn’t it? Let’s compare the relativistic and non-relativistic formula by plotting them using equivalent time and distance units (so c = 1), and for a mass that we’ll also equate to one. As you can see from the graph below, the two concepts do not differ much for non-relativistic velocities, but the gap between them becomes huge as v approaches c. [Note the optics: it looks like the two functions are approaching each other again after separating, but it’s not the case! Remember to measure distance along the y-axis here!] Hence, for zero-mass particles we should use the relativistic kinetic energy formula, which, for m₀ = 0 and v = c, becomes: K.E. = mᵥ·c² − m₀·c² = mᵥ·c² = mc·c² = m·c² = E So that’s all of the energy. No missing energy: in the absence of force fields, zero-mass particles have no potential energy: all of their energy is kinetic. What about our E = p²/m formula? Is that formula relativistic or non-relativistic? Well… We derived it assuming v = c or, in equivalent units: v = 1. Now, both the p = mᵥ·v = m·v momentum formula and the E = m·c² equation are relativistically correct. Hence, for v = c, we can write: E = (m²/m)·c² = (m²·c²)/m = p²/m. Bingo! No problem whatsoever. In contrast, the E = p²/(2m) = m·v²/2 formula is just the classical kinetic energy formula. Or… Well… Is it? The classical formula is m₀·v²/2: it uses the rest mass, not the relativistic mass. So… Well… It’s really quite particular! 🙂 But don’t worry about it. You’ll soon understand what it stands for. 🙂

IV. The wavefunction of a photon

Look at the following two animations: [Animation 1: the electric field vector of an electromagnetic wave traveling through space. Animation 2: a rotating complex exponential, i.e. a regular quantum-mechanical wavefunction.] Both are the same, and then they’re not. The illustration on the right-hand side is a regular quantum-mechanical wavefunction, i.e. an amplitude wavefunction: the x-axis represents time, so we’re looking at the wavefunction at some particular point in space. [Of course, we could just switch the dimensions and it would all look the same.] The illustration on the left-hand side looks similar, but it’s not an amplitude wavefunction. The animation shows how the electric field vector (E) of an electromagnetic wave travels through space. Its shape is the same. So it’s the same function. Is it also the same reality? Yes and no. And I would say: more no than yes—in this case, at least. Note that the animation does not show the accompanying magnetic field vector (B).
That vector is equally essential in the electromagnetic propagation mechanism according to Maxwell’s equations, which—let me remind you—are:

1. ∂B/∂t = −∇×E

2. ∂E/∂t = ∇×B

In fact, I should write the second equation as ∂E/∂t = c²·∇×B, but then I assume we measure time and distance in equivalent units, so c = 1. You know that E and B are two aspects of one and the same thing: if we have one, then we have the other. To be precise, B is always orthogonal to E, in the direction that’s given by the right-hand rule for the following vector cross-product: B = ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation). The reality behind this is illustrated below for a linearly polarized electromagnetic wave. [Illustration: a linearly polarized electromagnetic wave, with B orthogonal to E] The B = ex×E equation is equivalent to writing B = i·E, which is equivalent to: B = i·E = e^(i·π/2)·e^(i(kx − ωt)) = cos(kx − ωt + π/2) + i·sin(kx − ωt + π/2) = −sin(kx − ωt) + i·cos(kx − ωt) Now, E and B have only two components: Ey and Ez, and By and Bz. That’s only because we’re looking at some ideal or elementary electromagnetic wave here but… Well… Let’s just go along with it. 🙂 It is then easy to prove that the equation above amounts to writing: By = −Ez and Bz = Ey We should now think of Ey and Ez as the real and imaginary part of some wavefunction, which we’ll denote as ψE = e^(i(kx − ωt)). So we write: E = (Ey, Ez) = Ey + i·Ez = cos(kx − ωt) + i·sin(kx − ωt) = Re(ψE) + i·Im(ψE) = ψE = e^(i(kx − ωt)) What about B? We just do the same, so we write: B = (By, Bz) = By + i·Bz = ψB = i·E = i·ψE = −sin(kx − ωt) + i·cos(kx − ωt) = −Im(ψE) + i·Re(ψE) Now we need to prove that ψE and ψB are regular wavefunctions, which amounts to proving they satisfy Schrödinger’s equation, i.e. ∂ψ/∂t = i·(ħ/m)·∇²ψ, for both ψE and ψB. [Note I use that revised Schrödinger equation, which uses the E = m·v² energy concept, i.e. twice the kinetic energy.] To prove that ψE and ψB are regular wavefunctions, we should prove that:

1. Re(∂ψE/∂t) = −(ħ/m)·Im(∇²ψE) and Im(∂ψE/∂t) = (ħ/m)·Re(∇²ψE), and

2. Re(∂ψB/∂t) = −(ħ/m)·Im(∇²ψB) and Im(∂ψB/∂t) = (ħ/m)·Re(∇²ψB).

Let’s do the calculations for the second pair of equations. The time derivative on the left-hand side is equal to: ∂ψB/∂t = ω·cos(kx − ωt) + i·ω·sin(kx − ωt) So the two equations for ψB are equivalent to writing:

1. Re(∂ψB/∂t) = −(ħ/m)·Im(∇²ψB) ⇔ ω·cos(kx − ωt) = k²·(ħ/m)·cos(kx − ωt)

2. Im(∂ψB/∂t) = (ħ/m)·Re(∇²ψB) ⇔ ω·sin(kx − ωt) = k²·(ħ/m)·sin(kx − ωt)

So we see that both conditions are fulfilled if, and only if, ω = k²·(ħ/m). Now, we also demonstrated in that post of mine that Maxwell’s equations imply the following: ∂By/∂t = k·Ey and ∂Bz/∂t = k·Ez. Hence, using those By = −Ez and Bz = Ey equations above, we can also calculate these derivatives as:

1. ∂By/∂t = −∂Ez/∂t = −∂sin(kx − ωt)/∂t = ω·cos(kx − ωt) = ω·Ey

2. ∂Bz/∂t = ∂Ey/∂t = ∂cos(kx − ωt)/∂t = ω·sin(kx − ωt) = ω·Ez

In other words, Maxwell’s equations imply that ω = k, which is consistent with us measuring time and distance in equivalent units, so the phase velocity is c = 1 = ω/k. So far, so good. We basically established that the propagation mechanism for an electromagnetic wave, as described by Maxwell’s equations, is fully coherent with the propagation mechanism—if we can call it like that—as described by Schrödinger’s equation. We also established the following equalities:

1. ω = k

2. ω = k²·(ħ/m)

The second of the two de Broglie equations tells us that k = p/ħ, so we can combine these two equations and re-write these two conditions as: ω/k = 1 = k·(ħ/m) = (p/ħ)·(ħ/m) = p/m ⇔ p = m What does this imply? The p here is the momentum: p = m·v, so this condition implies v must be equal to 1 too, so the wave velocity is equal to the speed of light.
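Before we move on, a quick numeric sanity check of that B = i·E bookkeeping—nothing deep, just verifying that multiplying by i swaps the components the way I claimed (By = −Ez and Bz = Ey):

```python
import numpy as np

k = w = 1.0                              # omega = k (c = 1)
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = 0.3

psi_E = np.exp(1j * (k*x - w*t))         # E = Ey + i*Ez
psi_B = 1j * psi_E                       # B = i*E

print(np.allclose(psi_B.real, -psi_E.imag))   # True: By = -Ez
print(np.allclose(psi_B.imag,  psi_E.real))   # True: Bz = Ey
```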
Makes sense, because we actually are talking light here. 🙂 In addition, because it’s light, we also know E/p = c = 1, so we have – once again – the general E = p = m equation, which we’ll need! OK. Next. Let’s write the Schrödinger wave equation for both wavefunctions:

1. ∂ψE/∂t = i·(ħ/mE)·∇²ψE, and

2. ∂ψB/∂t = i·(ħ/mB)·∇²ψB.

Huh? What are mE and mB? We should only associate one mass concept with our electromagnetic wave, shouldn’t we? Perhaps. I just want to be on the safe side now. Of course, if we distinguish mE and mB, we should probably also distinguish pE and pB, and EE and EB as well, right? Well… Yes. If we accept this line of reasoning, then the mass factor in Schrödinger’s equations is pretty much like the 1/c² = μ₀·ε₀ factor in Maxwell’s (1/c²)·∂E/∂t = ∇×B equation: the mass factor appears as a property of the medium, i.e. the vacuum here! [Just check my post on physical constants in case you wonder what I am trying to say here, in which I explain why and how c defines the (properties of the) vacuum.] To be consistent, we should also distinguish pE and pB, and EE and EB, and so we should write ψE and ψB as:

1. ψE = e^(i(kE·x − ωE·t)), and

2. ψB = e^(i(kB·x − ωB·t)).

Huh? Yes. I know what you think: we’re talking one photon—or one electromagnetic wave—so there can be only one energy, one momentum and, hence, only one k, and one ω. Well… Yes and no. Of course, the following identities should hold: kE = kB and, likewise, ωE = ωB. So… Yes. They’re the same: one k and one ω. But then… Well… Conceptually, the two k’s and ω’s are different. So we write:

1. pE = EE = mE, and

2. pB = EB = mB.

The obvious question is: can we just add them up to find the total energy and momentum of our photon? The answer is obviously positive: E = EE + EB, p = pE + pB and m = mE + mB. Let’s check a few things now. How does it work for the phase and group velocity of ψE and ψB? Simple:

1. vg = ∂ωE/∂kE = ∂[EE/ħ]/∂[pE/ħ] = ∂EE/∂pE = ∂pE/∂pE = 1

2. vp = ωE/kE = (EE/ħ)/(pE/ħ) = EE/pE = pE/pE = 1

So we’re fine, and you can check the result for ψB by substituting the subscript E for B. To sum it all up, what we’ve got here is the following:

1. We can think of a photon having some energy that’s equal to E = p = m (assuming c = 1), but that energy would be split up in an electric and a magnetic wavefunction respectively: ψE and ψB.

2. Schrödinger’s equation applies to both wavefunctions, but the E, p and m in those two wavefunctions are the same and not the same: their numerical value is the same (pE = EE = mE = pB = EB = mB), but they’re conceptually different. They must be: if not, we’d get a phase and group velocity for the wave that doesn’t make sense.

Of course, the phase and group velocity for the sum of the ψE and ψB waves must also be equal to c. This is obviously the case, because we’re adding waves with the same phase and group velocity c, so there’s no issue with the dispersion relation. So let’s insert those pE = EE = mE = pB = EB = mB values in the two wavefunctions. For ψE, we get: ψE = e^(i(kE·x − ωE·t)) = e^(i·[(pE/ħ)·x − (EE/ħ)·t]) You can do the calculation for ψB yourself. Let’s simplify our life a little bit and assume we’re using Planck units, so ħ = 1, and so the wavefunction simplifies to ψE = e^(i·(pE·x − EE·t)). We can now add the components of E and B using the summation formulas for sines and cosines:

1. By + Ey = cos(pB·x − EB·t + π/2) + cos(pE·x − EE·t) = 2·cos[(p·x − E·t + π/2)/2]·cos(π/4) = √2·cos(p·x/2 − E·t/2 + π/4)
2. Bz + Ez = sin(pB·x − EB·t + π/2) + sin(pE·x − EE·t) = 2·sin[(p·x − E·t + π/2)/2]·cos(π/4) = √2·sin(p·x/2 − E·t/2 + π/4)

Interesting! We find a composite wavefunction for our photon which we can write as: E + B = ψE + ψB = E + i·E = √2·e^(i(p·x/2 − E·t/2 + π/4)) = √2·e^(i·π/4)·e^(i(p·x/2 − E·t/2)) = √2·e^(i·π/4)·E What a great result! It’s easy to double-check, because we can see the E + i·E = √2·e^(i·π/4)·E formula implies that 1 + i should equal √2·e^(i·π/4). Now that’s easy to prove, both geometrically (just do a drawing) or formally: √2·e^(i·π/4) = √2·cos(π/4) + i·√2·sin(π/4) = √2·(√2/2) + i·√2·(√2/2) = 1 + i. We’re bang on! 🙂 We can double-check once more, because we should get the same from adding E and B = i·E, right? Let’s try: E + B = E + i·E = cos(pE·x − EE·t) + i·sin(pE·x − EE·t) + i·cos(pE·x − EE·t) − sin(pE·x − EE·t) = [cos(pE·x − EE·t) − sin(pE·x − EE·t)] + i·[sin(pE·x − EE·t) + cos(pE·x − EE·t)] Indeed, we can see we’re going to obtain the same result, because the −sinθ in the real part of our composite wavefunction is equal to cos(θ + π/2), and the cosθ in its imaginary part is equal to sin(θ + π/2). So the sum above is the same sum of cosines and sines that we did already. So our electromagnetic wavefunction, i.e. the wavefunction for the photon, is equal to: ψ = E + B = √2·e^(i(p·x/2 − E·t/2 + π/4)) What about the √2 factor in front, and the π/4 term in the argument itself? Not sure. It must have something to do with the way the magnetic force works, which is not like the electric force. Indeed, remember the Lorentz formula: the force on some unit charge (q = 1) will be equal to F = E + v×B. So… Well… We’ve got another cross-product here and so the geometry of the situation is quite complicated: it’s not like adding two forces F1 and F2 to get some combined force F = F1 + F2. In any case, we need the energy, and we know that it’s proportional to the square of the amplitude, so… Well… We’re spot on: the square of the √2 factor in the √2·cos and √2·sin products is 2, so that’s twice… Well… What? Hold on a minute! We’re actually taking the absolute square of the E + B = ψE + ψB = E + i·E = √2·e^(i(p·x/2 − E·t/2 + π/4)) wavefunction here. Is that legal? I must assume it is—although… Well… Yes. You’re right. We should do some more explaining here. We know that we usually measure the energy as some definite integral, from t = 0 to some other point in time, or over the cycle of the oscillation. So what’s the cycle here? Our combined wavefunction can be written as √2·e^(i(p·x/2 − E·t/2 + π/4)) = √2·e^(i(θ/2 + π/4)), so a full cycle would correspond to θ going from 0 to 4π here, rather than from 0 to 2π. So that explains the √2 factor in front of our wavefunction. It’s quite fascinating, isn’t it? A natural question that pops up, of course, is whether or not it can explain the different behavior of bosons and fermions. Indeed, we know that:

1. The amplitudes of identical bosonic particles interfere with a positive sign, so we have Bose-Einstein statistics here. As Feynman writes it: (amplitude direct) + (amplitude exchanged).

2. The amplitudes of identical fermionic particles interfere with a negative sign, so we have Fermi-Dirac statistics here: (amplitude direct) − (amplitude exchanged).

I’ll think about it. I am sure it’s got something to do with that B = i·E formula or, to put it simply, with the fact that, when bosons are involved, we get two wavefunctions (ψE and ψB) for the price of one. The reasoning should be something like this: I. For a massless particle (i.e. a zero-mass fermion), our wavefunction is just ψ = e^(i(p·x − E·t)).
So we have no √2 or √2·e^(i·π/4) factor in front here. So we can just add any number of them – ψ1 + ψ2 + ψ3 + … – and then take the absolute square of the amplitude to find a probability density, and we’re done. II. For a photon (i.e. a zero-mass boson), our wavefunction is √2·e^(i·π/4)·e^(i(p·x − E·t)/2), which – let’s introduce a new symbol – we’ll denote by φ, so φ = √2·e^(i·π/4)·e^(i(p·x − E·t)/2). Now, if we add any number of these, we get a similar sum but with that √2·e^(i·π/4) factor in front, so we write: φ1 + φ2 + φ3 + … = √2·e^(i·π/4)·(ψ1 + ψ2 + ψ3 + …). If we take the absolute square now, we’ll see the probability density will be equal to twice the density for the ψ1 + ψ2 + ψ3 + … sum, because |√2·e^(i·π/4)·(ψ1 + ψ2 + ψ3 + …)|² = |√2·e^(i·π/4)|²·|ψ1 + ψ2 + ψ3 + …|² = 2·|ψ1 + ψ2 + ψ3 + …|² So… Well… I still need to connect this to Feynman’s (amplitude direct) ± (amplitude exchanged) formula, but I am sure it can be done. Now, we haven’t tested the complete √2·e^(i·π/4)·e^(i(p·x − E·t)/2) wavefunction. Does it respect Schrödinger’s ∂ψ/∂t = i·(1/m)·∇²ψ or, including the 1/2 factor, the ∂ψ/∂t = i·[1/(2m)]·∇²ψ equation? [Note we assume, once again, that ħ = 1, so we use Planck units once more.] Let’s see. We can calculate the derivatives as:

• ∂ψ/∂t = −√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)·(i·E/2)

• ∇²ψ = ∂²[√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)]/∂x² = ∂[√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)·(i·p/2)]/∂x = −√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)·(p²/4)

So Schrödinger’s equation becomes: −√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)·(i·E/2) = −i·(1/m)·√2·e^(i·π/4)·e^(i·[p·x − E·t]/2)·(p²/4) ⇔ 1/2 = 1/4 !? That’s funny! It doesn’t work! The E and m and p² are OK because we’ve got that E = m = p equation, but we’ve got problems with yet another factor 2. It only works when we use the 2/m coefficient in Schrödinger’s equation. So… Well… There’s no choice. That’s what we’re going to do. The Schrödinger equation for the photon is ∂ψ/∂t = i·(2/m)·∇²ψ! This is all great, and very fundamental stuff! Let’s now move on to Schrödinger’s actual equation, i.e. the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation.

V. The wavefunction for spin-1/2 particles

Schrödinger’s original equation – with the 1/2 factor – is not wrong, of course! It can’t be wrong, as it correctly explains the precise shape of electron orbitals! So let’s think about the wavefunction that makes Schrödinger’s original equation work. Leaving the V·ψ term out, that equation is: ∂ψ/∂t = i·(ħ/2m)·∇²ψ Hence, if our elementary wavefunction is a·e^(−i·[E·t − p·x]/ħ), then the derivatives become:

• ∂ψ/∂t = −a·i·(E/ħ)·e^(−i·[E·t − p·x]/ħ)

• ∇²ψ = −a·(p²/ħ²)·e^(−i·[E·t − p·x]/ħ)

So the ∂ψ/∂t = i·(ħ/2m)·∇²ψ equation now becomes: −a·i·(E/ħ)·e^(−i·[E·t − p·x]/ħ) = −i·(ħ/2m)·a·(p²/ħ²)·e^(−i·[E·t − p·x]/ħ) ⇔ E = p²/2m That’s a very weird condition, because we can re-write p²/2m as m·v²/2, and so we find that our wavefunction is a solution for Schrödinger’s equation if, and only if, E = m·v²/2. But that’s a weird formula, as it captures the kinetic energy only—and, even then, it should be written as m₀·v²/2. But we know that E = m·c². So what’s going on here? We must be doing something wrong here, but what? Let’s start with the basics by simplifying the situation first: we’ll, once again, assume a fermion with zero rest mass. I know you think I will not come back to a non-zero rest mass particle so as to answer the deep question here, but I promise you I will. Don’t worry.
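Before we tackle the spin-1/2 case, here’s a symbolic sketch of that 1/2 = 1/4 problem: I leave the numerical coefficient in the diffusion constant open – calling it c, which is just a label here, not the speed of light – and let sympy solve for it. With E = m = p, it comes out as 2, i.e. the 2/m coefficient indeed:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
E, p, m, c = sp.symbols('E p m c', positive=True)   # c = unknown coefficient (hbar = 1)

phi = sp.sqrt(2) * sp.exp(sp.I*sp.pi/4) * sp.exp(sp.I*(p*x - E*t)/2)

# test d(phi)/dt = i*(c/m)*d2(phi)/dx2 and solve for the coefficient c
residue = sp.diff(phi, t) - sp.I * (c/m) * sp.diff(phi, x, 2)
coeff = sp.solve(sp.Eq(sp.simplify(residue / phi), 0), c)[0]

print(coeff)                      # 2*E*m/p**2
print(coeff.subs({E: m, p: m}))   # 2  ->  the i*(2/m) coefficient
```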
The zero-mass fermion

If we do not want to doubt the formula for the elementary wavefunction – noting that the E = p²/2m condition comes out of it when combining it with Schrödinger’s equation, which we do not want to doubt at this point either – then… Well… The only thing we did was substitute p for m·v. So that must be wrong. Perhaps we’re using the wrong momentum formula. [Of course, we didn’t: we used the wrong mass concept, as I’ll explain in a moment. But just go along with the logic here for now.] The wrong momentum formula? Well… Yes. After all, we’ve used the formula for linear momentum, and we know an electron (or any spin-1/2 particle) has angular momentum as well. Let’s try the following: if we allow for two independent directions of motion (i.e. two degrees of freedom), then the equipartition theorem tells us that the energy should be equally divided over the two. Assuming the smallest possible value for the mass (m) in the linear momentum formula (p = m·v) and for the moment of inertia (I) in the angular momentum formula (L = I·ω) is equal to ħ/2, and also assuming that c = ω = 1 (so we are using equivalent time and distance units), we could envisage that the total momentum could be m·v + I·ω = ħ/2 + ħ/2 = ħ. Let’s denote that by a new p, which is the sum of the old linear momentum and the angular momentum. Hence, using natural units, we get: p = 2m It’s a weird formula, so let’s try to find it in some other way. The Schrödinger equation is equivalent to writing:

1. Re(∂ψ/∂t) = −(ħ/2m)·Im(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·(ħ/2m)·cos(kx − ωt)

2. Im(∂ψ/∂t) = (ħ/2m)·Re(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·(ħ/2m)·sin(kx − ωt)

So ω = k²·(ħ/2m). At the same time, ω/k = vp, i.e. the phase velocity of the wave. Hence, we find that: vp = ω/k = k·(ħ/2m) = (p/ħ)·(ħ/2m) = p/2m That’s sweet. Now we can use the E = p²/2m that we got when combining our elementary wavefunction with Schrödinger’s equation to get the following: E = p²/2m and p = 2m ⇔ E = (2m)²/2m = 2m So: E = p = 2m This looks weird but comprehensible. Note that the phase velocity of our wave is equal to c = 1, as vp = p/2m = 2m/2m = 1. What about the group velocity, i.e. vg = ∂ω/∂k? Let’s calculate it: vg = ∂ω/∂k = ∂[k²·(ħ/2m)]/∂k = 2k·(ħ/2m) = 2·(p/ħ)·(ħ/2m) = p/m = m·v/m = v = c = 1 That’s nice, because it’s what we wanted to find. If the group velocity would not equal the classical velocity of our particle, then our model would not make sense. Now, it’s nice we get that p = 2m equation when calculating the phase velocity, but… Well… Think about it: there’s something wrong here: if vp = p/2m, and p = m·v, then this formula cannot be correct for fermions that actually do have some rest mass, because it implies vp = m·v/2m = v/2. That doesn’t make sense, does it? Why not? Well… I’ll answer that question in the next section. We’re actually mixing stuff here that we shouldn’t be mixing. 🙂

The actual fermion (non-zero mass)

In what I wrote above, I showed that the Schrödinger wave equations for spin-zero, spin-1/2, and spin-one particles in free space differ from each other by a factor of two:

1. For particles with zero spin, we write: ∂ψ/∂t = i·(ħ/m)·∇²ψ. We get this by multiplying the ħ/(2m) factor in Schrödinger’s original wave equation – which applies to spin-1/2 particles (e.g. electrons) only – by two. Hence, the correction that needs to be made is very straightforward.

2. For fermions (spin-1/2 particles), Schrödinger’s equation is what it is: ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ.
3. For spin-1 particles (photons), we have ∂ψ/∂t = i·(2ħ/m)·∇²ψ, so here we multiply the ħ/m factor in Schrödinger’s wave equation for spin-zero particles by two, which amounts to multiplying Schrödinger’s original coefficient by four.

We simplified the analysis by assuming our particles had zero rest mass, and we found that we were basically modeling an energy flow when developing the model for the spin-zero particle—because spin-zero particles with zero rest mass don’t exist. In contrast, the model for the spin-one particle is a model that works for the photon—an actual bosonic particle. To be precise, we derived the photon wavefunction from Maxwell’s equations in free space, and then found the wavefunction is a solution for the ∂ψ/∂t = i·(2ħ/m)·∇²ψ equation only. For a real-life electron, we had a problem. If our elementary wavefunction is a·e^(−i·[E·t − p∙x]/ħ), then the derivatives in Schrödinger’s wave equation become ∂ψ/∂t = −a·i·(E/ħ)·e^(−i·[E·t − p∙x]/ħ) and ∇²ψ = −a·(p²/ħ²)·e^(−i·[E·t − p∙x]/ħ), and so we get that E = p²/2m = m·v²/2 condition once more, which conflicts with E = m·c².

The true answer is: Schrödinger did use a different energy concept when developing his equation. He used the following formula:

E = meff·v²/2

What’s meff? It’s referred to as the effective mass, and it has nothing to do with the real mass, or the true mass. In fact, the effective mass, in units of the true mass, can be anything between zero and infinity. So that resolves the paradox. I know you won’t be happy with the answer, but that’s what it is. 😦 I’ll come back to the question, however. Let’s do something more interesting now. Let’s calculate the phase velocity. It’s easy to see the phase velocity will be equal to vp = ω/k = (E/ħ)/(p/ħ) = E/p. Using natural units, that becomes:

vp = E/p = mᵥ/(mᵥ·v) = 1/v

Interesting! The phase velocity is the reciprocal of the classical velocity! This implies it is always superluminal, ranging from vp = ∞ to vp = 1 for v going from 0 to 1 = c, as illustrated in the simple graph below.

[graph: the phase velocity vp = 1/v as a function of v]

However, we are, of course, interested in the group velocity, as the group velocity should correspond to the classical velocity of the particle. The group velocity of a composite wave is given by the vg = ∂ω/∂k formula. Of course, that formula assumes an unambiguous relation between the temporal and spatial frequency of the component waves, which we may want to denote as ωn and kn, with n = 1, 2, 3,… However, we will not use the index, as the context makes it quite clear what we are talking about. The relation between ωn and kn is known as the dispersion relation, and one particularly nice way to calculate ω as a function of k is to distinguish the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ wave equation and, hence, re-write it as a pair of two equations:

1. Re(∂ψB/∂t) = −[ħ/(2m)]·Im(∇²ψB) ⇔ ω·cos(kx − ωt) = k²·[ħ/(2m)]·cos(kx − ωt)
2. Im(∂ψB/∂t) = [ħ/(2m)]·Re(∇²ψB) ⇔ ω·sin(kx − ωt) = k²·[ħ/(2m)]·sin(kx − ωt)

Both equations imply the following dispersion relation:

ω = ħ·k²/(2m)

We can now calculate vg = ∂ω/∂k as:

vg = ∂ω/∂k = ∂[ħ·k²/(2m)]/∂k = 2ħk/(2m) = ħ·(p/ħ)/m = p/m = m·v/m = v

Now, let’s have another look at the energy concept that’s implicit in Schrödinger’s equation. We said he used the E = meff·v²/2 formula, but let’s look at the energy concept once more. We said the phase velocity of our wavefunction was equal to vp = E/p. Now p = mᵥ·v, but it’s only when we’re modeling zero-mass particles that v = vp. So, for non-zero rest mass particles, the energy concept that’s implicit in the de Broglie relations and the wavefunction is equal to:

E = vp·p = mᵥ·vp·v ≠ mᵥ·v² for v ≠ vp

Now, we just calculated that vp = 1/v, so we can write E = vp·p = mᵥ·(1/v)·v = mᵥ. In other words, E = m.
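A quick cross-check of that chain of identities, again as my own sympy sketch rather than anything from the original post: the dispersion relation gives a group velocity that is exactly twice the phase velocity, and E = vp·p with vp = 1/v reduces to the relativistic mass.

```python
import sympy as sp

k, m, hbar, v, m0 = sp.symbols('k m hbar v m0', positive=True)

# Dispersion relation from Schrödinger's equation: ω = ħ·k²/(2m)
omega = hbar*k**2/(2*m)
vg = sp.diff(omega, k)     # group velocity: ∂ω/∂k = ħ·k/m = p/m
vp = omega/k               # phase velocity: ω/k = ħ·k/(2m) = p/(2m)
print(sp.simplify(vg/vp))  # 2 — the group velocity is twice the phase velocity

# Energy implicit in the de Broglie relations: E = vp·p with vp = 1/v
mv = m0/sp.sqrt(1 - v**2)  # relativistic mass (natural units, c = 1)
E = (1/v)*(mv*v)           # E = vp·p = (1/v)·(mᵥ·v)
print(sp.simplify(E - mv)) # 0 — so E is just the (relativistic) mass
```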
So… Well… That’s consistent at least! However, that leads one to conclude that the 1/2 factor in Schrödinger’s equation should not be there. Indeed, if we’d drop it, and we’d calculate those derivatives once more, and substitute them, we’d get a condition that makes somewhat more sense:

E = p²/m = m·v²

This comes naturally out of the p = mᵥ·v = m·v and E = m·c² equations, which are both relativistically correct, and for v = c, this gives us the E = m·c² equation. It’s still a weird equation though, as we do not get the E = m·c² equation when v ≠ c. But then… Well… So be it. What we get is an energy formula that says the total energy is twice the kinetic energy. Or… Well… Not quite, because the classical kinetic energy formula is m₀·v²/2, not mᵥ·v²/2. Now, you’ll have to admit that fits much better with our θ = m₀·t’ and m₀·c² energy formula for those two springs, doesn’t it?

The whole discussion makes me think of an inconsistency in that relativistic definition of the kinetic energy. We said that, for particles with zero rest mass, all of the energy was kinetic, and we wrote it as:

K.E. = E = mc·c²

Because we know that the energy of a photon is finite (like 2 or 3 eV for visible light, or like 100,000 eV for gamma-rays), we know mc must have a finite value too, but how can the mass of something moving at the speed of light be finite? It’s one of those paradoxes in relativity theory. The answer, of course, is that we only see some mass moving (a photon) in our reference frame: the photon in its own space is just a wave, and it’s frozen—so to speak—in its own (inertial) frame of reference. However, while that’s (probably) the best answer we can give at this point, it’s not very satisfactory. At this point, I am thinking of a quote that I like a lot. It’s from an entirely different realm of experience – much less exact than math or physics: “We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.” (Robert Langan, in Jeremy Safran (2003), Psychoanalysis and Buddhism: an Unfolding Dialogue)

As all of physics is expressed in the language of math, we should substitute “the math” for the “words” in that quote above: it’s the math that spins us out now – not the words – over some void. And the math does give us some answer, for some time at least. 🙂 But… Then… Well… The math also gives us new questions to solve, it seems. 🙂 I must assume the relativistic version of Schrödinger’s equation, i.e. the Klein-Gordon equation, does away with these inconsistencies. But that’s even more advanced stuff than what we’ve been dealing with so far. And then it’s an equation which does not correctly give us the electron orbitals, so we don’t know what it describes—exactly. Let me remind you, at this point, of the relativistically correct relation between E, m, p and v, which is the following one:

p = (E/c²)·v or, using natural units, p = v·E

Now, if the m (or 2meff) factor in the ħ/m (or ħ/2meff) diffusion constant is to be interpreted as the (equivalent) mass of the total energy, so E = m (expressed in natural units), then the condition that comes out of the Schrödinger equation is:

E² = p² ⇔ p = E

So we’ve got that old E = p = m equation again. And how can we reconcile it with the relativistically correct p = v·E? Well… All is relative, so whose v are we talking about here?
Well… I am sorry, but the answer to that is rather unambiguous: our v, as we measure it. Then the question becomes: what v? Phase velocity? Group velocity? Again, the answer is unambiguous: we’re talking the group velocity, which corresponds to the classical velocity of our particle. So… When everything is said and done, there is only one conclusion: the use of that meff factor seems to ensure we’ve got the right formula, and we know that – for our wavefunction to be a solution to the Schrödinger equation – the following condition must be satisfied:

E·(2meff) = p² = m²·v²

Now, noting that the E = m relation holds (all is in natural units once more), that condition only works if:

m·(2meff) = m²·v² ⇔ meff = m·v²/2

As mentioned above, I’d rather drop that 1/2 factor and, hence, re-define meff as two times the old meff, in which case we get:

meff = Eeff = m·v² = E·v²

It’s an ugly solution, at first sight, but it makes all come out alright. And, in fact, it’s not all that ugly: the effective mass only differs from the classical mass because of a correction factor, which is equal to v², so that’s the square of some value between 0 and 1 (as v is a relative velocity here), so it’s some factor between 0 and 1 itself. Let me quickly insert the graph:

[graph: the meff/m = v² correction factor as a function of v]

Interesting! Let’s double-check it by substituting those derivatives we calculated into Schrödinger’s equation. It now gives us the following condition:

−a·i·(E/ħ)·e^(−i·[E·t − p∙x]/ħ) = −i·(ħ/meff)·a·(p²/ħ²)·e^(−i·[E·t − p∙x]/ħ) ⇔ E·meff = p² ⇔ m·m·v² = m²·v² = p²

It works. You’ll still wonder, of course: what is that effective mass? Well… As it appears in the diffusion constant, we can look at it as some property of the medium. In free space, it just becomes m, as v becomes c. However, in a crystal lattice (which is the context for which Schrödinger developed his equation), we get something entirely different. What makes a crystal lattice different? The presence of other particles and/or charges which, in turn, gives us force fields and potential barriers we need to break. So… Well… It’s all very strange but, when everything is said and done, the whole story makes sense. 🙂 For some time at least. 🙂 OK. We’re done. Let me just add something on the superposition principle here.

The superposition principle re-visited

As you can see from our calculations of the group velocity of our wave for a spin-1/2 particle, the 1/2 factor in the Schrödinger equation ensures we’ve got a ‘nice’ dispersion relation, as it also includes the 1/2 factor – ω = k²/(2m) – and that factor, in turn, cancels the 2 that comes down when doing that vg = ∂ω/∂k = ∂[k²/(2m)]/∂k = 2k/(2m) = k/m derivation. And then we do the p = m·v substitution and all is wonderful: we find that the group velocity corresponds to the classical particle velocity:

vg = k/m = p/m = m·v/m = v

But we’re still talking some wave here whose amplitude is the same all over spacetime, right? So how can we localize it? Think about it: for our zero-mass particles, there was no real need to resort to this funny business of adding waves to localize it, because we did not need to localize it. Why not? Well… It’s here that quantum mechanics and relativity theory come together in what might well be the most logical and absurd conclusion ever: As an outside observer, we’re going to see all those zero-mass particles as point objects whizzing by because of the relativistic length contraction. So their wavefunction is only all over spacetime in their proper space and time, not in ours!
I know it will take you some time to think about this, and you may actually refuse to believe this, but… Then… Well… I’ve been thinking about this for years, and I’ve come to the conclusion it’s the only way out: it must be true. So we don’t need to add waves for those zero-mass particles. In other words, we can have definite values for E, and so there’s no need for an uncertainty principle here. Furthermore, if we have definite values for E, we’ll also have definite values for p and m and… Well… Just note it: we only need one wave for our theoretical spin-zero particle and our photon. No superposition. No Fourier analysis! 🙂

So let’s now get back to our spin-1/2 particles, and spin-1/2 particles with actual mass, like electrons. We can get a localized wave in two ways:

I. We can introduce the Uncertainty Principle, so we allow some uncertainty for both E and p. This uncertainty is fundamental, because it’s directly linked to the agreed-upon hypothesis that, in physics, we have a quantum of action:

ħ = 1.0545718×10⁻³⁴ N·m·s

So E and p can vary, and the order of magnitude of that variation is given by the Uncertainty Relations:

Δx·Δp ≥ ħ/2 and ΔE·Δt ≥ ħ/2

That’s a very tiny magnitude, but then E and p are measured in terms of ħ, so it’s actually very substantial! [One needs to go through an actual exercise to appreciate this – see the quick numeric check at the end of this post – like the calculation of electron orbitals for the hydrogen atom, which we’ll discuss in the next section.]

II. We can insert potential energy. Remember, the E in the ψ(θ) = ψ(x, t) = a·e^(iθ) = a·e^(−i(E·t − p∙x)/ħ) wavefunction consists of:

1. The rest mass E₀;
2. The kinetic energy;
3. The potential energy V.

We only discussed particles in free space so far, so no force fields. No potential. The whole analysis changes if our particles are actually traveling in a force field, as an electron does when it’s in some orbital around the nucleus. But so here it’s better to work examples, and so that’s what we’ll start doing now. :-) [Note we’re using a non-relativistic formula for the kinetic energy here but, as mentioned above, the velocity of an electron in orbital is not relativistic. Indeed, you’ll remember that, when we were writing about natural units and the fine-structure constant, we calculated its (classical) velocity as a mere 2,187 km per second only.]

The solution for actual electron orbitals

As mentioned above, our easy E = p = 2m identity does not show the complexities of the real world. In fact, the derivation of Schrödinger’s equation is not very rigorous. Richard Feynman – who knows the historical argument much better than I do – actually says that some of the arguments that Schrödinger used were false. He also says that does not matter, stating that “the only important thing is that the ultimate equation gives a correct description of nature.” Indeed, if you click on the link to my post on that argument, you’ll see I also doubt if the energy concepts that are used in that argument are the right ones. In addition, some of the simplifications are as horrendous as the ones I used above. 🙂 So… Well… It’s actually quite surprising that Schrödinger’s derivation, “based on some heuristic arguments and some brilliant intuitive guesses”, actually does give a formula we can use to calculate electron orbitals. So how does that work? Well… The wave equation above described electrons in free space. Actual electrons are always in some orbit—in a force field around some nucleus. So we have to add the Vψ term and solve the new equation?
But… Well… I am not going to give you those calculations here because you can find them elsewhere: check my post on it or, better still, read the original. 🙂
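Post scriptum: here is the quick numeric check I promised when discussing the Uncertainty Relations above. It’s just my own back-of-the-envelope sketch in plain Python, showing that ħ/2, tiny as it looks, is very substantial at the atomic scale: confining an electron to about one ångström forces a velocity spread of the same order of magnitude as the 2,187 km/s orbital velocity we quoted.

```python
hbar = 1.0545718e-34        # quantum of action, J·s
m_e  = 9.109e-31            # electron mass, kg

dx = 1e-10                  # confine the electron to ~1 ångström (atomic scale)
dp = hbar / (2 * dx)        # minimum momentum spread from Δx·Δp ≥ ħ/2
dv = dp / m_e               # corresponding velocity spread

print(f"Δp ≥ {dp:.2e} kg·m/s")   # ≈ 5.3e-25 kg·m/s
print(f"Δv ≥ {dv:.2e} m/s")      # ≈ 5.8e5 m/s, i.e. hundreds of km per second
```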
Review by Dr. Engelman, November 11, 1996

“The unquestioning acceptance of the Copenhagen interpretation of quantum theory has, in the last 40 years or so, held back progress on the development of alternative theories. … Blind acceptance of the orthodox position cannot produce the challenges needed to push the theory eventually to its breaking point. And break it will, probably in a way no one can predict to produce a theory no one can imagine.” Jim Baggott, 1992 [1]

The grand unified theory of Randell L. Mills: a natural unification of quantum mechanics and relativity? This could well turn out to be true. At any rate, Mills proposes such a basic approach to quantum theory that it deserves considerably more attention from the general scientific community than it has received so far. The new theory appears to be a realization of Einstein’s vision and a fitting closure of the “Quantum Century” that started in 1900 with Max Planck’s work on black-body radiation and his subsequent postulate of energy quanta. It was Einstein’s lifelong dream to unify the quantum world with his theory of (special and general) relativity [2]. Even though he was one of the three eminent fathers of quantum mechanics – besides Planck and Bohr – Einstein had serious doubts about the uncertainties that were a basic feature of its theoretical framework. In his response to Born’s interpretation of the wave function as a probability-field (“ghost-field”) he made the now famous statement: “I am at all events convinced that He does not play dice.” [1,2] In addition, of course, quantum mechanics is fundamentally inconsistent with relativity. The somewhat forced unification in Dirac’s approach was hardly satisfying to that great genius of an Einstein: “I incline to belief that physicists will not be permanently satisfied with … an indirect description of Reality, even if the [quantum] theory can be fitted successfully to the General Relativity postulates.” [2] Einstein’s dream for a unified field theory envisioned a “programme which may suitably be called Maxwell’s … .” As his biographer Abraham Pais put it, his vision called for “start[ing] with a classical field theory, a unified field theory, and demand[ing] of that theory that the quantum rules should emerge as constraints imposed by that theory itself.” [2]

Randell L. Mills proposes such a unified field theory. He outlines a quantum theory for the atomistic world that is fully consistent with “Maxwell’s programme”: it is founded solely upon the classical laws of physics in the framework of Einstein’s relativity, with an additional Lorentz-invariant scalar wave equation for de Broglie matter waves. This additional wave equation is completely compatible with Maxwell’s vector wave equation of electromagnetism. The key to quantization of the steady state is the well-known physical law that a steady state of moving charge or matter, with or without acceleration, must not radiate either electromagnetic or gravitational waves. This postulate was originally derived from Maxwell’s equations in 1986 by Hermann Haus for a moving charge [3] and was generalized by Mills as follows: the steady-state eigenfunction of charge/matter has to be free of Fourier components synchronous with waves traveling at the speed of light. The condition is equivalent to the violation of phase-matching for the exchange of energy in coupled mode theory. Retrospectively, the non-radiation postulate is the only quantization condition that seems to make perfect sense.
Applied to the central force field of a hydrogen atom, Mills derives eigenfunction solutions that correspond to concentric spherical shells (called “orbitspheres”) with radii that are integer multiples n of the Bohr radius a₀. These eigenfunctions can be naturally interpreted as two-dimensional charge/mass density functions of the electron confined to a spherical surface. Charge/mass points on the orbitsphere move along great circles with a fixed magnitude of linear velocity, in a motion strictly coordinated with each other (the orbitsphere is not a rigid spinning globe). All electromagnetic field energy is trapped inside the orbitsphere as in a resonant cavity with perfectly conducting walls, except for a static magnetic field produced by the surface currents of the orbitsphere. In the excited states n > 1 this trapping is metastable. The well-known quantized energy states of the hydrogen atom are predicted by Mills’ solutions. As a corollary, Mills derives the properties of the electron spin and the Bohr magneton in agreement with the Stern-Gerlach experiment. These properties arise out of a constant “spin-term” in the angular function, required in the solutions to satisfy the condition of negative definite charge (or positive definite mass) everywhere on the orbitsphere for any set of quantum numbers. Mills assigns to this spin-term the quantum number s = 1/2. Thus, the spin is a natural by-product of the theory, whereas in the traditional quantum mechanics of Schrödinger and Heisenberg it had to be introduced artificially. In the ground state (n = 1, l = 0) Mills derives a homogeneous charge/mass distribution on the orbitsphere surface; in the excited states (n > 1, l > 0), on the other hand, the charge distribution becomes non-uniform and generates, together with the central charge of the nucleus, multipoles. Transition probabilities would follow from classical multipole radiation theory. At ionization the orbitsphere would expand to infinity, thus becoming the wavefront of a quasi-plane de Broglie wave traveling away from the central nucleus, and once “free” from the nucleus, the electron orbitsphere would collapse into a spinning disk in order to conserve angular momentum. Mills’ orbitspheres, the electron eigenfunctions of the atom, emerge as complete charge/matter equivalents of standing electromagnetic waves in a resonant cavity. The compatibility of the respective wave equations allows a harmonic self-consistent description of electron (charge/mass) and electromagnetic-field (energy) distribution in the atom. It would bring back determinism to quantum theory, a heroic task that Schrödinger set out to accomplish with his wave mechanics but, to his own dismay [1], tragically failed to do. If, then, the charge/mass density functions of Mills were the correct solutions, the “real thing” that Einstein’s “inner voice” predicted [1,2], what are Schrödinger’s wave functions? In order to find some answer to this question one has to realize that, in the case of time-harmonic motion, the steady-state Schrödinger equation is identical to the steady-state charge/matter wave equation in Mills’ theory, specified for the non-relativistic limit. The connection is provided by the de Broglie relation combined with conservation of energy. What is vastly different, of course, is the boundary condition! The Haus criterion in Mills’ theory, which was outlined above, leads to non-radiating eigenfunctions. This situation is equivalent to a perfectly closed lossless resonant cavity.
Schrödinger’s boundary condition, on the other hand, requires that the wave function vanishes at infinity and is well behaved anywhere else. As demonstrated by Mills, the resulting eigenfunctions have Fourier transforms with components traveling at the speed of light and, thus, should involve radiation. Schrödinger’s eigenfunctions can be considered the normal modes of a spherical resonator of infinite extent. In such a context, how could Schrödinger’s solution describe a steady state that has some physical meaning? To this reviewer it is quite conceivable that such a state, for each set of quantum numbers, can be characterized by the superposition of two dynamic states: one would consist of a continually contracting orbitsphere emitting “virtual” photons, whereas the other one would constitute the reverse, the orbitsphere continually expanding and thereby re-absorbing these photons, where the principal quantum number n refers to the “home” orbit and the angular quantum numbers (s, l, m) determine the angular charge/mass distribution on the contracting and expanding orbitsphere surface. No net emission or absorption of photons takes place. Such a quasi-dynamic state could, perhaps, best be compared with the superposition of a lasing resonant cavity emitting a light beam to infinity (there vanishing just like a spherical wave) and its time-reversed counterpart, i.e. a lossy cavity absorbing the opposite light beam as it travels from infinity into the cavity. This would result, in effect, in a leaky resonant cavity with lossless feedback from infinity. Obviously, the described hybrid quasi-dynamic state could not be a real state. Rather it should be viewed as a virtual state. As such, it is expected to provide some statistical information about the possible dynamic behavior which the real steady state of a Mills charge/mass eigenfunction may be subjected to. In the hydrogen atom the statistics would refer to all possible expansion and contraction events starting from a particular orbitsphere with a given set of quantum numbers: the orbitsphere expansion/contraction events are an endless “Monte Carlo game” forced onto the Schrödinger eigenfunction by perfect feedback from infinity! Thus, in a quasi-dynamic sense, one could consider a Mills orbitsphere of a given set of quantum numbers as being statistically “projected” onto a Schrödinger wave function of the same set of quantum numbers. To make this projection complete, the latter needs to be generalized by adding the spin-term in the angular function which Schrödinger did not consider. Such a viewpoint would lead to the conclusion that the statistical interpretation of the Schrödinger wave function remains compatible with the unified field theory of Mills. However, the statistics would become purely classical: they would be totally equivalent, e.g., to the statistics of thermodynamics – statistics of, in effect, an infinite number of real individual events that proceed in a completely deterministic way without the intervention of a measurement apparatus. Hence, Heisenberg’s uncertainty principle would lose all its mystique; in the context of the Mills theory it would just become the charge/mass-density-function equivalent of the classical relation between the decay time and bandwidth of a damped harmonic oscillator, and its spatial twin for propagating waves! The unified theory of Mills provides a simple, exceptionally pleasing resolution of the conceptual problems with the traditional quantum mechanics of Schrödinger and Heisenberg.
In fact, this resolution is amazingly close to Einstein’s vision [2]: quantum mechanics is revealed as incomplete but remains a valid branch of statistical physics. It is highly accurate when dealing with a large number of quantum events, but utterly fails in the description of individual “quantum jumps”. Here, the unified theory of Mills re-establishes determinism, as is demonstrated by Mills with the example of electron scattering from a He atom: Schrödinger’s approach provides accurate results only for relatively large scattering angles, for which the statistics are expected to be good. Mills’s deterministic approach, however, provides an accurate solution for the full angular range. Hence, traditional quantum mechanics – a better term in the framework of Mills’ theory would be statistical quantum mechanics – is related to the quantum laws of the Mills unified theory – Mills calls it classical quantum mechanics – in a way that is somewhat reminiscent of the relation between Newton’s mechanics and its generalization in special relativity. May, at last, Einstein’s spirit rest in peace? Einstein’s theory of relativity modified Newton’s law but yielded more: it predicted the equivalence of matter and energy! What are the exciting new predictions of the grand unified theory of Mills?

This theory predicts the existence of so-called “shrunken” atomic states, substates below the ground state. These substates are non-radiating electron orbitspheres at the simple fractions n = 1/2, 1/3, 1/4, … of the Bohr radius a₀ (the “subharmonics” of the atom!). The existence of these substates is consistent with the above speculation for a new statistical interpretation of Schrödinger’s wave functions, since these wave functions remain finite below the Bohr radius, dropping to zero only at the nuclear center. The ground state is completely stable, so the substates are generally inaccessible. According to Mills’ hypothesis, however, the atomic substates can be accessed by interaction with the proper partner atom(s) or ion(s) in a resonant energy exchange. For hydrogen, Mills calculates this critical energy to be just twice the hydrogen ionization energy from the ground state (2 × 13.6 eV). Once this energy quantum is transferred from the hydrogen ground-state orbitsphere to the interacting partner atom(s) or ion(s), by exciting it to a higher orbitsphere level, the hydrogen orbitsphere becomes unstable and collapses to its next lower stable non-radiating substate with additional release of energy. Thus, to activate such a Coulomb field collapse, the hydrogen atom has to absorb – as Mills calls it – an “energy hole” of 27.2 eV. According to Mills, absorption of multiples of energy holes is also allowed for this activation, and the size of the energy hole remains the same for activating further collapse from any of the substates. Considerable shrinkage should, hence, be possible in a catalytic process, with the release of a considerable amount of energy. Mills also predicts that atomic Coulomb field collapse can proceed to such a degree that, with fusible atomic nuclei, e.g. deuterons, fusion can set in: Mills predicts the possibility of cold fusion or, in his terminology, Coulombic annihilation fusion (CAF)! Fleischmann and Pons [4] appear to be vindicated. But the postulated Coulomb field collapse itself is predicted by Mills to lead to the release of large amounts of energy, and by itself could explain the observed excess heat in electrolytic cell experiments.
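(A small illustrative calculation, added for the reader and not part of the original review; Python. It merely reproduces the Rydberg-style arithmetic quoted above – the fractional substates themselves are Mills’ claim, not established physics.)

```python
# Hydrogen-like binding energy E_n = 13.6 eV / n², with Mills extending
# the quantum number n to the fractions 1/2, 1/3, 1/4, ... (his claim).
E1 = 13.6  # eV, hydrogen ground-state ionization energy

print("energy hole:", 2 * E1, "eV")  # 27.2 eV, as quoted in the text
for n in (1, 1/2, 1/3, 1/4):
    print(f"n = {n:.3f}: binding energy = {E1 / n**2:6.1f} eV")
```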
The process, thus, deserves earnest attention as a potential future energy source. Mills has some convincing experimental evidence both for the catalytic exothermic formation of shrunken substates of hydrogen (so-called “hydrinos”), as well as for catalytic cold fusion. Only the future can tell if these catalytic processes can be made efficient enough to be viable for useful energy production. As an encouraging sign, Mills has designed a hydrogen gas energy cell based on his shrinkage reaction that provides far superior performance to that of any of the original liquid electrolytic cells. A most exciting feature of the Mills theory is, however, that it promises to be a true grand unified theory: Mills applies the orbitsphere concept not only to single and multiple electron atoms and ions, to the hydrogen molecule and the chemical bond, but also to pair production and positronium, and to the weak and strong nuclear forces. Mills proposes that just three basic concepts, i.e., electromagnetism, gravity, and mass/energy, suffice to describe all known phenomena from the dimensions of the atomic nucleus to those of the cosmos. Mills’ new “classical” quantum mechanics is of beautiful conceptual simplicity and fully deterministic, without the uncertainties, quantum jumps and probability functions of traditional quantum mechanics, without “spooky action at a distance” [1]. This should be a pure joy for every searching scientist!

It appears that the scientific community has taken little notice of this new theory. Considering its revolutionary nature and seemingly far-out conclusions, this is perhaps not too surprising. On the other hand, critical dialogue is necessary for any new and unconventional thinking to mature and reach a high degree of rigor and precision in its formulation. In view of the fact that receptiveness to alternate views on quantum theory has recently increased, as, e.g., the renewed interest in the deterministic interpretation of Bohm attests [5], it is hoped that the theory of Randell L. Mills will find its deserved resonance. Let me close, as I started this review, with a quote from Jim Baggott [1]: “Science is a democratic activity. It is rare for a new theory to be adopted by the scientific community overnight. Rather scientists need a good deal of persuading before they will invest belief in a new theory … . This process of persuasion must be backed up by hard experimental evidence, preferably from new experiments designed to test the predictions of the new theory. Only when a large cross-section of the scientific community believes in the new theory is it accepted as ‘true’.”

I am indebted to Professor Anthony Bell for originally bringing the 1992 edition of Mills’ book to my attention, and to Professor Lawrence Ruby and Mr. Thomas Stolper for helpful advice.

Reinhart Engelmann, Professor of Electrical Engineering, Oregon Graduate Institute of Science and Technology, Portland, OR 97291-1000

[1] Jim Baggott, The Meaning of Quantum Theory, Oxford University Press, 1992
[2] Abraham Pais, ‘Subtle is the Lord…’ The Science and the Life of Albert Einstein, Oxford University Press, 1982
[3] H.A. Haus, “On the radiation from point charges,” Am. J. Phys. 54 (12), 1126 (December 1986)
[4] M. Fleischmann and S. Pons, “Electrochemically induced nuclear fusion of deuterium,” J. Electroanal. Chem. 261, 301 (1989)
[5] David Z. Albert, “Bohm’s Alternative to Quantum Mechanics,” Scientific American, May 1994, p. 58
A Talk with Frank Wilczek

FRANK WILCZEK, a theoretical physicist at MIT and recipient of the Nobel Prize in Physics (2004), is known, among other things, for the discovery of asymptotic freedom, the development of quantum chromodynamics, the invention of axions, and the discovery and exploitation of new forms of quantum statistics (anyons). He is the author of Lightness of Being: Mass, Ether, and the Unification of Forces.

In retrospect, I realize now that having the Nobel Prize hovering out there but never quite arriving was a heavy psychological weight; it bore me down. It was a tremendous relief to get it. What I didn't anticipate, fortunately, is that getting it is fantastic fun—the whole bit: there are marvelous ceremonies in Sweden, it's a grand party, and it continues, and is still continuing. I've been going to big events several times a month. The most profound aspect of it, though, is that I've really felt from my colleagues something I didn't anticipate: an outpouring of genuine affection. It's not too strong to call it love. Not for me personally—but because our field, theoretical fundamental physics, gets recognition and attention. People appreciate what's been accomplished, and it comes across as recognition for an entire community and an attitude towards life that produced success. So I've been in a happy mood.

But that was a while ago, and the ceremonial business gets old after a while, and takes time. Such an abrupt change of life encourages thinking about the next stage. I was pleased when I developed a kind of three-point plan that gives me direction. Now I ask myself, when I'm doing something in my work: Is it relating to point one? Is it relating to point two? Is it relating to point three? If it's not relating to any of those, then I'm wasting my time.

Point one is in a sense the most straightforward. An undignified way to put it would be to say it's defending turf, or pissing on trees, but I won't say that: I'll say it's following up ideas that I've had in physics in the past that are reaching fruition. There are several that I'm very excited about now. The great machine at CERN, the LHC, is going to start operating in about a year. Ideas—about unification and supersymmetry and producing Higgs particles—that I had a big hand in developing 20-30 years ago are finally going to be tested. Of course, if they're correct that'll be a major advance in our understanding of the world, and very gratifying to me personally.

Then there's the area of exotic behavior of electrons at low temperature, so-called anyons, which is a little more technical. It was thought for a long time that all particles were either bosons or fermions. In the early 80s, I realized there were other possibilities, and it turns out that there are materials in which these other possibilities can be realized, where the electrons organize themselves into collective states that have different properties from individual electrons and actually do obey the peculiar new rules, and are anyons. This is leading to qualitatively new possibilities for electronics. I call it anyonics. Recently, anyonics has been notionally bootstrapped into a strategy for building quantum computers that might even turn out to be successful. In any case, whether it's successful or not, the vision of anyonics—this new form of electronics—has inspired a lot of funding and experimentalists are getting into the game.
Here similarly, there are kinds of experiments that have been in my head for 20 years but are very difficult, and people needed motivation and money to do them, that are now going to be done. It's a lot of fun to be involved in something that might actually have practical consequences and might even change the world. This stuff also, in a way, brings me back to my childhood, because when I was growing up, my father was an electrical engineer and would take home circuit diagrams, and I really admired these things. Now I get to think about making fundamentally new kinds of circuits, and it's very cool. I really like the mixture of abstract and concrete.

At a deeper level, what excites me about quantum computing and this whole subject of quantum information processing is that it touches such fundamental questions that potentially it could lead to qualitatively new kinds of intelligences. It's notorious that human beings have a hard time understanding quantum mechanics; it's hard for humans to relate to its basic notions of superpositions of states—that you can have Schrödinger's cat that's both dead and alive—that are not in our experience. But an intelligence based on quantum computers—mechanical quantum thinking—from the start would have that in its bones, so to speak, or in its circuits. That would be its primary way of thinking. It's quite challenging but fascinating to try to put yourself in the other guy's shoes, when that guy has a fundamentally different kind of mind, a quantum mind.

It's almost an embarrassment of riches, but some of the ideas I had about axions turn out to go together very, very well with inflationary cosmology, and to give new pictures of what the dark matter might be. It ties into questions about anthropic reasoning, because with axions you get really different amounts of dark matter in different parts of the multiverse. The amounts of dark matter would be different elsewhere, and the only way to argue about how much dark matter there should be turns out to be anthropic: if you have too much dark matter, life as we know it couldn't arise. There's a lot of stuff in physics that I really feel I have to keep track of, and do justice to. That's point one.

The second point is another way of having fun: looking for outlets, cultivating a public, not just thinking about science all the time. I'm in the midst of writing a mystery novel that combines physics with music, philosophy, sex, the rule that only three people at most can share a Nobel Prize—and murder (or was it suicide?). When a four-person MIT-Harvard collaboration makes a great discovery in physics (they figure out what the dark matter is), somebody's got to go. That project, and I hope other subsequent projects, will be outlets for reaching out to the public and bringing in all of life and just having fun.

The third point is what I like to call the Odysseus project. I'm a great fan of Odysseus, the wanderer who had adventures and was very clever. I really want to do more great work—not following up what I did before, but doing essentially different things. I got into theoretical physics almost by accident; when I was an undergraduate, I had intended to study how minds work, and neurobiology. But it became clear to me rather quickly, at Chicago, that that subject at that time wasn't ripe for the kind of mathematical analytical approach that I really like and get excited about, and am good at. I switched and majored in mathematics and eventually wound up in physics.
But I've always maintained that interest, and in the meantime the tools available for addressing those questions have improved exponentially. Both in terms of studying the brain itself—imaging techniques and genetic techniques and a variety of others—but also the inspiring model of computation. The explosion of computational ability and understanding of computer science and networks is a rich source of metaphors and possible ways of thinking about the nature of intelligence and how the brain works. That's a direction I really want to explore more deeply. I've been reading a lot; I don't know exactly what I want to do, but I have been nosing out what's possible and what's available. I think it's a capital mistake, as Sherlock Holmes said, to start theorizing before you have the data. So I'm gathering the data.

Quantum Computers and Anyons

Quantum computing is an inspiring vision, but at present it's not clear what the technical means to carry it off are. There is a variety of proposals. It's not clear which is the best, or if any of them is practical. Let me backtrack a little bit, though, because even before you get to a full-scale quantum computer, there are information processing tasks for which quantum mechanics could be useful with much less than a full-scale quantum computer. A full-scale quantum computer is extremely demanding: you have to build various kinds of gates, you have to connect them in complicated ways, you have to do error correction—it's very complicated. That's sort of like envisioning a supersonic aircraft when you're at the stage of the Wright brothers. However, there are applications that I think are almost in hand.

The most mature is for a kind of cryptography: you can exploit the fact that quantum mechanics has this phenomenon that's roughly called 'collapse of the wave function'—I don't like it—I don't think that's a really good way to talk about it—but for better or worse, that's the standard terminology. Which in this case means that if you send a message that's essentially quantum mechanical—in terms of the direction of spins of photons, for instance—then you can send photons one by one with different spins and encode information that way. If someone eavesdrops on this, you can tell, because the act of observation necessarily disturbs the information you're sending. So that's very useful. If you want to transmit messages and make sure that they haven't been eavesdropped on, you can have that guaranteed by the laws of physics. If somebody eavesdrops, you'll be able to tell. You can't prevent it, necessarily, but you can tell. If you do things right, the probability of anyone being able to eavesdrop successfully can be made negligibly small. So that's a valuable application that's almost tangible. People are beginning to try to commercialize that kind of idea.

I think in the long run the killer application of quantum computers will be doing quantum mechanics. Doing chemistry by numbers, designing molecules, designing materials by calculation. A capable quantum computer would let chemists and materials scientists work at another level, because instead of having to mix up the stuff and watch what happens, you can just compute. We know exactly what the equations are that govern the behavior of nuclei and electrons and the things that make up atoms and molecules. So in principle, it's a solved problem to figure out chemistry: just compute.
We don't know all the laws of physics, but it's essentially certain that we know the adequate laws of physics with sufficient accuracy to design molecules and to predict their properties with confidence. But our practical ability to solve the equations is limited. The equations live in big multi-dimensional spaces, and they have a complicated structure and, to make a long story short, we can't solve any but very simple problems. With a quantum computer we'll be able to do much better.

As I sort of alluded to earlier, it's not decided yet what the best long-term strategy is for achieving powerful quantum computers. People are doing simulations and building little prototypes. There are different strategies being pursued based on nuclear spins, electron spins, trapped atoms, anyons. I am very fond of anyons because I worked at the beginning on the fundamental physics involved. It was thought, until the late 70s and early 80s, that all fundamental particles, or all quantum mechanical objects that you could regard as discrete entities, fell into two classes: so-called bosons, after the Indian physicist Bose, and fermions, after Enrico Fermi. Bosons are particles such that if you take one around another, the quantum mechanical wave function doesn't change. Fermions are particles such that if you take one around another, the quantum mechanical wave function is multiplied by a minus sign. It was thought for a long time that those were the only consistent possibilities for behavior of quantum mechanical entities. In the late 70s and early 80s, we realized that in two plus one dimensions, not in our everyday three-dimensional space (plus one dimension for time), but in planar systems, there are other possibilities. In such systems, if you take one particle around another, you might get not a factor of one or minus one, but multiplication by a complex number—there are more general possibilities. More recently, the idea that when you move one particle around another, it's possible not only that the wave function gets multiplied by a number, but that it actually gets distorted and moves around in a bigger space, has generated a lot of excitement. Then you have this fantastic mapping from motion in real space, as you wind things around each other, to motion of the wave function in Hilbert space—in quantum mechanical space. It's that ability to navigate your way through Hilbert space that connects to quantum computing and gives you access to a gigantic space with potentially huge bandwidth that you can play around with in highly parallel ways, if you're clever about the things you do in real space. But with anyons we're really at the primitive stage. There's very little doubt that the theory is correct, but the experiments are at a fairly primitive stage—they're just breaking now.

Quantum Logic and Quantum Minds

To do justice to the possible states, the possible conditions that just a few objects can be in, say, five spins, classically you would think you would have to say for each one whether it's up or down. At any one time they are in some particular configuration. In quantum mechanics, every single configuration—there are 32 of them, up or down for each spin—has some probability of existing. So, simultaneously, to do justice to the physical situation, instead of just saying that there is some configuration these objects are in, you have to specify roughly that there is a certain probability for each one, and those probabilities evolve.
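(To make the counting concrete, here is a toy sketch of my own in Python/numpy, not something from the talk. It spells out that classical-sounding description: 2⁵ = 32 configurations for five spins, with one probability assigned to each.)

```python
import numpy as np

# Five spins, each either up or down: 2**5 = 32 possible configurations.
n_spins = 5
rng = np.random.default_rng(0)

probs = rng.random(2**n_spins)   # one non-negative number per configuration
probs /= probs.sum()             # normalized so the probabilities sum to 1

print(len(probs))                # 32
print(probs.sum())               # 1.0
```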
But that verbal description is too rough, because what's involved is not probabilities, it's something called amplitudes. The difference is profound. Whereas probabilities have a kind of independence, with amplitudes the different configurations can interact with one another. There are different states which are in the physical reality, and they are interacting with each other. Classically they would be different things that couldn't happen simultaneously. In quantum theory they coexist and interact with one another.

That also goes to this issue of logic that I mentioned before. One way of representing true or false that is famously used in computers is, you have true as one and false as zero: spin up is true, spin down is false. In quantum theory the true statement and the false statement can interact with each other, and you can do useful computations by having simultaneous propositions that contradict each other, sort of interacting with each other, working in creative tension. I just love that idea. I love the idea of opposites coexisting and working with one another. Come to think of it, it's kind of sexy.

More on Quantum Computers

Realizing this vision will be a vast enterprise. It's hard to know how long it's going to take to get something useful, let alone something that is competitive with the kind of computing we already have developed, which is already very powerful and keeps improving, let alone create new minds that are different from and more powerful than the kind of minds we're familiar with. We'll need to progress on several fronts. You can set aside the question of engineering, if you like, and ask: Suppose I had a big quantum computer, what would I do with it, how would I program it, what kind of tasks could it accomplish? That is a mathematical investigation. You abstract the physical realization away. Then it becomes a question for mathematicians, and even philosophers have got involved in it. Then there is the other big question: how do I build it? How do I build it in practice? That's a question very much for physicists. In fact, there is no winning design yet. People have struggled to make even very small prototypes. My intuition, though, is that when there is a really good idea, progress could be very rapid. That is what I am hoping for and going after. I have glimmers of how it might be done, based on anyons. I've been thinking about this sort of thing on and off for a long time. I pioneered some of the physics, but other theorists including Alexei Kitaev and my former student Chetan Nayak have taken things to another level. There's now a whole field called "topological quantum computing", with its own literature and conferences, and it's moving fast. What has changed is that now a lot of people, and in particular experimentalists, have taken it up.

Methods and Styles in Physics

The great art of theoretical physics is the revelation of surprising things about reality. Historically there have been many approaches to that art, which have succeeded in different ways. In the early days of physics, people like Galileo and Newton were very close to the data and stressed that they were trying to put observed behavior into mathematical terms. They developed some powerful abstract concepts, but by today's standards those concepts were down-to-earth; they were always in terms of things that you could touch and feel, or at least see through telescopes. That approach very much dominated physics, at least through the 19th century.
Maxwell's great synthesis of electricity and magnetism and optics, leading to the understanding that light was a form of electricity and magnetism and predicting new kinds of light that we call radio and microwaves and so forth—that came from a very systematic review of all that was known about electricity and magnetism experimentally, trying to put it into equations, noticing an inconsistency, and fixing it up. That's the kind of classic approach.

In the 20th century, some of the most successful enterprises have looked rather different. Without going into the details it's hard to do justice to all the subtleties, but it's clear that theories like special relativity—especially general relativity—were based on much larger conceptual leaps. In constructing special relativity, Einstein abstracted just two very broad regularities about the physical world: namely, that the laws of physics should look the same if you're moving at a constant velocity, and that the speed of light should be a universal constant. This wasn't based on a broad survey of a lot of detailed experimental facts and putting them together; it was selecting a few very key facts and exploiting them conceptually for all they're worth. General relativity even more so: it was trying to make the theory of gravity consistent with the insights of special relativity. This was a very theoretical enterprise, not driven by any specific experimental facts*, but it led to a theory that changed our notions of space and time, did lead to experimental predictions, and to many surprises. (*Actually, there was a big "coincidence" that Newtonian gravity left unexplained, the equality of inertial and gravitational mass, which was an important guiding clue.)

The Dirac equation is a more complicated case. Dirac was moved by broad theoretical imperatives; he wanted to make the existing equation for quantum mechanical behavior of electrons—that's the Schrödinger equation—consistent with special relativity. To do that, he invented a new equation—the Dirac equation—that seemed very strange and problematic, yet undeniably beautiful, when he first found it. That strange equation turned out to require vastly new interpretations of all the symbols in it, that weren't anticipated. It led to the prediction of antimatter and the beginnings of quantum field theory. This was another revolution that was, in a sense, conceptually driven. On the other hand, what gave Dirac and others confidence that his equation was on the right track was that it predicted corrections to the behavior of electrons in hydrogen atoms that were very specific, and that agreed with precision measurements. This support forced them to stick with it, and find an interpretation to let it be true! So there was important empirical guidance, and encouragement, from the start.

Our foundational work on QCD falls in the same pattern. We were led to specific equations by theoretical considerations, but the equations seemed problematic. They were full of particles that aren't observed (quarks and—especially—gluons), and didn't contain any of the particles that are observed! We persisted with them nevertheless, because they explained a few precision measurements, and that persistence eventually paid off. In general, as physics has matured in the 20th century, we've realized more and more the power of mathematical considerations of consistency and symmetry to dictate the form of physical laws. We can do a lot with less experimental input.
(Nevertheless the ultimate standard must be getting experimental output: illuminating reality.) How far can esthetics take you? Should you let that be your main guide, or should you try to assemble and do justice to a lot of specific facts? Different people have different styles; some people try to use a lot of facts and extrapolate a little bit; other people try not to use any facts at all and construct a theory that's so beautiful that it has to be right, and then fill in the facts later. I try to consider both possibilities, and see which one is fruitful. What's been fruitful for me is to take salient experimental facts that are somehow striking, or that seem anomalous—don't really fit into our understanding of physics—and try to improve the equations to include just those facts. My reading of history is that even the greatest advances in physics, when you pick them apart, were always based on a firm empirical foundation and on straightening out some anomalies between the existing theoretical framework and some known facts about the world. Certainly QCD was that way; when we developed asymptotic freedom to explain some behaviors of quarks, the fact that they seem not to interact when they're close together seemed inconsistent with quantum field theory, but we were able to push on and find very specific quantum field theories in which that behavior was consistent, which essentially solved the problem of the strong interaction, and has had many fruitful consequences. Axions also—similar thing—a little anomaly: there's a quantity that happens to be very small in the world, but our theories don't explain why it's small; you can change the theories to make them a little more symmetrical—then we do get zero—but that has other consequences: the existence of these new particles rocks cosmology, and they might be the dark matter—I love that kind of thing.

String theory is sort of the extreme of non-empirical physics. In fact, its historical origins were based on empirical observations, but wrong ones. String theory was originally based on trying to explain the nature of the strong interactions, the fact that hadrons come in big families, and the idea was that they could be modeled as different states of strings that are spinning around or vibrating in different ways. That idea was highly developed in the late 60s and early 70s, but we put it out of business with QCD, which is a very different theory that turns out to be the correct theory of the strong interaction. But the mathematics that was developed around that wrong idea, amazingly, turned out, if you do things just right and tune it up, to contain a description of general relativity that at the same time obeys quantum mechanics. This had been one of the great conceptual challenges of 20th century physics: to combine the two very different seeming kinds of theories—quantum mechanics, our crowning achievement in understanding the micro-world, and general relativity, which was abstracted from the behavior of space and time in the macro-world. Those theories are of a very different nature and, when you try to combine them, you find that it's very difficult to make an entirely consistent union of the two. But these evolved string theories seem to do that.
The problems that arise in making a quantum theory of gravity, unfortunately for theoretical physicists who want to focus on them, really only arise in thought experiments of a very high order—thought experiments involving particles of enormous energies, or the deep interior of black holes, or perhaps the earliest moments of the Big Bang that we don't understand very well. All very remote from any practical, do-able experiments. It's very hard to check the fundamental hypotheses of this kind of idea. The initial hope, when the so-called first string revolution occurred in the mid-1980s, was that when you actually solved the equations of string theory, you'd find a more or less unique solution, or maybe a handful of solutions, and it would be clear that one of them described the real world. From these highly conceptual considerations of what it takes to make a theory of quantum gravity, you would be led "by the way" to things that we can access in experiments, and it would describe reality. But as time went on, people found more and more solutions with all kinds of different properties, and that hope—that indirectly, by addressing conceptual questions, you would be able to work your way down to a description of concrete things about reality—has gotten more and more tenuous. That's where it stands today.

My personal style in fundamental physics continues to be opportunistic: to look at the phenomena as they emerge and think about possibilities to beautify the equations that the equations themselves suggest. As I mentioned earlier, I certainly intend to push harder on ideas that I had a long time ago but that still seem promising and still haven't been exhausted, in supersymmetry and axions and even in additional applications of QCD. I'm also always trying to think of new things. For example, I've been thinking about the new possibilities for phenomena that might be associated with this Higgs particle that probably will be discovered at the LHC. I realized something I'd been well aware of at some low level for a long time, but that I now think has profound implications, which is that the Higgs particle uniquely opens a window into phenomena that no other particle within the standard model would be sensitive to. If you look at the mathematics of the standard model, you discover that there are possibilities for hidden sectors—things that would interact very weakly with the kind of particles we've had access to so far, but would interact powerfully with the Higgs particles. We'll be opening that window. Very recently I've been trying to see if we can get inflation out of the standard model, by having the Higgs particle interact in a slightly nonstandard way with gravity. That seems promising too. Most of my bright ideas will turn out to be wrong, but that's OK. I have fun, and my ego is secure.

On National Greatness

In 1993 the Congress of the United States canceled the SSC project, the Superconducting Super Collider, that was under construction near Waxahachie, Texas. Many years of planning and many careers had been invested in that project, and $2 billion had already been put into the construction. All that came out of it was a tunnel from nowhere to nothing. Now it's 2009 and a roughly equivalent machine, the Large Hadron Collider (LHC), will be coming into operation at CERN near Geneva. The United States has some part in that. It has invested half a billion dollars out of the $15 billion total.
But it's a machine that is in Europe, really built by the Europeans; there's no doubt that they have contributed much more. Of course, the information that comes out will be shared by the entire scientific community. So the end result, in terms of tangible knowledge, is the same. We avoided spending the extra money. Was that a clever thing to do? I don't think so. Even in the narrowest economic perspective, I think it wasn't a clever thing to do. Most of the work that went into this $15 billion was local, subcontracted within Europe. It went directly into the economies involved, and furthermore into dynamic sectors of the economy: high-tech industries involved in superconducting magnets, fancy cryogenic engineering, civil engineering of great sophistication and, of course, computer technology. All that know-how is going to pay off much more than the investment in the long run. But even setting the economics aside, the United States missed an opportunity for national greatness. A hundred or two hundred years from now, people will largely have forgotten about the various spats we got into, the so-called national greatness of imposing our will on foreigners, and they will remember the glorious expansion of human knowledge that is going to happen at the LHC and the gigantic effort that went into getting it. As a nation we don't get many opportunities to show history our national greatness, and I think we really missed one there. Maybe we can recoup. The time is right for an assault on the process of aging. A lot of the basic biology is in place. We know what has to be done. The aging process itself is really the profound aspect of public health; eliminating major diseases, even big ones like cancer or heart disease, would only increase life expectancy by a few years. We really have to get to the root of the process. Another project on a grand scale would be to search systematically for life in the galaxy. We have tools in astronomy, and we can design tools, to find distant planets that might be earth-like, study their atmospheres, and see if there is evidence for life. It would be feasible, given a national investment of will and money, to survey the galaxy and see if there are additional earth-like planets that are supporting life. We should think hard about doing things we will be proud to be remembered for, and think big.
The QEnergySpa, BEFE and Quantum Reality Field Science, by Mr. Terry Skrinjar

The QEnergySpa, BEFE technology is based on Quantum Reality Field Science, or QRFS. The study of quantum field theory is alive and flourishing, as are applications of this method to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to many branches of physics. A model of the universe can help us to explain why it acts the way it does. To understand how the model is constructed from the theory, we need to make clear what a scientific theory is. The simplified view is that a theory is just a model of the universe, or part of it, with rules relating to the model based on observations we make. A model is assumed to be true until an observation is found that no longer agrees with it. The model is then either abandoned or reconstructed to include the new observation. The electromagnetic field can be viewed as the combination of an electric field and a magnetic field, and as a physical, wavelike, dynamic entity that affects the behaviour of charged objects in the vicinity of the field and which is also affected by them. It extends indefinitely throughout space. Almost one hundred years ago Albert Einstein, in his special and general relativity theories, developed mathematical formulas which suggested that time and space are inseparable, that nothing can travel faster than the speed of light, and that the passage of time for a body in motion is relative to that body's rate of travel through space. Einstein was able to demonstrate mathematical formulas connecting these apparently different aspects of physical reality. Although Einstein knew that all was connected through the underlying structure of reality, he was unable to say exactly how and why this is so, and he did not have a visual model to further any such construct. "I wished to show that space-time is not necessarily something to which one can ascribe a separate existence, independently of the actual objects of physical reality. Physical objects are not in space, but objects are spatially extended. In this way the concept of 'empty space' loses its meaning." – Albert Einstein, June 9th, 1952, note to the fifteenth edition of "Relativity". "Einstein's efforts to uncover a unified field theory were rooted in his belief that the structure of space-time is the key to understanding the characteristics of the electromagnetic and gravitational forces." – "The World of Physics, Volume III", pg. 120. Whilst Einstein's endeavour to educate the physics community of the day was cut short by his death in 1955, curtailing the evolution of a possible model for evaluation, the world's physicists could only speculate about the construct which had been presented to them previously. The theories however continued to flow, with the world's leading physicists divided between the expansion of general relativity and the more recent acceptance of quantum mechanics. Both general relativity and quantum mechanics were claimed to provide the predictive ability required to set the fundamental foundation of physical law. With both theories so heavily debated, the search for the 'Holy Grail', the grand unification of physics, continued in the hope that a merger between the two would be forthcoming, forming the start of any unified space-time equation. To date, most of the current models of the universe remain static rather than dynamic.
For the unification of physics a dynamic structured model must be employed. Albert Einstein, in a note to the fifteenth edition of "Relativity", summarised this concept by suggesting that "Physical objects are not in space, but these objects are spatially extended. In this way the concept of empty space loses its meaning". The concept that space is not empty and has substance has never been fully theorised. The current division, with relativity representing macro physics and quantum mechanics representing micro physics, works against any possibility of particle science and field science merging into a unification. This raises the question as to whether any such unified theory should have one basis or the other. Current physics holds that four forces exist: gravity, the electromagnetic force, the strong nuclear force and the weak nuclear force. The evolution of quantum mechanics gives the physicist the ability to unify three of these forces, the electromagnetic, the strong nuclear and the weak nuclear; however, the unification of gravity has not been possible. "Q-Mechanics", which is neither relativity nor quantum mechanics, unifies all four forces into one force and one theory. This is achieved through the use of a new, fully dynamic, three-dimensional, interactive fundamental field structure that replaces the complete current theoretical atomic model. Not only does the current atomic model come under scrutiny; a re-evaluation of current mathematical procedure is also undertaken, to disallow the use of impossible conceptual equations. For the purpose of example, an impossible conceptual equation would be to subtract three quantities from two to equal negative one [2 - 3 = -1]. This equation can never be realised, since it is impossible to subtract a number of objects greater than what exists. This mathematical concept is not the only one that could be deemed physically illogical. What is more important, however, is the use of a three-dimensional tertiary-based mathematical system, as used in Q-Mechanics, rather than the current binary-to-vector conversion system which establishes the spatial coordinates of a physical object (or atomic structure). To fully describe and document the existence of a physical reality requires more than just a three-dimensional spatial reference based on the position of the observer. All physical existence has a component called time. This time component is as physical as the object itself and should never be disregarded. The equation to incorporate the time component is not that difficult; however, to understand the full implications of the interaction of time within the object needs far greater explanation. Einstein may have understood the basis of this concept with his time theories, suggesting that time itself is relative to the individual rather than a global time standard which affects everything equally. The effect we know as gravity can also be unified into the space-time equation as an actual part of it, without the need for any separate theorem. This complete unification works at both the micro and the macro levels. Q-Mechanics achieves this unification with the introduction of a model which is based upon dynamic fields and their structures. These fields represent the second instance of creation, since the first instance would be the creation of the universe itself. Q-Mechanics does not offer an opinion on the first instance of creation.
The new dynamic field structure is to be termed the Matrix structure, and it proposes to replace all current conceptual models. Current theory holds that the universe is composed of a number of basic particles. The number of these basic particles keeps increasing as technology builds higher and higher energy accelerators. The higher the energy of the accelerators science uses, the more of these basic particles will be claimed as discoveries. The current initial base theory consists of protons, neutrons and electrons. The protons and neutrons can be broken into smaller particles called quarks. Neutrons consist of two down quarks with -1/3 charge and one up quark with +2/3 charge. The proton consists of two up quarks with +2/3 charge and one down quark with -1/3 charge. It has recently been announced by experimenters at Fermilab that there is evidence for structure within quarks. This discovery trend will continue down to the level of what Q-Mechanics has termed the Base Matrix. The base matrix is the smallest possible field structure that can constitute a physical space-time value. Expressed using current mathematical description, this value would be 1.4144986 × 10⁻¹⁶ m³. When this is compared to the current atomic structure size of, say, a hydrogen atom, which is 2.1217479 × 10⁻⁷ m³, it demonstrates clearly that light-speed technology is inadequate for such finite measurement.
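The size comparison just made is easy to state explicitly. The following is a minimal Python sketch; both volume figures are simply the ones quoted above, taken as-is.

    # Compare the two volumes quoted above (figures taken as-is from the text).
    base_matrix_volume = 1.4144986e-16  # m^3, the quoted base-matrix value
    hydrogen_volume = 2.1217479e-7      # m^3, the quoted hydrogen-atom value

    ratio = hydrogen_volume / base_matrix_volume
    print(f"quoted hydrogen volume / base-matrix volume = {ratio:.3g}")  # about 1.5e+09

On the quoted figures, the hydrogen-atom volume is roughly 1.5 billion times the base-matrix volume.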
Q-Mechanics explains how and why frequencies well in excess of light speed are possible. The Matrix model, when understood fully, has a predictive ability that eliminates all chance. Should science start to use such a model, it would soon recognise that a complete re-evaluation of current physics would quickly be deemed necessary. The most important concept that needs to be reconciled before an understanding of the Matrix model can be grasped is complete unification. To completely unify physics requires a starting point of one, from which all evolves or arises (the Matrix). This means one force, one value or one field; regardless of its terminology, it is simply one. If one is the starting point for unification, then everything beyond that point is a combination of one, or the same thing. It should be emphasised that time is an integrated part of the one; however, an explanation of this time-integration concept only becomes apparent with a complete understanding of the model. With unification represented by the concept one, it is logical that one has no opposite, and that what are currently deemed opposing forces are different manifestations of the same force. To date, no force has been discovered that would suggest an opposite to gravity. With this being the case, it is not likely that such a force will ever be discovered. Q-Mechanics explains that this is exactly the case, and that gravity has no opposite. The case for no opposites does not end with gravity, but applies to the complete Matrix model. There are no opposite or negative concepts used to define the Matrix; its action is completely independent of any requirement for opposing forces, as is the case with the current atomic model. Q-Mechanics explains why, due to the inconsistencies deemed anomalies, the current atomic model consisting of protons, neutrons and electrons is abandoned. The current most popular candidate for the ultimate unified theory is the superstring theory, in which all particles are just different vibrational modes of very small loops of string. Of course this theory is a lot more complicated than just stated, but regardless, it has its problems too. In Q-Mechanics the universal model is fully interconnected at all levels and has zero conflict between the micro and the macro, since the origin of the macro is an expansion of the micro. There is absolutely no evidence to suggest that the fundamental laws of the universe should be any different at either end of the scale. To understand the universe is to accept the challenge of understanding the greatest puzzle in science today: gravity. Physicists believe that solving such a riddle would yield the secret, and ultimately the grand unification, of physics and eventually the universe. Once the actual nature of gravity is defined correctly, it will become clear just what gravity is and why. Gravity, like every other force, is an effect. All force, which can also be termed energy, is an action, or a combination of actions in sequence. Actions or motions are created effects arising from the effect of existence, or that which has substance. Propagation of any effect can only be achieved via existence or substance. All existence has and is substance. The Base Matrix is the source of the gravitational effect and is present in, and part of, every single space-time coordinate in the complete universe. This brings us back to the concept that space is not empty. On more than one occasion during the brief history of physics, the scientific community has postulated the existence of an ether, a substance in space to facilitate propagation. This idea has never really died; however, the famous experiment conducted by Michelson and Morley, using light at a time when its propagation was not fully understood, was interpreted as proof of the non-existence of a universal fabric. Q-Mechanics illustrates that there is a universal fabric which can be measured. This universal fabric is discussed in the book titled Q-Mechanics, where it is termed the universal grid on which everything exists. This book contains enough information about this grid for it to be measured. Figure 2 is a representation of the universe existing as a grid. The universal grid exists because everything on the grid forms, and is part of, the grid. The grid is fully connected three-dimensionally, meaning everything in the universe is interlocked and part of everything else. The motion of a body or object moving upon this grid gives rise to forces (effects) that are very measurable using current scientific method. Two of these forces are known, and others that exist are explained in Q-Mechanics. The two known forces are centrifugal force and inertia, both of which are direct effects of bodies in motion on the grid. If space were an empty void, as Earth's greatest minds suggest, then there would be nothing for a body to interact with and no effect could be created; hence the body or object itself could not exist. In this same situation centrifugal force and inertia would also not exist, contrary to the simple assumption made by most that velocity alone gives rise to forces that seem not to exist in the object while stationary. These manifestations are simple interactive effects that are present throughout everything. Once it is understood how the Base Matrix forms the universal grid and how bodies move upon it, the question of the origin of these forces becomes self-explanatory. With only a basic understanding of Q-Mechanics, all the physical aspects of reality can become self-evident.
Answers beyond the realms of physics also become accessible to all. The major components of Q-Mechanics have been proven through experiments and devices in recent years; however, much of the recent information is not publicly available. It cannot be said for certain just how much of Q-Mechanics is known by the hierarchy of physics faculties around the world; what is certain, however, is that they are in possession of pieces of such knowledge. Q-Mechanics is the assembly of the complete pieces, which is all that is needed for grand unification. In this chapter a step-by-step system will be used to present a completely new atomic model. Reference will be made to the old model to illustrate the new, which will be built in conceptual form to simplify a complex action that will come to be better understood in the later pages. A brief explanation of current atomic theory was illustrated in figure 7, where protons, neutrons and electrons are shown to be the basis of current theoretical concepts. Figures 8 and 9 show the first two elements of the current periodic table, hydrogen (1) and helium (2). Figure 8, Helium; Figure 9, Hydrogen. In hydrogen there is one proton and one electron but no neutron, whereas helium has two protons, two electrons and two neutrons. Upon examination of the rest of the periodic table it becomes apparent that hydrogen, with no neutron, is an exception to the rule. When dealing with any atomic theory it must be clearly understood that there is absolutely no current technology capable of examining atomic structure at any level. Current magnification techniques or technologies are not even approaching the scale necessary for viewing even a compound of atoms. To help understand any theory it is useful to be aware of the origin of the concept. The development of the currently accepted atomic model is as follows, in chronological order. Thomson's model: an atom consists of a sphere of positively charged fluid in which is distributed a number of negatively charged electrons. Lenard's model: an atom is made up of pairs of positive and negative charges called dynamids that are distributed throughout the volume of the atom. Dynamids were thought to be extremely small, and most of the atom was empty space. Rutherford's model: in the Geiger and Marsden scattering experiment, alpha particles (charge +2e) from a radioactive source were scattered by a thin piece of gold foil and detected by a zinc sulphide screen. Most alpha particles passed through the foil with little or no change in direction, but a few were scattered through large angles. By reviewing the results of the Geiger and Marsden scattering experiment, Rutherford concluded that the positive charge of an atom was concentrated in a tiny central nucleus and that most of the atom was empty space. Alpha particles only suffer a large deflection when they approach a nucleus closely, and this occurs only rarely. Rutherford's atomic model consists of: 1. A tiny central nucleus containing most of the mass and having a positive charge Ze, where e is the charge on the electron and Z is the atomic number of the element. 2. Orbiting electrons surrounding the nucleus, the number of electrons being Z in a neutral atom. Chadwick's model: beryllium bombarded with alpha particles emitted highly penetrating neutral radiation. Chadwick suggested that the neutral radiation consisted of neutral particles, each a combination of a proton and an electron.
This differs from today's accepted theory of a neutron in that Chadwick's model of the atom did not contain a neutron particle. As stated above, a neutron particle was created from an electron and a proton at the time of emission; before this, the neutron particle did not exist. It is only in recent times that the neutron particle has been added to the model, which has helped to overcome a small portion of the problems created by having a model based on an opposite concept. The largest assumption, however, is that of an empty space existing between the particles, which implies some kind of natural void or lack of matter. The concept of a natural void, nothing or zero is not possible; even space itself is not a void. Space is also covered in the later chapters. The step-by-step technique used to explain the new atom will use current theory as a starting point, replacing each component one part at a time. The first component of current theory to be replaced is the neutron, or neutral particle. The replacements are not going to be matter but fields; that is, we are going to use fields and not particles to illustrate the new atom. This first field shall be called the 'neutronic field', since it replaces the old concept of the neutron particle. The term neutronic field shall remain in effect for the entirety of the remaining contents in reference to the new atomic model. The placing of the neutronic field shall not be in the nucleus, as was the neutron particle, but around the remainder of the current atom, that is, the protons and electrons. This placing of the neutronic field around the outside is illustrated below in figure 10. Figure 10. The neutronic field can also be viewed in terms of the program of the atom. This field defines the shape and size of a particular atom or group of atoms. The definition of such a field is determined by its frequency, so the program is the direct result of its frequency. Different atoms are constructed with different frequencies or programs. Figure 10 is a simplified diagram representing the complex neutronic field; however, this simplification makes it possible to begin an understanding of such a field. An important point is that the neutronic field is a three-dimensional field; however, it may be viewed in simple two-dimensional form at this stage for the sole purpose of conceptual acceptance. The neutronic field, at this point, can also be considered as the housing for the protons and electrons. The size and shape would determine how many protons and electrons the field could hold at any one point in time. A neutronic field is also not perfectly round; that is, it is not a perfect sphere. The field has high points and low points. These high and low points we shall call bumps and divots. The high points are the bumps and the low points are the divots. Figure 11 illustrates these bumps and divots, which are not all the same. Figure 11. The bumps and divots represent the points that define the shape and size of the neutronic field. Figure 11 does not represent any particular atom; it illustrates the size-shape concept only. It is beneficial to start to visualise these shapes in their three-dimensional form. So far we have only considered the neutronic field of a stable atom. To gain a better understanding of the shape concept, the neutronic field of an unstable atom such as neodymium is illustrated in figure 12.
The periodic table lists neodymium as element number 60, and it is considered as having an overcharge, that is, more electrons than it needs. This overcharge is considered as having a value of four; the four overcharge is an excess of four electrons. These extra four electrons are not trapped within the neutronic field and can move freely should the right conditions arise. Figure 12. It could be said that the program or frequency of the atom allows a four overcharge. The neutronic field is also a neutral field; that is, it is neither attracted nor repelled by another neutronic field. The next step in our new model is to delete the electrons. The concept of an electron as a charged particle shall be replaced with another field, a field that is neither positive nor negative but unified. This field shall be referred to as the 'unified charge', and this term shall remain in effect for the rest of the contents. Since this unified charge is neither positive nor negative it has no opposite. A unified charge is attracted to another unified charge; however, a unified charge is also attracted to a neutronic field, and its attraction to the neutronic field is the greater of the two. Figure 13. Representation of a stable atom. Figure 13 is the representation of the stable atom; figure 14 illustrates an atom with an overcharge. The unified charge is contained within the neutronic field of the stable atom, whereas the unstable atom has some of its charge on the outside of the neutronic field. Just as the unified charge is determined by the neutronic field of the stable atom, so is the overcharge of the unstable atom. So far there are no opposites in the new model, nor are there going to be since, as explained earlier, there are no opposites in real terms. In figure 14, of the unstable atom, we can use some charge values to help show the difference between the unified charge and the overcharge. Figure 14. Over-charged atom. Some rules have been established to govern the new model as it stands thus far. 1. Unified charge attracts to unified charge. 2. Unified charge attracts to neutronic field. 3. Unified charge attraction to neutronic field is greater than to another charge. 4. Neutronic field is not attracted to a neutronic field. The only part of the original old theory that now remains is that of the protons, which we will now replace with another field. This field is to be placed into the center of our new model, directly replacing the protons, and shall be termed the emanation, which shall remain in effect for the entirety of the contents. The emanation is the most complex part of the new model, since its action determines the atom. This action can only be considered in three-dimensional terms, for an atom is a three-dimensional object. The emanation can also be considered as the source of the program for the neutronic field. The mathematical system used by the emanation shall be covered in a little more detail in a later chapter. The first concept of the emanation field that must be understood is that the field emanates, and this emanation is from a center point outwards in a three-dimensional plane. The three-dimensional plane can be considered sphere-like but not perfect. Figure 15 illustrates the forming of each outward point, which combine to form the total emanation. Figure 15. The ends of the lines emanating outward in figure 15
above represent the point or tip of the outer boundary of the emanation. All these points form the three-dimensional shape and size of the single emanation. This pattern or program is its frequency. The number of lines shown in figure 15 is by no means a correct amount for any particular emanation, since the emanation would have many more points than we are able to illustrate on the scale necessary. Figure 16. Representation of the new model with the emanation now in the center. The natural overcharge of an unstable atom, as illustrated above, can be seen in the areas where the emanation does not protrude beyond the unified charge. The areas between the emanations that completely protrude either side of the unified charge would be considered as stable, or as areas containing their natural unified charge levels. It is possible, however, to overcharge an atom beyond its natural stable point, which would then be termed hyper-charging the atom. Hyper-charging and its effects will also be covered in later chapters. Another view of the natural overcharge of the unstable atom is illustrated in figure 17. Figure 17. The emanation of the atom is also in constant motion. This motion could be described as the forming of the individual points, where each individual point is formed one at a time with such speed as to appear as a complete emanation. This sequence is covered in more detail shortly. The emanation is not the only part of the new model that is in motion, as the unified charge is also in motion. The emanation is also the source of the neutronic field, since the neutronic field exists at the boundary of the emanation field. The neutronic field could also be considered as an effect of the emanation field. The emanation field could also be considered as the neutronic field; however, it is necessary to keep the two concepts separate at this point, since the outer neutronic field plays a much larger part when dealing with groups of atoms. One important point, which will become clear later, is that single atoms do not exist by themselves. Our single atom is for progressive teaching only. In figure 18 below, two atoms are illustrated to show how the neutronic field becomes one field around the two atoms. This is an example of the 1+1=1 concept, where the combining fields form the one field. The emanations, however, remain two separate emanations. This figure also helps to explain the relationship between the emanation field and the neutronic field, since the boundaries of the emanation fields form the single neutronic field surrounding the number of atoms as a unit. Should you add more atoms to the unit, the boundary of the unit will change, hence changing the neutronic field. The atoms can also be seen meshing close together; however, they mesh considerably closer than illustrated, which also allows the unified charges to become one unified charge or field. The pattern or program of the atom allows this very close meshing to take place. Should you wish to add more atoms to the unit, several conditions would apply. Figure 18. Firstly, the program patterns of both the unit you wish to add to and the atoms you wish to add must be compatible; that is, the programs of both must be capable of meshing together. This can be observed in modern chemistry, where some particular atoms will not bond with others since their programs are not compatible, but with the use of a third atom that is compatible with the other two, a bond can be achieved. This bonding system is common and present in nearly all known matter.
Secondly, the atoms must be allowed to get close enough together to be able to mesh. There are many variables and processes which make this possible, all of which need to be considered on an individual basis. The next chapter will cover a number of these effects. The close meshing of the atoms is part of what forms the structure of the universe, meaning every single atom is in close proximity to the next, as illustrated in figure 19. Figure 19. This concept of atomic structure also applies to space, since space itself is not a void, as voids do not exist. The atomic structure of space has, however, never been defined. Some attempts have been made to describe space as an unexplained ether to help negate the concept of the void; however, it is the structure of the atom that is most important. The close mesh of the atom is not just the product of its shape and size; it is its individual motion that is important, and this motion needs to be considered for each individual atom itself and then for each other atom individually within the group. After this we can consider what all the individual actions mean to the group as a whole. Current science could be understood as interpreting the group's action as a whole as the singular action, rather than as the result of the group action, hence the search for things like quarks and other smaller particles. The formation of the motion for the individual atom needs to be simplified for the purpose of initial understanding. The three-dimensional motion is its frequency or program, providing it is not affected by an external source. To describe the frequency of the atom would be to select a single point of the emanation field and then track each other point in a three-dimensional plane until you had the complete motional signature for that particular atom. A simpler way to visualise this would be to use an artificial or mock emanation field that has very few points in its frequency, then place the emanation field into an elastic bubble which is smaller than the emanation, so that each time the emanation forms a point you would visually see the surface of the bubble rise and fall with the emanation field. Observing this would give the view of a pulsating surface on the bubble. Each time there is a pulse the shape of the bubble changes; however, this is only part of the view since, in real terms, some of the emanations will not reach the surface of the bubble, so where this occurs you must visualise the bubble dipping inwards to meet the point of the emanation. All this is viewed in slow motion, as this action is performed in real terms at great speed. Once this visualisation is complete, use the same motion concept except without the bubble, because in real terms the emanation is not housed within a bubble. This motion is considered a rotational motion, as it moves in a three-dimensional plane. The rotational motion is the frequency of the atom. The frequency can also be described as a vibration. That is, the rotational motion, frequency and vibration are all the same in reference to the new atomic model, as they all describe the same motion. An important point about the use of the term vibration is that it is not used in reference to any current conception of the term, where it is described as a back-and-forth motion, especially in reference to any current wave theory, since wave theory is not a motion that can be used to describe the interactions within the new atomic model.
The next step is to consider the action of the atom that is beside the atom whose motion we have just described. If the two atoms were of the same kind they would have the same action, except that the orientation of the starting points of both atoms may not be the same, which is the case in most circumstances. This means that both atoms are not orientated with the same rotational alignment in reference to each other. This applies to each and every individual atom within the group; however, it is possible to achieve different degrees of rotational alignment, which will alter the effect of the group as a whole. Figure 20. Figure 20 simplifies the concept to illustrate the difference in rotational alignment. The complexity of the whole can only be calculated on the basis of each individual atom. This multiple interaction will be expanded upon as we proceed through the chapters on effects and the advanced atom.

Quantum Physics and Notable Physicists

The QEnergySpa is based on quantum physics. Quantum physics is a science not very well understood by the average member of the public. Niels Bohr says it best: "Anybody who is not shocked by quantum mechanics has not understood it!" In order to understand the QEnergySpa, BEFE technology, it may be necessary to understand the basic principles of nature (natural laws), matter and the universe, right down to the smallest accepted unit, 'Rutherford's Atom' and/or 'the Quantum Model of the Atom'. It is also helpful to understand quantum field science and the history behind quantum physics. What is quantum physics? To understand this question requires first a simple step in perception, which anyone can take. You simply have to discard the notion of atoms as billiard balls and replace it with a notion of them as waves or vibrations. It is just a new way of looking at the same old reality. Quantum physics is based on the one overlooked unifying physical principle. It completely replaces the current atomic theories, allowing explanation of all actions and functions, including anomalies and random chaotic events that occur naturally in nature, ranging from the simplest physical reaction through to the complexity of the human biomass. Definition of quantum physics: the study and theory of tiny packages, which are called quanta. Quantum theory deals with the mathematical equations of motion and interaction of atomic and subatomic particles, incorporating the concepts of the wave-particle duality. Matter is made of waves, a view associated with Dr Milo Wolff (1986), John Beaulieu (1995), Dr. Hans Jenny (1972), Max Born (Nobel Prize in Physics, 1954), Paul Dirac and Erwin Schrödinger (Nobel laureates, 1933), de Broglie (Nobel Prize, 1929), Albert Einstein (Nobel Prize, 1921), Niels Bohr (Nobel Prize in Physics, 1922), Werner Karl Heisenberg (Nobel Prize, 1932), H. P. Duerr (late 1930s) and Max Planck (Nobel Prize in Physics, 1918). The discrete energy states of matter can be determined by wave equations. When frequency f and de Broglie wavelength λ are substituted into general wave equations, it becomes possible to express energy E and momentum mv as wave functions, as Erwin Schrödinger showed, for which he won the Nobel Prize in 1933. "A form that appears solid is actually created by an underlying vibration" – John Beaulieu, 1995.
"Every cell has its own frequency and a number of cells with the same frequency create a new frequency which is in harmony with the original, which in its turn possibly forms an organ that also creates a new frequency in harmony with the two preceding ones. The key to understanding how we can heal the body lies in our understanding of how different frequencies influence genes, cells and various structures in the body." – Dr. Hans Jenny, 1972. "The laws of physics and the structure of matter ultimately depend upon the waves from the total of matter in a universe. Every particle communicates its wave state with all other matter so that the particle structure, energy exchange and the laws of physics are properties of the entire ensemble." – Wolff, 1998. "The nature of reality and of consciousness is a coherent whole, which is never static or complete but which is an unending process of movement and unfoldment... Particles are not individual entities, but are actually extensions of the same fundamental something." – David Bohm, early 1980s. "Electromagnetic energy is the most plentiful constant energy of our universe." – Jon Barron.

Quantum Field Science

Electromagnetic fields are present everywhere in our environment and are invisible to the human eye. Besides natural sources, the electromagnetic spectrum also includes fields generated by man-made sources, such as the x-rays employed to diagnose broken limbs. The field concept was originally developed by Michael Faraday. Feynman suggested that fields are used to describe all cases where two bodies separated in space exert a force on each other. The field is thus a kind of "middleman" for transmitting forces. Each type of force (electric, magnetic, nuclear, or gravitational) has its own appropriate field; a body experiences the force due to a given field only if the body itself is also a source of that kind of field. Physicists developed the quantum field theory, in which the quantum field, or the vibration, is understood as the one true reality, and the particle or form and the wave or motion are only two polar manifestations of the one reality, vibration – John Beaulieu, 1995. In other words, a field is a signature emanation of an object. Furthermore, it is a culmination of many constituents and/or actions/functions that make it up. Hence, matter is to be considered a function of the field. Since every event is constantly in motion and evolving within its own environmental constraints, and those environments are a function of the outcome of the previous evolvement, this can be considered as "fluid" motion, with no pauses, stops or stationary events occurring within its construct. A simple analogy to illustrate this concept is water. When in its natural environmental state, based on its location and functionality, it is liquid. When heated it vaporizes to steam, and when frozen it solidifies to ice. As its environment is altered, so is its representative format. In reference to the observer, however, it always remains water. There are two elements or properties of a field: the frequency and its corresponding wavelength. Fields of different frequencies interact with the body in different ways. One can imagine electromagnetic waves as a series of very regular waves that travel at an enormous speed, the speed of light. The frequency simply describes the number of oscillations or cycles per second, while the term wavelength describes the distance between one wave and the next. Hence wavelength and frequency are inseparably intertwined: the higher the frequency, the shorter the wavelength. (Extract from the World Health Organization (WHO), 2008.)
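The relation just described is the standard one, wavelength = c / f. A minimal Python sketch, with frequencies chosen only as familiar examples, makes the inverse relationship concrete.

    # Wavelength of an electromagnetic wave from its frequency: lambda = c / f.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def wavelength_m(frequency_hz: float) -> float:
        """Free-space wavelength in metres for a frequency in hertz."""
        return C / frequency_hz

    # Higher frequency -> shorter wavelength:
    for f_hz in (50.0, 1.0e6, 2.4e9, 5.0e14):  # mains hum, AM radio, microwave oven, visible light
        print(f"{f_hz:10.3g} Hz -> {wavelength_m(f_hz):10.3g} m")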
1687: Sir Isaac Newton is one of the greatest scientists the world has known. Newton described universal gravitation and the three laws of motion, effectively laying the groundwork for classical mechanics, which dominated the scientific view of the physical universe for the next three centuries and is the basis for modern engineering. 1800: Sir Humphry Davy, a British chemist and physicist, founded the new field of electrochemistry. When an electrical current was passed through some substances (a process later called electrolysis), these substances decomposed. His research suggested that electrical forces could act (generate current) only when the electrolyte was capable of oxidizing one of the metals, and that the intensity of its effect (the voltage generated) was directly related to the reactivity of the electrolyte with the metal. His work led him to propose that "the elements of a chemical compound are held together by electrical forces". 1820: Andre Marie Ampere, the French scientist, discovered the relationship between magnetism and electricity and defined the electrical measurement that bears his name: the ampere, or "amp" for short. 1800s: Michael Faraday, an English chemist and physicist and Sir Davy's assistant, was one of the most influential scientists in history, establishing the basis for the magnetic field concept in physics. He discovered electromagnetic induction, diamagnetism and electrolysis. He established that magnetism could affect rays of light and that there was an underlying relationship between the two phenomena. Mid-1800s: James Clerk Maxwell, the Scottish mathematician and theoretical physicist, formulated a set of equations in electricity, magnetism and inductance – Maxwell's equations – including an important modification to Ampere's Circuital Law. It was the most unified model of electromagnetism yet. He became famous for introducing to the physics community a detailed model of light as an electromagnetic phenomenon, building upon the earlier hypothesis by Faraday (the Faraday effect). Late 1800s: Thomas Alva Edison, also a notable American, produced numerous inventions, including the printing telegraph, automatic telegraph, electric pen, carbon telephone transmitter, phonograph, dynamo, incandescent electric lamp, electric motor, carbons for incandescent lamps, the projecting kinetoscope and, in 1900, the storage battery. In 1883 he observed the flow of electrons from a heated filament, the so-called "Edison effect", and in 1879 he publicly demonstrated his incandescent electric light bulb. In 1893 Nikola Tesla demonstrated 'wireless' radio communication and was credited as the inventor of the radio in 1943. He was widely respected as America's greatest electrical engineer. Much of his early work pioneered modern electrical engineering, and many of his discoveries were of groundbreaking importance. The SI unit measuring magnetic flux density or magnetic induction (known as the tesla) was named in his honor. He discovered many secrets of energetic interactions and did experiments with new energy forms. Some of his developments are still not understood by other scientists, and to this day only a small part of the results of his research is known. Around 1903, in "On the Numerical Relationship of Atomic Weights", Dr.
Alfred Partheil became the first scientist to suggest a correlation between substances, or specifically chemical elements, and frequency. 1900: Max Planck, the German physicist considered to be the founder of quantum theory, received the Nobel Prize in Physics in 1918. Planck's work on the wave structure of matter and standing wave interactions (which occur at discrete frequencies f) underlies the quantum energy states of matter and light 'quanta' (E = hf). He made a profound discovery in modern physics and quantum theory: he showed, from purely formal and mathematical foundations, that light must be emitted and absorbed in discrete amounts if theory was to correctly describe observed phenomena. Hendrik Antoon Lorentz and Pieter Zeeman received the Nobel Prize in Physics in 1902 for their research into the influence of magnetism upon radiation phenomena. They found that light waves were due to oscillations of an electric charge in the atom, its electromagnetic wave nature. This was verified experimentally by measuring the change in the wavelength of the light produced, demonstrating the effect of a strong magnetic field on the oscillations, known as the 'Zeeman effect'. Lorentz's theoretical work on the electromagnetic theory of light assumed that charged particles called electrons carry currents or transport electric charge, and that their vibrations are the cause of electromagnetic waves. Antoine Henri Becquerel, Pierre Curie and Marie Curie received the Nobel Prize in Physics in 1903 in recognition of the discovery of spontaneous radioactivity. The Curies discovered the chemical elements radium and polonium. These radioactive elements contributed to the understanding of atoms, on which modern nuclear physics is based. The experiments on radioactivity contributed to our knowledge of the structure of the atom and were later used by Rutherford to formulate the structure of the atom. In 1911 Ernest Rutherford proposed a revolutionary view of the atom. He suggested that the atom consisted of a small, dense core of positively charged particles in the centre (or nucleus) of the atom, surrounded by a swirling ring of electrons. Rutherford's atom resembled a tiny solar system, with the positively charged nucleus always at the centre and the electrons revolving around the nucleus. He showed that the atom consisted of a positively charged nucleus with negatively charged electrons. This is a realization within quantum theory of a classical object that has been called a "Rutherford atom". Werner Karl Heisenberg, a German physicist and Nobel laureate in 1932, worked with the idea that energy is not a continuous stream but in fact consists of tiny packages, called quanta. He saw light as a particle and a wave, and he stated that our universe is based on the concept of both, rather than the idea of either/or. In the late 1920s, following de Broglie's idea, the question was posed: if an electron travelled as a wave, could you locate the precise position of the electron within the wave? Heisenberg answered no, in what he called the uncertainty principle. 'Anybody who is not shocked by quantum mechanics has not understood it!' – Niels Bohr. Bohr received the Nobel Prize in Physics in 1922 for his investigation of the structure of atoms and the radiation emanating from them. He expanded upon Rutherford's theory in 1913 by proposing that electrons travel only in certain successively larger orbits. He came up with a Quantum Model of the Atom.
He suggested that the outer orbits could hold more electrons than the inner ones, and that these outer orbits determine the atom's chemical properties. Bohr also described the way atoms emit radiation by suggesting that when an electron jumps from an outer orbit to an inner one, it emits light. Bohr further postulated that an atom would not emit radiation while it was in one of its stable states, but rather only when it made a transition between states. The frequency of the radiation so emitted would be equal to the difference in energy between those states divided by Planck's constant. Albert Einstein, the German physicist, received the Nobel Prize in 1921, not for his theory of relativity but for his 1905 work on the photoelectric effect. He said that "all matter is energy". Over a thirty-year period, he continuously built on and improved his own theory of energy and the universe. Einstein paved the way for modern-day quantum physics, building the conceptual model from which we understand the human energy field and human consciousness. "All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken." – Albert Einstein, 1954. "Since the theory of general relativity implies the representation of physical reality by a continuous field, the concept of particles or material points cannot play a fundamental part, nor can the concept of motion". Albert Einstein is correct that there are no discrete particles and that the particle can only appear as a limited region in space in which the field strength or the energy density is particularly high. But it is the high wave-amplitude of the wave-centre of a spherical standing wave in space (not of a continuous spherical force field) that causes the particle effect. Thus, of the three concepts, particles, force fields, and motion, it finally turns out that motion, as the spherical wave motion of space, is the correct concept, as it then explains both particles and fields. In 1923 Robert Andrews Millikan won the Nobel Prize in Physics for his work on the elementary charge of electricity and on the photoelectric effect. The maximum kinetic energy that any photoelectron can possess is the photon energy less the energy required to free an electron from the material, an amount which varies with the particular material. Louis de Broglie earned a Nobel Prize in Physics in 1929 for his discovery of the wave nature of electrons, suggesting that electrons, like light, could act as both particles and waves. De Broglie said: "If electrons are waves, then it kind of makes sense that they don't give off or absorb photons unless they change energy levels. If it stays in the same energy level, the wave isn't really orbiting or 'vibrating' the way an electron does in Rutherford's model, so there's no reason for it to emit any radiation. And if it drops to a lower energy level... the wavelength would be longer, which means the frequency would decrease, so the electron would have less energy. Then it makes sense that the extra energy would have to go some place, so it would escape as a photon... and the opposite would happen if a photon came in with the right amount of energy to bump the electron up to a higher level".
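The photon emission de Broglie describes follows Bohr's frequency condition quoted earlier, f = (E_upper - E_lower) / h. Here is a minimal Python sketch applying it to the hydrogen 3 -> 2 transition; the constants and the Bohr-model level formula are standard textbook values, not taken from this article.

    # Bohr's frequency condition f = (E_upper - E_lower) / h, hydrogen 3 -> 2.
    H = 6.62607015e-34    # Planck's constant, J*s
    EV = 1.602176634e-19  # one electron-volt, J
    C = 299_792_458.0     # speed of light, m/s
    RY_EV = 13.605693     # hydrogen ground-state binding energy, eV

    def level_energy_ev(n: int) -> float:
        """Bohr-model energy of hydrogen level n, in eV (negative = bound)."""
        return -RY_EV / n**2

    delta_e_j = (level_energy_ev(3) - level_energy_ev(2)) * EV  # emitted photon energy, J
    f_hz = delta_e_j / H
    print(f"f = {f_hz:.3e} Hz, wavelength = {C / f_hz * 1e9:.0f} nm")  # ~656 nm, the red Balmer line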
Erwin Schrödinger, the Austrian physicist famous for his contributions to quantum mechanics, especially the Schrödinger equation, won the Nobel Prize (along with Dirac) in 1933. He made a profound discovery in 1927 by showing that the discrete energy states of matter could be determined by wave equations. Schrödinger discovered that when frequency f and the de Broglie wavelength λ were substituted into general wave equations, it became possible to express energy E and momentum mv as wave functions; thus, a confined particle (e.g. an electron in an atom or molecule) with known energy and momentum functions could be described with a certain wave function. After extensive correspondence with his personal friend Albert Einstein, he proposed the Schrödinger's cat thought experiment. "The task is, not so much to see what no one has yet seen; but to think what nobody has yet thought, about that which everybody sees". Certain standing wave frequencies of matter corresponded to certain energy states. The agreement of observed frequencies with Schrödinger's wave equations further established the fundamental importance of quantum theory and thus the wave properties of both light and matter.
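The substitution described in this paragraph can be written out explicitly. The following LaTeX sketch records the standard textbook steps: inserting E = hf (so E = ħω) and the de Broglie relation p = h/λ (so p = ħk) into a plane wave recovers the free-particle Schrödinger equation.

    % Plane wave with the Planck and de Broglie relations:
    %   E = hf = \hbar\omega,   p = h/\lambda = \hbar k.
    \begin{align}
      \psi(x,t) &= e^{i(kx-\omega t)}, \\
      i\hbar\,\frac{\partial\psi}{\partial t} &= \hbar\omega\,\psi = E\,\psi, \\
      -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}}
        &= \frac{\hbar^{2}k^{2}}{2m}\,\psi = \frac{p^{2}}{2m}\,\psi .
    \end{align}
    % For a free particle E = p^2 / 2m, so the two lines combine into
    % the free-particle Schrodinger equation:
    \begin{equation}
      i\hbar\,\frac{\partial\psi}{\partial t}
        = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}} .
    \end{equation}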
Paul Dirac won the Nobel Prize in 1933 with Erwin Schrödinger. His contribution was the Dirac equation, which concerns the mathematical more than the theoretical aspects of quantum mechanics and sought a relation between quantum theory and the conservation of energy in special relativity. The importance of Dirac's work lies essentially in his famous wave equation, which introduced special relativity into Schrödinger's equation. Taking into account the fact that, mathematically speaking, relativity theory and quantum theory are not only distinct from each other but also oppose each other, Dirac's work could be considered a fruitful reconciliation between the two theories. Max Born proposed a statistical interpretation of the wave function, the 'probability wave' interpretation, and earned the Nobel Prize in Physics in 1954. Richard Phillips Feynman was an American physicist known for expanding the theory of quantum electrodynamics and particle theory. For his work on quantum electrodynamics Feynman was a joint recipient of the Nobel Prize in Physics in 1965, together with Julian Schwinger and Sin-Itiro Tomonaga. He developed a widely used pictorial representation for the mathematical expressions governing the behaviour of subatomic particles, which later became known as Feynman diagrams. Simple graphs represent possible variations of interactions and provide for precise mathematical equations. He said: "The charge on a particle is proportional to the probability that it will emit or absorb a photon". The field concept was developed by M. Faraday, based on his investigation of the lines of force that appear to leave and return to a magnet at its poles. Feynman suggests that fields are used to describe all cases where two bodies separated in space exert a force on each other. If a change occurs at the source, its effect propagates outward through the field at a constant speed and is felt at the detector only after a certain delay in time. The field is thus a kind of "middleman" for transmitting forces. Each type of force (electric, magnetic, nuclear, or gravitational) has its own appropriate field; a body experiences the force due to a given field only if the body itself is also a source of that kind of field. Quantum field theory applied to the understanding of electromagnetism is called quantum electrodynamics (QED), and it has proved spectacularly successful in describing the interaction of light with matter. The calculations, however, are often complex and are usually carried out with the aid of Feynman diagrams. Feynman talked about the conception of charged particles having spherical electromagnetic 'advanced and retarded waves', which were later called 'in and out waves' by Wolff. 1979: the German physicist Burkhard Heim came up with a six-dimensional unified field theory based on Einstein's theory of relativity and quantum physics. He concluded that before any chemical reaction can take place, at least one electron must be activated by a photon with a certain wavelength and enough energy. Dr. Hans Jenny suggested in 1972 that evolution is a result of vibrations and that their nature determined the ultimate outcome. He speculated that every cell had its own frequency and that a number of cells with the same frequency created a new frequency which was in harmony with the original, which in its turn possibly formed an organ that also created a new frequency in harmony with the two preceding ones. Jenny was saying that the key to understanding how we can heal the body with the help of tones lies in our understanding of how different frequencies influence genes, cells and various structures in the body. He also suggested that through the study of the human ear and larynx we would be able to come to a deeper understanding of the ultimate cause of vibrations. Jenny sums up these phenomena in a three-part unity. The fundamental and generative power is in the vibration which, with its periodicity, sustains phenomena with its two poles. At one pole we have form, the figurative pattern. At the other is motion, the dynamic process. These three fields, vibration and periodicity as the ground field and form and motion as the two poles, constitute an indivisible whole, Jenny says, even though one can dominate sometimes. In 1964 John Bell, addressing the EPR paradox of quantum mechanics, published his mathematical proof: a theorem that elegantly proved that if momentum and position were absolute values (that is, if they exist whether they are measured or not), then an inequality, Bell's Inequality, would be satisfied. Scientists had said that there were "hidden variables" existing in the photons that allow them to behave this way; hidden variables are variables that we have yet to discover. Bell proved mathematically that this was impossible with this inequality. Bell's Inequality equation: Number(A, not B) + Number(B, not C) >= Number(A, not C).
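The counting form of the inequality above can be checked mechanically: in any classical population where every member has definite yes/no values for A, B and C, the inequality can never be violated. A minimal brute-force Python sketch, added here for illustration and not part of the original text:

    # Check Number(A, not B) + Number(B, not C) >= Number(A, not C) for
    # random classical populations with definite values of A, B and C.
    import random
    from itertools import product

    def bell_holds(counts):
        """counts maps each (a, b, c) truth triple to a non-negative population count."""
        n_a_notb = sum(n for (a, b, c), n in counts.items() if a and not b)
        n_b_notc = sum(n for (a, b, c), n in counts.items() if b and not c)
        n_a_notc = sum(n for (a, b, c), n in counts.items() if a and not c)
        return n_a_notb + n_b_notc >= n_a_notc

    for _ in range(10_000):
        counts = {t: random.randint(0, 100) for t in product((False, True), repeat=3)}
        assert bell_holds(counts)  # definite values always satisfy the inequality
    print("Bell's counting inequality held for every sampled classical population.")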
In general, the profound consequence of the Aharonov-Bohm effect is that knowledge of the classical electromagnetic field acting locally on a particle is not sufficient to predict its quantum-mechanical behavior. The most commonly described case, often called the Aharonov-Bohm solenoid effect, is when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being zero in the region through which the particle passes. This phase shift has been observed experimentally by its effect on interference fringes. (There are also magnetic Aharonov-Bohm effects on bound energies and scattering cross sections, but these cases have not been experimentally tested.) An electric Aharonov-Bohm phenomenon was also predicted, in which a charged particle is affected by regions with different electrical potentials but zero electric field, and this has also seen experimental confirmation. A separate "molecular" Aharonov-Bohm effect was proposed for nuclear motion in multiply-connected regions, but this has been argued to be essentially different, depending only on local quantities along the nuclear path (Sjöqvist, 2002).

John Beaulieu, in his book Music and Sound in the Healing Arts (1995), draws a comparison between his own three-part structure, which in many respects resembles Jenny's, and the conclusions researchers working with subatomic particles have reached. "There is a similarity between cymatic pictures and quantum particles. In both cases, that which appears to be a solid form is also a wave. This is the great mystery with sound: there is no solidity! A form that appears solid is actually created by an underlying vibration." In an attempt to explain the unity in this dualism between wave and form, physics developed the quantum field theory, in which the quantum field, or in our terminology the vibration, is understood as the one true reality, and the particle or form, and the wave or motion, are only two polar manifestations of the one reality, vibration, says Beaulieu.

The Wave Structure of Matter (WSM) was formalised by the mathematical physicist Dr Milo Wolff in 1986. The WSM is claimed to explain and solve many of the problems of modern physics from the simplest possible science foundation: matter is made of waves. Current physics represents matter as "particles" which generate "forces/fields" that act on other particles at a distance in space and time. The spherical standing-wave structure of matter explains how the particle is formed from the wave-centre of the spherical waves. Fields are caused by wave interactions of the spherical In and Out Waves with other matter (explaining action-at-a-distance). The spherical In-Waves are formed from the Out-Waves of other matter in the universe, which then explains Mach's principle, i.e. that the mass of a body is determined by all other matter in the universe (Wolff, 1986). He discovered two things (both of which, it is claimed, deserve a Nobel Prize in their own right). Firstly, from reading Feynman's PhD thesis he was aware of Feynman's conception of charged particles which "somehow" generated spherical electromagnetic In and Out Waves (the dynamic waves of a space resonance), but he realised that there are no solutions for spherical vector electromagnetic waves (mathematical waves which require both a quantity of force and a direction of force, i.e. a vector). Wolff had the foresight to try using real waves, which are scalar (defined by their wave-amplitude only).
And this then led to a series of remarkable discoveries. "Although the origin of spin has been a fascinating problem of physics for sixty years, spin itself is not the important result. Instead, the most extraordinary conclusion of the wave electron structure is that the laws of physics and the structure of matter ultimately depend upon the waves from the total of matter in a universe. Every particle communicates its wave state with all other matter, so that the particle structure, energy exchange, and the laws of physics are properties of the entire ensemble. This is the origin of Mach's principle. The universal properties of the quantum space waves are also found to underlie the universal clock and the constants of nature."
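Returning to Bell's counting inequality quoted earlier in this section: below is a minimal brute-force sketch (an illustration added here, not taken from any of the sources quoted above) that checks Number(A, not B) + Number(B, not C) >= Number(A, not C) for every possible assignment of three yes/no properties, i.e. for any "hidden variable" bookkeeping in which the properties have definite values whether measured or not.

from itertools import product

A, B, C = 0, 1, 2  # indices of the three yes/no properties

def count(population, yes, no):
    # Count members that have property `yes` and lack property `no`.
    return sum(1 for member in population if member[yes] and not member[no])

# Enumerate every possible population of three members, each member being
# a tuple (A, B, C) of definite truth values.  The inequality actually holds
# member by member, so it holds for populations of any size.
for population in product(product([False, True], repeat=3), repeat=3):
    lhs = count(population, A, B) + count(population, B, C)
    rhs = count(population, A, C)
    assert lhs >= rhs  # never fails for pre-assigned properties

print("Counting inequality holds for every hidden-variable assignment.")

Quantum-mechanical correlations can violate the analogous inequality precisely because the three properties cannot all be pre-assigned definite values at once.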
Fine structure

From Wikipedia, the free encyclopedia

[Figure: interference fringes, showing fine structure (splitting) of a cooled deuterium source, viewed through a Fabry-Pérot étalon.]

In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. The gross structure of line spectra is the line spectra predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels depend only on the principal quantum number $n$. However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is of the order of $(Z\alpha)^2$, where $Z$ is the atomic number and $\alpha$ is the fine-structure constant, a dimensionless number approximately equal to $1/137$.

The fine structure energy corrections can be obtained by using perturbation theory. To do this one adds three corrective terms to the Hamiltonian: the leading-order relativistic correction to the kinetic energy, the correction due to the spin-orbit coupling, and the Darwin term. The full Hamiltonian is given by
$$H = H_0 + H_{\rm kinetic} + H_{\rm SO} + H_{\rm Darwin},$$
where $H_0$ is the Hamiltonian from the Coulomb interaction. These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity and spin interactions.

Kinetic energy relativistic correction

Classically, the kinetic energy term of the Hamiltonian is
$$T = \frac{p^2}{2m_e},$$
where $p$ is the momentum and $m_e$ is the mass of the electron. However, when considering a more accurate theory of nature via special relativity, we must use the relativistic form of the kinetic energy,
$$T = \sqrt{p^2c^2 + m_e^2c^4} - m_ec^2,$$
where the first term is the total relativistic energy and the second term is the rest energy of the electron ($c$ is the speed of light). Expanding this in a Taylor series (specifically a binomial series, since $p \ll m_ec$), we find
$$T = \frac{p^2}{2m_e} - \frac{p^4}{8m_e^3c^2} + \cdots$$
(a symbolic cross-check of this expansion is sketched after the references at the end of this article). Then the first-order correction to the Hamiltonian is
$$H' = -\frac{p^4}{8m_e^3c^2}.$$
Using this as a perturbation, we can calculate the first-order energy corrections due to relativistic effects,
$$E^{(1)} = \langle\psi^0|H'|\psi^0\rangle = -\frac{1}{8m_e^3c^2}\,\langle\psi^0|p^4|\psi^0\rangle,$$
where $\psi^0$ is the unperturbed wave function. Recalling the unperturbed Hamiltonian $H_0 = p^2/(2m_e) + V$, we see
$$p^2\,|\psi^0\rangle = 2m_e\,(E_n - V)\,|\psi^0\rangle.$$
We can use this result to further calculate the relativistic correction:
$$E^{(1)} = -\frac{1}{2m_ec^2}\left(E_n^2 - 2E_n\langle V\rangle + \langle V^2\rangle\right).$$
For the hydrogen atom, $V = -e^2/(4\pi\varepsilon_0 r)$, $\langle 1/r \rangle = 1/(a_0 n^2)$ and $\langle 1/r^2 \rangle = 1/[a_0^2 n^3 (l+1/2)]$, where $a_0$ is the Bohr radius, $n$ is the principal quantum number and $l$ is the azimuthal quantum number. Therefore, the first-order relativistic correction for the hydrogen atom is
$$E^{(1)} = -\frac{E_n^2}{2m_ec^2}\left(\frac{4n}{l+1/2} - 3\right),$$
where we have used $E_n = -e^2/(8\pi\varepsilon_0 a_0 n^2)$. On final calculation, the relativistic correction to the ground state is of order $10^{-4}$ eV (about $-9\times10^{-4}$ eV).

Spin-orbit coupling

For a hydrogen-like atom with $Z$ protons, orbital angular momentum $\mathbf{L}$ and electron spin $\mathbf{S}$, the spin-orbit term is given by
$$H_{\rm SO} = \left(\frac{Ze^2}{4\pi\varepsilon_0}\right)\left(\frac{g_s-1}{2m_e^2c^2}\right)\frac{\mathbf{L}\cdot\mathbf{S}}{r^3},$$
where $m_e$ is the electron mass, $\varepsilon_0$ is the vacuum permittivity, $g_s$ is the spin g-factor, and $r$ is the distance of the electron from the nucleus. The spin-orbit correction can be understood by shifting from the standard frame of reference (where the electron orbits the nucleus) into one where the electron is stationary and the nucleus instead orbits it. In this case the orbiting nucleus functions as an effective current loop, which in turn generates a magnetic field. However, the electron itself has a magnetic moment due to its intrinsic angular momentum. The two magnetic vectors, $\mathbf{B}$ and $\boldsymbol{\mu}_s$, couple together so that there is a certain energy cost depending on their relative orientation.
This gives rise to the energy correction of the form
$$\Delta E_{\rm SO} = \langle H_{\rm SO} \rangle, \qquad \mathbf{L}\cdot\mathbf{S} = \frac{\hbar^2}{2}\left[j(j+1) - l(l+1) - s(s+1)\right].$$
Notice the reduction by a factor of about 2 (the appearance of $g_s - 1$ rather than $g_s$), known as the Thomas precession, which comes from the relativistic calculation that transforms back to the electron's frame from the frame of the nucleus. The expectation value of the Hamiltonian is
$$\langle H_{\rm SO} \rangle = \frac{E_n^2}{m_ec^2}\;\frac{n\left[j(j+1) - l(l+1) - \tfrac{3}{4}\right]}{l\,(l+1/2)\,(l+1)}.$$
Thus the order of magnitude of the spin-orbit coupling is $E_n^2/(m_ec^2)$, i.e. of order $10^{-5}$ to $10^{-4}$ eV.

Remark: the $(n,l,s)=(n,0,1/2)$ and $(n,l,s)=(n,1,-1/2)$ energy levels are the same according to the fine-structure formula. If we take the g-factor to be 2.0031904622, the calculated energy levels differ from those obtained with 2 as the g-factor; only with 2 as the g-factor do the levels match in the first-order approximation of the relativistic correction. When a higher-order approximation is used for the relativistic term, the 2.0031904622 g-factor may agree with it. However, if we use the g-factor 2.0031904622, the result does not agree with the formula that includes every effect.

Darwin term

There is one last term in the non-relativistic expansion of the Dirac equation. Because it was first derived by Charles Galton Darwin, it is referred to as the Darwin term, and it is given by
$$H_{\rm Darwin} = \frac{\hbar^2}{8m_e^2c^2}\;4\pi\left(\frac{Ze^2}{4\pi\varepsilon_0}\right)\delta^3(\mathbf{r}).$$
The Darwin term affects only the s-orbitals. This is because the wave function of an electron with $l > 0$ vanishes at the origin, hence the delta function has no effect. For example, it gives the 2s-orbital the same energy as the 2p-orbital, by raising the 2s-state by $9.057\times10^{-5}$ eV.

The Darwin term changes the effective potential at the nucleus. It can be interpreted as a smearing out of the electrostatic interaction between the electron and the nucleus due to zitterbewegung, or rapid quantum oscillations, of the electron. This can be demonstrated by a short calculation [1]. Quantum fluctuations allow for the creation of virtual electron-positron pairs with a lifetime estimated by the uncertainty principle, $\Delta t \approx \hbar/(m_ec^2)$. The distance the particles can move during this time is $\xi \approx c\,\Delta t \approx \hbar/(m_ec)$, the Compton wavelength. The electrons of the atom interact with those pairs. This yields a fluctuating electron position $\mathbf{r} + \boldsymbol{\xi}$. Using a Taylor expansion, the effect on the potential $V$ can be estimated:
$$V(\mathbf{r}+\boldsymbol{\xi}) \approx V(\mathbf{r}) + \boldsymbol{\xi}\cdot\nabla V(\mathbf{r}) + \frac{1}{2}\sum_{i,j}\xi_i\,\xi_j\,\partial_i\partial_j V(\mathbf{r}).$$
Averaging over the fluctuations $\boldsymbol{\xi}$, with $\overline{\boldsymbol{\xi}} = 0$ and $\overline{\xi_i\xi_j} = \tfrac{1}{3}\overline{\xi^2}\,\delta_{ij}$, gives the average potential
$$\overline{V(\mathbf{r}+\boldsymbol{\xi})} \approx V(\mathbf{r}) + \frac{\overline{\xi^2}}{6}\,\nabla^2 V(\mathbf{r}).$$
Approximating $\overline{\xi^2} \approx \hbar^2/(m_ec)^2$, this yields the perturbation of the potential due to the fluctuations,
$$\delta V \approx \frac{\hbar^2}{6m_e^2c^2}\,\nabla^2 V = \frac{\hbar^2}{6m_e^2c^2}\;4\pi\left(\frac{Ze^2}{4\pi\varepsilon_0}\right)\delta^3(\mathbf{r}),$$
which differs from the exact Darwin term only slightly (a factor $1/6$ instead of $1/8$).

Another mechanism that affects only the s-state is the Lamb shift, a further, smaller correction that arises in quantum electrodynamics and that should not be confused with the Darwin term. The Darwin term gives the s-state and the p-state the same energy, but the Lamb shift makes the s-state higher in energy than the p-state.

Total effect

The total effect, obtained by summing the three components, is given by the following expression [2]:
$$\Delta E = \frac{E_n\,(Z\alpha)^2}{n}\left(\frac{1}{j+1/2} - \frac{3}{4n}\right),$$
where $j$ is the total angular momentum quantum number ($j = 1/2$ if $l = 0$ and $j = l \pm 1/2$ otherwise). It is worth noting that this expression was first obtained by A. Sommerfeld on the basis of the old Bohr theory, i.e. before modern quantum mechanics was formulated.

The total effect can also be obtained by using the Dirac equation, in which the electron is treated fully relativistically. The exact energies are given by [3]
$$E = m_ec^2\left[1 + \left(\frac{Z\alpha}{n - j - \frac{1}{2} + \sqrt{\left(j+\frac{1}{2}\right)^2 - (Z\alpha)^2}}\right)^2\right]^{-1/2}.$$
This expression, which contains all higher-order terms that were left out in the other calculations, expands to first order to give the energy corrections derived from perturbation theory. However, this equation does not contain the hyperfine structure corrections, which are due to interactions with the nuclear spin.
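As a quick numerical illustration of the first-order formula above, here is a minimal Python sketch (a check added here, not part of the Wikipedia article; the constants are rounded) evaluating the hydrogen $n=2$ fine-structure splitting:

ALPHA = 1.0 / 137.035999    # fine-structure constant (dimensionless)
RYDBERG_EV = 13.605693      # hydrogen ground-state binding energy in eV
EV_TO_GHZ = 241798.9        # conversion 1 eV -> frequency in GHz (approximate)

def fine_structure_shift(n, j, Z=1):
    # First-order fine-structure correction (in eV) to the Bohr level E_n:
    # Delta E = E_n (Z*alpha)^2 / n * (1/(j + 1/2) - 3/(4n)).
    E_n = -Z**2 * RYDBERG_EV / n**2
    return E_n * (Z * ALPHA)**2 / n * (1.0 / (j + 0.5) - 3.0 / (4.0 * n))

split = abs(fine_structure_shift(2, 0.5) - fine_structure_shift(2, 1.5))
print(f"2P1/2 - 2P3/2 splitting: {split:.3e} eV = {split * EV_TO_GHZ:.1f} GHz")

The printed splitting, about $4.5\times10^{-5}$ eV or roughly 11 GHz, is the familiar separation between the $2P_{1/2}$ and $2P_{3/2}$ levels of hydrogen.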
Other corrections from quantum field theory, such as the Lamb shift and the anomalous magnetic dipole moment of the electron, are not included.

References

1. Zelevinsky, Vladimir (2011). Quantum Physics Volume 1: From Basics to Symmetries and Perturbations. Wiley-VCH. ISBN 978-3-527-40979-2, p. 551.
2. Berestetskii, V. B.; Lifshitz, E. M.; Pitaevskii, L. P. (1982). Quantum Electrodynamics. Butterworth-Heinemann. ISBN 978-0-7506-3371-0.
3. Sommerfeld, Arnold (1919). Atombau und Spektrallinien. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7.
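As promised above, here is a symbolic cross-check of the kinetic-energy expansion (a minimal sketch using sympy, added here; not part of the article):

from sympy import symbols, sqrt, series

p, m, c = symbols('p m c', positive=True)

# Relativistic kinetic energy: total energy minus rest energy.
T = sqrt(p**2 * c**2 + m**2 * c**4) - m * c**2

# Binomial expansion for p << m*c reproduces the classical term
# plus the leading relativistic correction -p**4/(8*m**3*c**2).
print(series(T, p, 0, 6))

The output, p**2/(2*m) - p**4/(8*c**2*m**3) + O(p**6), matches the perturbation used in the derivation above.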
Hawking and unitarity

The previous blog article about the very same topic was here. However, Hawking's semiclassical calculation leads to an exactly (piecewise) thermal final state. Such a mixed state in the far future violates unitarity, because pure states cannot evolve into mixed states unitarily, and it destroys the initial information about the collapsed objects, which is why we call it the "information loss puzzle". A tension with quantum mechanics emerges.

There have been roughly three major groups of answers that people proposed.

1. One of them is essentially dead today; it is the remnant theory. It argued that the black hole does not evaporate completely. Instead, a small light remnant with a large entropy remains after the evaporation process, and this remnant is what preserves the information. This approach is highly disfavored today because such small seeds simply should not be able to carry a large entropy (it would violate holography). Moreover, this approach does not save unitarity anyway, because the scenario still assumes the thermal radiation to be in a mixed state.

2. The other two general answers are obvious. One of them says that the information is lost, indeed: the qualitative features of Hawking's semiclassical calculation, the evolution into mixed states, survive in the exact analysis, too. Such an approach is popular among the General Relativity fundamentalists who believe that the fabric of spacetime is exactly what we think it is classically; causality in particular must be exact and no information can ever get out of a black hole. I formulated the argument in a way that makes it clear that it looks dumb to me, especially today, when we know that the topology of space may change and that black holes exist in unitary backgrounds of string theory. The Hawking process itself is an example of a violation of the strict rules of locality and causality by black hole physics!

3. The last answer, the only one that has always respected the principles of 20th century physics, says that the information is preserved in the same way as in any other process in the world; burning books is an example. (Only later did I notice that Hawking has independently chosen the very same example.) When we burn books, it looks as though we are destroying information, but of course the information about the letters remains encoded in the correlations between the particles of the smoke that remains; it is just hard to read a book from its smoke. The smoke otherwise looks universal, much like the thermal radiation of a black hole. But we know that if we look at the situation in detail, using the full many-body Schrödinger equation, the state of the electrons evolves unitarily. The same thing must hold for black holes. And the feeling that such a transfer of information is impossible because of the horizon is just an illusion; it is an artifact of the semiclassical approximation that paints the rules of locality and causality as more strict than they are in the full theory. Locality and causality are, in general, approximate emergent concepts that appear in the (semi)classical limit. The power of the full theory of quantum gravity to violate locality and causality in a subtle way is manifested whenever horizons develop, and it is responsible for the conservation of the information.

Note that the conservation of the information is the only answer that can be acceptable for a physicist who treats the postulates of quantum mechanics seriously.
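To spell out the standard statement behind this (a textbook gloss added here, not a quote from the post): if scattering is described by a unitary operator $S$, purity is conserved,
$$\rho_{\rm out} = S\,\rho_{\rm in}\,S^\dagger \quad\Longrightarrow\quad {\rm Tr}\,\rho_{\rm out}^2 = {\rm Tr}\,\rho_{\rm in}^2,$$
so a pure initial state, with ${\rm Tr}\,\rho^2 = 1$, can never evolve into the mixed thermal state $\rho \propto e^{-H/T_H}$, with ${\rm Tr}\,\rho^2 < 1$, that the semiclassical calculation produces.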
No doubt, the postulates of quantum mechanics seem rigid and un-modifiable, while the exact degrees of freedom and the terms in the Lagrangian that describe general relativity are flexible. The quantum mechanical postulates have a higher priority, and they tell us that the information must be preserved in the details of the nearly thermal Hawking radiation that remains after the black hole disappears.

While Stephen Hawking had believed that the information was lost, and he had made bets of this kind, he eventually switched to our side in the summer of 2003 or 2004 (I am uncertain now). As you could hear from CNN and other major global news agencies, he officially admitted that his opinion was incorrect. The deep insights in string theory convinced him that John Preskill was right and the bet was lost; Hawking gave an encyclopedia to Preskill as promised. Among these insights that convinced Hawking, you find Matrix theory and especially the AdS/CFT correspondence. Gravity in asymptotically AdS spaces has an equivalent description in terms of a conformal field theory living on its boundary. This conformal field theory is manifestly unitary and has no room for destruction of the information. This answers an equivalent question about gravity, too. This brings most sane physicists to the opinion that the information is preserved and gravitational physics is not that special after all. But it does not give us a quantitative, calculable framework that would explain how the information gets out of the black holes and what the subtle correlations that remember the initial state look like.

Hawking's recent solution

Hawking has announced that he has solved the problem. The main ideas of his solution are the following ones:

• The scattering S-matrix is the main "nice" observable that should be calculated in a theory of quantum gravity. (I fully agree.)
• The scattering does not prevent a black hole from being formed, but such a black hole is just like any other intermediate state or resonance. (I fully agree.)
• The thermal nature of the resulting radiation is a consequence of an approximation (that becomes accurate for large black holes), but there is no qualitative difference between black hole intermediate states and other intermediate states; the transition is smooth. (It was actually just me who formulated this point in this way.)
• Just like in quantum field theory, the Euclidean setup combined with the Wick rotation is an essential technical tool to do the calculations; Hawking refers to Euclidean gravity as the "only sane way" to do quantum gravity. In the gravitational context, this approach was promoted and improved by Hawking and Gibbons. In fact, the Euclidean approach may be even more important in quantum gravity than it is in quantum field theory, and its procedures may represent an even larger fraction of the derivations in the gravitational context. (I agree, and as far as I know, the people who disagree, such as Jacques Distler, have not offered any rational and valid arguments so far.)

OK, so Hawking tells you to calculate the S-matrix by a Euclidean path integral over topologically trivial configurations (spacetimes), those that are continuously connected to the empty spacetime. Such a process may involve a production of a large number of particles in the final state, which is a hallmark of an intermediate black hole. Once you calculate the Euclidean S-matrix, you Wick rotate the results to get the amplitudes for the Minkowski signature.
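For reference, the Wick rotation in question is the standard substitution (textbook material, not specific to Hawking's paper):
$$t = -i\tau, \qquad e^{iS_{\rm Lorentzian}} \longrightarrow e^{-S_{\rm Euclidean}},$$
which turns the oscillatory Lorentzian path integral into a damped Euclidean one; at the end, the amplitudes are analytically continued back to real time.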
Note that we have only included the topologically trivial spacetimes, and this is a good choice that preserves unitarity. On the second page, Hawking proceeds with some technical subtleties. He wants to allow strong gravitational fields to occur even in the initial and final states, it seems. (It does not seem necessary when one talks about the generic S-matrix elements, but it is conceivable that these strong fields appear in the Euclidean spacetime anyway.) With strong gravitational fields in place, one can't meaningfully define the wavefunction at a time "t" because there is no preferred diff-invariant way of slicing the spacetime. Hawking solves this by a seemingly bizarre operation: he calculates a partition sum with periodic Euclidean time instead of the transition amplitude; it is not 100% clear at this point how he will introduce the initial and final states in this setup. (Note that the Euclidean time is spacelike and it should therefore not be interpreted as a source of the usual violation of causality.)

Moreover, this partition sum has a volume-extensive divergent factor. Hawking regulates this infrared problem by introducing a small negative (anti-de-Sitter-like) cosmological constant that does not change the local physics of small black holes much. He obviously deforms the picture into an AdS one in order to get a background that is as well defined as the usual AdS/CFT backgrounds in string theory. Hawking states that because we are making all measurements at infinity, we can never be sure whether a black hole is present inside or not. This looks like cheating to me; equivalently, it suggests that no true solution is being looked for. Of course, if we only work with the boundary degrees of freedom, we will see no unitarity violations and no problems associated with the black hole dynamics. It's simply because all these things are encoded in the CFT, which is unitary. The true surviving question is how this unitary description is reconciled with the bulk interpretation, in which a macroscopic black hole is demonstrably present and has the potential to cause information loss headaches.

Hawking does not have a working convergent path integral beyond the semiclassical approximation, but let us join Hawking and pretend that this problem is absent. He computes the partition sum over geometries whose boundaries at infinity are topologically S^2 (the sphere at infinity) times S^1 (the periodic Euclidean time); he works in four spacetime dimensions. There are two simple spacetimes with this boundary: B^3 times S^1 is the empty flat (or anti-de-Sitter) spacetime, while S^2 times D^2 is the anti-de-Sitter Schwarzschild topology. While the empty spacetime can be foliated, S^2 times D^2 cannot, because it has no S^1 factor, roughly speaking. Because it can't be foliated, you can't even define what the conservation of the information should mean in this topologically non-trivial case. The contributions to the correlators coming from the topologically trivial case are conserved as the Lorentzian time T grows; the contributions from the topologically non-trivial backgrounds decay.

On page 3, Hawking confirms that he was inspired by Maldacena's hep-th/0106112 about the eternal black holes in anti de Sitter space. In that case, you also have two, actually three, geometries that fit the S^1 times S^2 boundary: empty space, small black holes, and large black holes (compared to the radius of curvature). The large black holes dominate the ensemble; they have a large negative action.
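Schematically (my own shorthand for the saddle-point counting, not notation from either paper), the partition sum at inverse temperature $\beta$ receives one contribution per Euclidean geometry that fills in the $S^1 \times S^2$ boundary:
$$Z(\beta) \approx e^{-I_{\rm empty}(\beta)} + e^{-I_{\rm small\ BH}(\beta)} + e^{-I_{\rm large\ BH}(\beta)},$$
where $I$ denotes the Euclidean action of each saddle; the saddle with the most negative action dominates, which at high enough temperature is the large black hole.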
Nevertheless, using the bulk techniques you may calculate that a correlator of O(x)O(y) on the boundary decays for large separations (while it has the usual flat-space behavior if x and y are nearby). Such a decrease looks much like other cases of information loss; nevertheless, in this case you may argue that there is a unitary CFT behind it and that the exponential decrease may in principle be reduced to repeated scattering. Maldacena also showed that the contribution of the empty spacetime does not decay and has the right magnitude to be consistent with unitarity; Hawking argues that he strengthened this observation by having shown that the path integral over topologically trivial spacetimes only is unitary. (Again, it is not obvious whether his formal argument holds in reality, because of the usual loop UV problems of general relativity.)

The large black holes are not too interesting because they don't evaporate. Instead, we want to look at the small black holes. Hawking has been trying to find a Euclidean geometry corresponding to an evaporating Lorentzian black hole for years. Now he says that he failed because there is no such geometry. In the Euclidean setup, only the metrics that can be foliated, empty space and eternal black holes, should be added to the path integral.

One of the main questions that you must certainly ask is: why does dynamics over topologically trivial spacetimes look like the creation of a long-lived black hole with horizons in the Lorentzian signature? I believe that Hawking does not fully answer this question; he only says that "thermal fluctuations may occasionally be large enough to cause a gravitational collapse that creates a small black hole". Let me re-iterate that such a short comment is deeply unsatisfactory. What we want to understand in the first place is the bulk description of the process in which we can see that the usual long-lived black hole is there; we want to see how the concepts of locality and causality are corrected so that the information can escape. Hawking only says that this solution of the information loss puzzle is possible. We could have said the same thing just because there is a dual unitary CFT description. But the local bulk dynamical mechanisms that make these things possible remain nearly as cloudy as before.

Some of Hawking's conclusions say:

• There are no baby universes branching off, which is what Hawking used to think. The information is preserved purely in our Universe.
• The black hole can form while remaining topologically trivial because its evaporation may be viewed as a tunnelling process (Hartle-Hawking).

Although this comment can't be considered a quantitative answer to my main question, I like it, and let me describe an analogy. Imagine the quantum mechanics of a particle on a line. The classically inaccessible regions (E smaller than V) may be compared to the black hole interior. Classically, these are qualitatively different regions from the rest. However, quantum mechanically, the qualitative difference disappears because of tunnelling. All points on the line are qualitatively on an equal footing: you can get there. This is why the black hole should be thought of as having a trivial topology quantum mechanically. The situation would change for an infinite inaccessible region (an infinite black hole), where you can't tunnel.
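To make the analogy quantitative with a toy estimate (an illustration added here, not anything from Hawking's paper): the WKB transmission probability through a classically forbidden region is $T \sim \exp\left(-\frac{2}{\hbar}\int\sqrt{2m\,(V-E)}\,dx\right)$, which is nonzero for any finite barrier and vanishes only in the limit of an infinitely wide one. A minimal Python sketch for a flat barrier:

import math

def wkb_transmission(V, E, width):
    # WKB estimate T ~ exp(-2 * kappa * width) for a flat barrier of
    # height V and the given width (natural units, m = hbar = 1).
    if E >= V:
        return 1.0  # classically allowed: no exponential suppression
    kappa = math.sqrt(2.0 * (V - E))
    return math.exp(-2.0 * kappa * width)

# A finite forbidden region is always penetrable...
for width in (1.0, 5.0, 20.0):
    print(f"width {width:5.1f}: T ~ {wkb_transmission(2.0, 1.0, width):.3e}")
# ...but T -> 0 as width -> infinity, the analogue of the infinite
# inaccessible region from which nothing can tunnel.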
Let me summarize: Hawking's argument for why the evolution is unitary probably works, and The Reference Frame agrees with virtually all of Hawking's broader opinions, but such a solution is not much different from the observation that the dual CFT is unitary. The question why these unitary processes look like a small long-lived black hole, and how the necessary correlations are created, remains mostly unanswered. Hawking has lost a bet, but he seems to think that he has made the critical steps to solve the information loss puzzle. While he has given the encyclopedia of baseball to John Preskill, next time he will give him the ashes of a burned book (or the nearly thermal Hawking radiation), because John Preskill can always reconstruct the information out of them.

snail feedback (3):

reader Quantoken said...
Lubos said: Not so easily. Hawking would have to give not just the ashes, but also every little bit of debris that could be flying away, and every single photon emitted during the burning, as well as preserve the exact direction and time at which the photons flew away. You need every little bit of quantum information to reconstruct the book. Hawking might as well give Preskill a time machine to allow him to travel back to the moment right before the book was burned, or just give him the book.

reader Olias said...
Lubos, I believe there is a hidden variable contained within the end-script by Hawking. The Ashes may be a reference to cricket, rather than baseball! There is also a saying in the English language, "It's just not cricket!", which has the meaning that rules have been broken. So in the context of, say, baseball, one can state that if a player has gained an unfair advantage, by say bending the rule book, English people tend to shout: "It's just not cricket!" Read the Hawking paragraph again, and one can see a little bit of Hawking "handwaving", tinged with a famous sense of humour?

reader esc said...
Thanks for an accessible explanation of this situation. I caught part of the recent Discovery Channel program on this tonight in a bar with poorly written subtitles, and needed a quick catch-up on the current state of things in this world. I can't wait for a book to come out that is as comprehensible to a casual user of high science like myself as Gleick's Chaos was back in high school, but covering more recent events.