Archive for the ‘relativity’ Category

\vec v_A, \vec v_B; \vec v_{AB}? Read Full Post »

Well, the press is all fired up about a claim of faster-than-light neutrinos. The claim from the OPERA experiment can be found in this paper. The paper was released on September 22nd and it has already gotten 20 blog links. Not bad for a new claim. Considering that the news organizations are happily bashing special relativity, one can always rely on XKCD to spin it correctly. Now more to the point: the early arrival time is claimed to be 60 nanoseconds. The distance between the emitter and the observer is claimed to be known to about 20 cm, certified by various National Standards bodies. A whole bunch of different systematic errors are estimated and added in quadrature, not to mention that they need satellite relays to match various timings. 60 nanoseconds is about the same as 20 meters of uncertainty (just multiply by the speed of light), and they claim this to be due to both statistical errors and systematics. The statistical error is from a likelihood fit. The systematic error is irreducible and in a certain sense it is the best guess for what the number actually is. They did a blind analysis: this means that the data is kept in the dark until all calibrations have been made, and only then is the measured number revealed. My first reaction is that it could have been worse. It is a very complicated measurement. Notice that if we assume all systematic errors in table 2 are aligned, we get a systematic error that can be three times as big. It is dominated by what they call the BCT calibration. The errors are added in quadrature assuming that they are completely uncorrelated, but it is unclear if that is so. But the fact that one error dominates so much means that if they got that one wrong by a factor of 2 or 3 (also typical for systematic errors), the result loses some of its significance. My best guess right now is that there is a systematic error that was not taken into account: this does not mean that the people who run the experiment are not smart, it’s just that there are too many places where a few extra nanoseconds could have sneaked in. It should take a while for this to get sorted out. You can also check Matt Strassler’s blog and Lubos Motl’s blog for more discussion. Needless to say, astrophysical data from SN1987a point to neutrinos behaving just fine, and they have a much longer baseline. I have heard claims that the effect must depend on the energy of the neutrinos. This can be checked directly: if I were running the experiment, I would repeat it with lower-energy neutrinos (for which we have independent data) and see if the effect goes away. Read Full Post »

We now have a few working examples of a microscopic theory of quantum gravity. All come with specific boundary conditions (like any other equation in physics or mathematics), but otherwise full background independence. In particular, all those theories include quantum black holes, and we can ask all kinds of puzzling questions about those fascinating objects. Starting with: what exactly is a black hole? Read Full Post »

Suppose you want to solve a linear partial differential equation of the form O \psi(x) = j(x), which determines some quantity \psi(x) in terms of its source j(x). Here x could stand for possibly many variables, and the differential operator O can be pretty much anything. This is a very general type of problem, not even specific to physics.
An example in physics could be the Klein-Gordon equation, or with some more bells and whistles the Maxwell equations, which determine the electric and magnetic fields. Let us replace this problem with the following equivalent one. If we find a function \psi(x,s) such that \frac{\partial \psi}{\partial s} + O \psi = 0 with the initial condition \psi(x, s=0) = j(x), and assuming the regularity condition \psi(x, s=\infty) \rightarrow 0, then it is easy to see that the function \psi(x) = \int_0^\infty \psi(x,s)\, ds satisfies the original equation we set out to solve. Now, this new equation for \psi(x,s) looks kind of familiar, if we are willing to overlook a few details. If we wish, we can think about \psi(x,s) as a time-dependent wave function, with the parameter s playing the role of time. The equation for \psi(x,s) could then be interpreted as a Schrödinger equation, with the original operator O playing the role of the Hamiltonian. We are ignoring a few issues to do with convergence, analytic continuation, and the related fact that the Schrödinger equation is complex while the one we are discussing is not. Never mind, these are subtleties which need to be considered at a later stage. The point is that we can now use any technique we learned in quantum mechanics to solve the original equation – path integral, canonical quantization, you name it. We can talk about the states |x\rangle and the Hilbert space they form, Fourier transform to get another basis for that Hilbert space, even discuss “time” evolution (that is, the dependence of various states on the auxiliary parameter s). We can get the state \psi(x,s) by summing over all paths of a “particle” with an appropriate worldline action and boundary conditions. Depending on the problem, we may be interested in various (differential) operators acting on \psi(x,s), and they of course do not commute, resulting in uncertainty relations. You get the picture. This technique is sometimes called first quantization, the Schwinger proper-time method, or the heat kernel expansion. Whatever you call it, it has a priori nothing to do with quantum mechanics: there are no probabilities, no Planck constant, and no wavefunctions in any real sense. At this point we may be discussing the financial markets, population dynamics of bacteria, or simply classical field theory. On a second pass, we can apply this idea to linear fields, generating solutions to various linear differential equations. Some of those equations are Lorentz invariant (the Klein-Gordon, Dirac, and Maxwell equations), but they have nothing to do with quantum mechanics, despite originally being referred to as “relativistic wave equations”. Once we add spin to the game, we start having the fascinating structures of (worldline) fermions and supersymmetry (not to be confused with spacetime fermions and supersymmetry), and we are also in good shape to make the leap from classical field theory to classical string theory. Maybe I’ll get to that sometime… Read Full Post »
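As a concrete illustration of the proper-time trick described above, here is a minimal numerical sketch (not from the original post; the finite-difference operator O = -d²/dx² + m², the grid, and the Gaussian source are illustrative assumptions). It solves O ψ = j once by a direct linear solve and once by evolving in the auxiliary parameter s and integrating over it:

```python
import numpy as np

# Solve O psi = j for O = -d^2/dx^2 + m^2 on a 1D grid, two ways:
#   (a) direct linear solve;
#   (b) Schwinger proper time: evolve d(psi)/ds = -O psi from psi(s=0) = j
#       and accumulate psi(x) = integral_0^infinity psi(x, s) ds.
nx, L, m = 100, 10.0, 1.0
dx = L / (nx + 1)
x = np.linspace(dx, L - dx, nx)            # interior points, Dirichlet boundaries

O = (2 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)) / dx**2 + m**2 * np.eye(nx)
j = np.exp(-(x - L / 2)**2)                # a localized source

psi_direct = np.linalg.solve(O, j)         # (a)

# (b) exact step of the s-evolution via the eigendecomposition of the symmetric O
ds, s_max = 0.02, 40.0
w, V = np.linalg.eigh(O)
step = V @ np.diag(np.exp(-ds * w)) @ V.T
psi_s = j.copy()
psi_sum = 0.5 * ds * psi_s                 # trapezoid rule in s
for _ in range(int(s_max / ds)):
    psi_s = step @ psi_s
    psi_sum += ds * psi_s

print(np.max(np.abs(psi_sum - psi_direct)))   # small: the two routes agree
```

Because this O is positive definite, the s-evolution decays and the integral converges, which is the numerical counterpart of the regularity condition ψ(x, s → ∞) → 0 assumed above.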
In our previous episodes we have discussed the notion of length and time. Now it’s time to start writing some equations. You might have noticed that the title of the post has the letter c prominently displayed. In physics, letters usually stand for variables or constants in a given situation. The letter ‘a’ usually stands for acceleration, ‘F’ for force, ‘E’ for energy or electric field, ‘P’ for pressure, ‘V’ for volume, and you might have noticed that there is a pattern of naming variables in a mnemonic way after the initial of the word you are describing. If you were doing a physics alphabet, you would start with ‘a’ is for acceleration, ‘b’ is for belocity (the number of b’s you can type on your keyboard per second, not really a physics term), and ‘c’ is for cookie (you know the song). Incidentally, even though physics does not happen without coffee, cookies are also an important part of the activities that take place in a physics department. Cookie time is a time to get together and catch up on what’s going on, and of course free cookies are a must. Now, back to the letter ‘c’. It stands for the speed of light, so this post will be about the special theory of relativity. This is one of the cornerstones of modern physics. WARNING: LONG POST WITH EQUATIONS, jump to the next red piece of text for some conclusions if you want to skip the argument. Read Full Post »
I am always unsure when I need to write the Schrödinger equation: do I write $\partial / \partial t$ or $d/dt$? I suppose it depends on the space in which it is considered. How?

1 Answer

The most general Schrödinger equation has total derivatives $$ i\hbar \frac{d}{dt}|\psi\rangle = \hat H |\psi\rangle $$ because the state vector $|\psi\rangle$ only depends on one variable, $t$. It's a complicated object that knows about the probability of anything in the given state, but this is hidden "inside" the state vector. However, if you rewrite the state vector in a given representation, e.g. as $\psi(t,x,y,z,X,Y,Z)$ for the wave function of two particles, then the dependence on $x,y,z,X,Y,Z$, the coordinates of the two particles, is put on an equal footing with the $t$-dependence, and therefore the $t$-derivatives have to be written as partial ones, $\partial/\partial t$, to emphasize that $x,y,z,X,Y,Z$ are kept fixed during the differentiation: $$ i\hbar \frac{\partial}{\partial t}\psi(t,x,y,z,X,Y,Z) = \hat H \psi(t,x,y,z,X,Y,Z) $$ where the Hamiltonian contains things like the kinetic energy of the first particle $$ \hat H = \dots -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right)+\dots $$ and similarly the kinetic energy of the second particle $$ \hat H = \dots -\frac{\hbar^2}{2M} \left( \frac{\partial^2}{\partial X^2} + \frac{\partial^2}{\partial Y^2} + \frac{\partial^2}{\partial Z^2} \right)+\dots $$ Note that there are partial derivatives everywhere because $\psi$ is now not a "general state vector" whose information is compactified; it is a complex-valued function of many variables.

Well, that is what I would have said. But both in my course and in the Oxford quantum course $\partial$ is used instead of $d$ even in "the most general Schrödinger equation"... So I am still not convinced. – Isaac Jan 14 '12 at 10:20
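As a small numerical illustration of this point (a sketch with an arbitrary two-level Hamiltonian and ħ = 1, not part of the original answer): the abstract ket |ψ(t)⟩ is a plain vector-valued function of the single variable t, so its time derivative is an ordinary derivative; spatial partial derivatives only enter once a representation such as ψ(t, x) is chosen.

```python
import numpy as np
from scipy.linalg import expm

# Abstract state vector |psi(t)>: a vector depending only on t, so i d|psi>/dt = H|psi>
# involves an ordinary derivative (hbar = 1, toy two-level Hamiltonian).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)

def psi(t):
    return expm(-1j * H * t) @ psi0

# Finite-difference d|psi>/dt versus -i H |psi(t)>: they agree; no other variable appears.
t, dt = 0.7, 1e-6
dpsi_dt = (psi(t + dt) - psi(t - dt)) / (2 * dt)
print(np.allclose(dpsi_dt, -1j * H @ psi(t), atol=1e-6))   # True

# Only after choosing a representation, e.g. psi(t, x) on a spatial grid, does H contain
# x-derivatives and the t-derivative become a partial derivative at fixed x.
```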
Microscopic reversibility

From Wikipedia, the free encyclopedia

The principle of microscopic reversibility in physics and chemistry is twofold:

• First, it states that the microscopic detailed dynamics of particles and fields is time-reversible because the microscopic equations of motion are symmetric with respect to inversion in time (T-symmetry);

• Second, it relates to the statistical description of the kinetics of macroscopic or mesoscopic systems as an ensemble of elementary processes: collisions, elementary transitions or reactions. For these processes, the consequence of the microscopic T-symmetry is: Corresponding to every individual process there is a reverse process, and in a state of equilibrium the average rate of every process is equal to the average rate of its reverse process.[1]

History of microscopic reversibility[edit]

The idea of microscopic reversibility was born together with physical kinetics. In 1872, Ludwig Boltzmann represented the kinetics of gases as a statistical ensemble of elementary collisions.[2] The equations of mechanics are reversible in time, hence the reverse collisions obey the same laws. This reversibility of collisions is the first example of microreversibility. According to Boltzmann, this microreversibility implies the principle of detailed balance for collisions: in the equilibrium ensemble all collisions are equilibrated by their reverse collisions.[2] These ideas of Boltzmann were analyzed in detail and generalized by Richard C. Tolman.[3]

In chemistry, J. H. van't Hoff (1884)[4] came up with the idea that equilibrium has a dynamical nature and is the result of the balance between the forward and backward reaction rates. He did not study reaction mechanisms with many elementary reactions and did not formulate the principle of detailed balance for complex reactions. In 1901, Rudolf Wegscheider introduced the principle of detailed balance for complex chemical reactions.[5] He found that for a complex reaction the principle of detailed balance implies important and non-trivial relations between the reaction rate constants of the different reactions. In particular, he demonstrated that irreversible reaction cycles are impossible, and that for reversible cycles the product of the rate constants of the forward reactions (in the "clockwise" direction) is equal to the product of the rate constants of the reverse reactions (in the "anticlockwise" direction). Lars Onsager (1931) used these relations in his well-known work,[6] without direct citation but with the following remark: "Here, however, the chemists are accustomed to impose a very interesting additional restriction, namely: when the equilibrium is reached each individual reaction must balance itself. They require that the transition A\to B must take place just as frequently as the reverse transition B\to A etc."

The quantum theory of emission and absorption developed by Albert Einstein (1916, 1917)[7] gives an example of the application of microreversibility and detailed balance to the development of a new branch of kinetic theory. Sometimes the principle of detailed balance is formulated in the narrow sense, for chemical reactions only,[8] but in the history of physics it has had broader use: it was invented for collisions, and used for emission and absorption of quanta, for transport processes,[9] and for many other phenomena. In its modern form, the principle of microreversibility was published by Lewis (1925).[1] In the classical textbooks,[3][10] the full theory and many examples of applications are presented.
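Wegscheider's cyclic condition and detailed balance are easy to check numerically. Below is a minimal sketch (the three-species cycle A ⇌ B ⇌ C ⇌ A and the rate constants are illustrative choices, not taken from the references): the forward constants are chosen so that their product around the cycle equals the product of the reverse ones, and the stationary state of the resulting master equation then balances every elementary step against its reverse.

```python
import numpy as np

# Reversible cycle A <-> B <-> C <-> A.
# Wegscheider's identity k1f*k2f*k3f == k1r*k2r*k3r is required for detailed balance.
k1f, k1r = 2.0, 1.0    # A -> B, B -> A
k2f, k2r = 3.0, 1.5    # B -> C, C -> B
k3f, k3r = 0.5, 2.0    # C -> A, A -> C
assert np.isclose(k1f * k2f * k3f, k1r * k2r * k3r)   # 3.0 == 3.0

# Generator of the master equation dp/dt = W @ p (columns sum to zero).
W = np.array([
    [-(k1f + k3r),  k1r,           k3f         ],
    [  k1f,        -(k1r + k2f),   k2r         ],
    [  k3r,          k2f,         -(k2r + k3f) ],
])

# Stationary distribution: the null vector of W, normalised.
w, v = np.linalg.eig(W)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# Detailed balance: every forward flux equals its reverse flux.
print(np.isclose(k1f * p[0], k1r * p[1]))   # A <-> B: True
print(np.isclose(k2f * p[1], k2r * p[2]))   # B <-> C: True
print(np.isclose(k3f * p[2], k3r * p[0]))   # C <-> A: True
```

If the cyclic identity is violated (say, by doubling k1f alone), a stationary state still exists, but it carries a steady circulating flux and the three detailed-balance checks fail.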
Time-reversibility of dynamics[edit]

The Newton and the Schrödinger equations, in the absence of macroscopic magnetic fields and in an inertial frame of reference, are T-invariant: if X(t) is a solution then X(-t) is also a solution (here X is the vector of all dynamic variables, including all the coordinates of the particles for the Newton equations and the wave function in configuration space for the Schrödinger equation). There are two sources of violation of this rule:

• First, if the dynamics depend on a pseudovector like the magnetic field or the angular speed of rotation in a rotating frame, then the T-symmetry does not hold.

• Second, in the microphysics of the weak interaction the T-symmetry may be violated and only the combined CPT symmetry holds.

Macroscopic consequences of the time-reversibility of dynamics[edit]

In physics and chemistry, there are two main macroscopic consequences of the time-reversibility of microscopic dynamics: the principle of detailed balance and the Onsager reciprocal relations.

The statistical description of a macroscopic process as an ensemble of elementary indivisible events (collisions) was invented by L. Boltzmann and formalised in the Boltzmann equation. He discovered that the time-reversibility of Newtonian dynamics leads to detailed balance for collisions: in equilibrium, collisions are equilibrated by their reverse collisions. This principle allowed Boltzmann to deduce a simple and elegant formula for entropy production and to prove his famous H-theorem.[2] Microscopic reversibility was thus used to prove macroscopic irreversibility and the convergence of ensembles of molecules to their thermodynamic equilibria.

Another macroscopic consequence of microscopic reversibility is the symmetry of kinetic coefficients, the so-called reciprocal relations. The reciprocal relations were discovered in the 19th century by Thomson and Helmholtz for some phenomena, but the general theory was proposed by Lars Onsager in 1931.[6] He also found the connection between the reciprocal relations and detailed balance. For the equations of the law of mass action, the reciprocal relations appear in the linear approximation near equilibrium as a consequence of the detailed balance conditions. According to the reciprocal relations, damped oscillations in homogeneous closed systems near thermodynamic equilibrium are impossible, because the spectrum of symmetric operators is real. Therefore, the relaxation to equilibrium in such a system is monotone if it is sufficiently close to the equilibrium.

1. ^ a b Lewis, G. N. (1925). A new principle of equilibrium. PNAS, March 1, 1925, vol. 11, no. 3, 179–183.
2. ^ a b c Boltzmann, L. (1964). Lectures on Gas Theory. Berkeley, CA, USA: U. of California Press.
3. ^ a b Tolman, R. C. (1938). The Principles of Statistical Mechanics. Oxford University Press, London, UK.
4. ^ Van't Hoff, J. H. (1884). Etudes de dynamique chimique. Frederic Muller, Amsterdam.
5. ^ Wegscheider, R. (1901). Über simultane Gleichgewichte und die Beziehungen zwischen Thermodynamik und Reactionskinetik homogener Systeme. Monatshefte für Chemie / Chemical Monthly 32(8), 849–906.
6. ^ a b Onsager, L. (1931). Reciprocal relations in irreversible processes. I. Phys. Rev. 37, 405–426.
7. ^ Einstein, A. (1917). Zur Quantentheorie der Strahlung [= On the quantum theory of radiation]. Physikalische Zeitschrift 18 (1917), 121–128. English translation: D. ter Haar (1967): The Old Quantum Theory. Pergamon Press, pp. 167–183.
8. ^ Principle of microscopic reversibility.
Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2012.
9. ^ Gorban, A. N., Sargsyan, H. P., and Wahab, H. A. (2011). Quasichemical Models of Multicomponent Nonlinear Diffusion. Mathematical Modelling of Natural Phenomena, Volume 6, Issue 05, 184–262.
10. ^ Lifshitz, E. M.; Pitaevskii, L. P. (1981). Physical Kinetics. London: Pergamon. ISBN 0-08-026480-8, ISBN 0-7506-2635-6. Vol. 10 of the Course of Theoretical Physics (3rd ed.).
Inorganic Chemistry/Chemical Bonding/Orbital hybridization

(Figures: four sp3 orbitals; three sp2 orbitals.)

In chemistry, hybridisation (or hybridization) is the concept of mixing atomic orbitals into new hybrid orbitals suitable for the pairing of electrons to form chemical bonds in valence bond theory. Hybrid orbitals are very useful in the explanation of molecular geometry and atomic bonding properties.[1]

Atomic orbitalsEdit

Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridisation, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only atom for which an exact analytic solution to its Schrödinger equation is known. In heavier atoms, like carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited-state orbitals for hydrogen. Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each C–H bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N[s + (√3)pσ], where N is a normalization constant (here 1/2) and pσ is a p orbital directed along the C–H axis to form a sigma bond. The p-to-s ratio (denoted λ in general) is √3 in this example, and N²λ² = 3/4 is the p character, or the weight of the p component.

In general, any two hybrid orbitals on the same atom must be mutually orthogonal. For an atom with s and p orbitals forming hybrids hi and hj with included angle θij, the orthogonality condition implies the relation 1 + λiλj cos(θij) = 0. The p-to-s ratio for hybrid i is λi², and for hybrid j it is λj². The bond directed towards a more electronegative substituent tends to have higher p character, as stated in Bent's rule. In the special case of equivalent hybrids on the same atom, again with included angle θ, the equation reduces to just 1 + λ² cos(θ) = 0. For example, BH3 has a trigonal planar geometry, three 120° bond angles, and three equivalent hybrids about the boron atom, so 1 + λ² cos(θ) = 0 becomes 1 + λ² cos(120°) = 0, giving λ² = 2 for the p-to-s ratio. In other words, sp2 hybrids. Hybridisation schemes can also be used to represent the electron configuration in transition metals. For example, the permanganate ion (MnO4−) has sd3 hybridisation with orbitals that are 25% s and 75% d.

Types of hybridisationEdit

The ground-state configuration of carbon,

C: 1s ↑↓ | 2s ↑↓ | 2p ↑ | 2p ↑ | 2p

is promoted to the excited configuration

C*: 1s ↑↓ | 2s ↑ | 2p ↑ | 2p ↑ | 2p ↑

As the additional bond energy more than compensates for the excitation, the formation of four C–H bonds is energetically favoured. Mixing the 2s and 2p orbitals gives

C*: 1s ↑↓ | sp3 ↑ | sp3 ↑ | sp3 ↑ | sp3 ↑

In CH4, four sp3 hybrid orbitals are overlapped by hydrogen's 1s orbital, yielding four σ (sigma) bonds (that is, four single covalent bonds) of the same length and strength.

(Figure: ethene Lewis structure. Each C is bonded to two hydrogen atoms, with one double bond between the carbons.)

For ethylene, the corresponding configuration is

C*: 1s ↑↓ | sp2 ↑ | sp2 ↑ | sp2 ↑ | 2p ↑

forming a total of three sp2 orbitals and one remaining p orbital. In ethylene the two carbon atoms form a σ bond by overlapping two sp2 orbitals, and each carbon atom forms two covalent bonds with hydrogen by s–sp2 overlap, all with 120° angles. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap.
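The sp3 combination quoted above, N[s + (√3)pσ] with N = 1/2, can be checked directly. Here is a small sketch (the four tetrahedral directions are the usual textbook choice, not something specified on this page) verifying that the resulting hybrids are normalized, mutually orthogonal, and carry 75% p character:

```python
import numpy as np

# sp3 hybrids written in the orthonormal basis (s, px, py, pz):
# h_k = N * (s + sqrt(3) * p . n_k), with N = 1/2 and n_k the four tetrahedral directions.
n = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]]) / np.sqrt(3)

N = 0.5
hybrids = N * np.hstack([np.ones((4, 1)), np.sqrt(3) * n])   # one hybrid per row

overlap = hybrids @ hybrids.T
print(np.allclose(overlap, np.eye(4)))      # True: normalized and mutually orthogonal
print((hybrids[:, 1:]**2).sum(axis=1))      # p weight of each hybrid: 0.75 (75% p character)
```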
The hydrogen–carbon bonds are all of equal strength and length, which agrees with experimental data. The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model, the 2s orbital mixes with only one of the three p orbitals,

C*: 1s ↑↓ | sp ↑ | sp ↑ | 2p ↑ | 2p ↑

resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond, and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a sigma s–sp overlap at 180° angles.

Hybridisation and molecule shapeEdit

Hybridisation helps to explain molecule shape.

Main group:
• Linear (180°): sp hybridisation (e.g., CO2)
• Trigonal planar (120°): sp2 hybridisation (e.g., BCl3)
• Tetrahedral (109.5°): sp3 hybridisation (e.g., CCl4)

Transition metal:[5][6]
• Bent (90°): sd hybridisation (e.g., VO2+)
• Trigonal pyramidal (90°): sd2 hybridisation (e.g., CrO3)
• Tetrahedral (109.5°): sd3 hybridisation (e.g., MnO4−)
• Square pyramidal (73°, 123°): sd4 hybridisation (e.g., Ta(CH3)5)
• Trigonal prismatic (63.5°, 116.5°): sd5 hybridisation (e.g., W(CH3)6)

Hybridisation of hypervalent moleculesEdit

Valence shell expansionEdit

Classification (main group | transition metal):
• Linear (180°): sp hybridisation (e.g., Ag(NH3)2+)
• Trigonal planar (120°): sp2 hybridisation (e.g., Cu(CN)32−)
• Tetrahedral (109.5°): sp3 hybridisation (e.g., Ni(CO)4)
• Square planar (90°): dsp2 hybridisation (e.g., PtCl42−)
• Trigonal bipyramidal (90°, 120°): sp3d hybridisation (e.g., PCl5); for transition metals, trigonal bipyramidal or square pyramidal[7]
• Octahedral (90°): sp3d2 hybridisation (e.g., SF6); for transition metals, d2sp3 hybridisation (e.g., Mo(CO)6)
• Pentagonal bipyramidal (90°, 72°): sp3d3 hybridisation (e.g., IF7); for transition metals, pentagonal bipyramidal, capped octahedral or capped trigonal prismatic[8][6]

Contrary evidenceEdit

For transition metal centers, the d and s orbitals are the primary valence orbitals, which are only weakly supplemented by the p orbitals.[11] The question of whether the p orbitals actually participate in bonding has not been definitively resolved, but all studies indicate they play a minor role. In light of computational chemistry, a better treatment would be to invoke sigma resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. For main group compounds, all resonance structures must obey the octet (8) rule. For transition metal compounds, the resonance structures that obey the duodectet (12) rule[12] suffice to describe bonding, with optional inclusion of dmspn resonance structures.

Classification (main group | transition metal):
• AX2: Linear (180°)
• AX3: Trigonal planar (120°)
• AX4: Tetrahedral (109.5°), Square planar (90°), Square pyramidal[7]
• AX6: Octahedral (90°), Octahedral (90°), Capped octahedral or Capped trigonal prismatic[8][6]

Isovalent hybridisationEdit

Although ideal hybrid orbitals can be useful, in reality most bonds require orbitals of intermediate character, analogous to intermediate ionic-covalent character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of bond formation when the molecular geometry deviates from ideal bond angles. The amount of p character is not restricted to integer values; i.e., hybridisations like sp2.5 are also readily described.
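The orthogonality relation 1 + λ² cos(θ) = 0 quoted earlier can also be turned into a few lines of code. The sketch below (an illustration, not part of the Wikibooks text) recovers the ideal angles for sp, sp2, sp3 and a non-integer hybrid such as sp2.5, and inverts the relation for water's 104.5° angle, anticipating the sp4 description in the next section:

```python
import math

def angle_from_lambda_sq(lam_sq):
    """Ideal inter-hybrid angle for equivalent hybrids: 1 + lam^2 * cos(theta) = 0."""
    return math.degrees(math.acos(-1.0 / lam_sq))

def lambda_sq_from_angle(theta_deg):
    """Invert the same relation to get the p-to-s ratio from an observed angle."""
    return -1.0 / math.cos(math.radians(theta_deg))

for lam_sq in (1, 2, 2.5, 3):          # sp, sp2, sp2.5, sp3
    print(f"sp{lam_sq}: {angle_from_lambda_sq(lam_sq):.2f} deg")
    # 180.00, 120.00, 113.58, 109.47

lam_sq = lambda_sq_from_angle(104.5)   # water's H-O-H angle
print(f"water bonding hybrids: lambda^2 = {lam_sq:.2f}, "
      f"s character = {100 / (1 + lam_sq):.0f}%")   # ~4 and ~20%, i.e. sp4
```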
Molecules with lone pairsEdit

For molecules with lone pairs, the bonding orbitals are hybrids with intermediate s and p character. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4,[13] which gives an interorbital angle of 104.5°. This means that they have 20% s character and 80% p character; it does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen, since the 2p subshell of oxygen only contains three p orbitals. The shapes of molecules with lone pairs are:

• Trigonal pyramidal: three isovalent bonding hybrids (e.g., NH3)
• Bent: two isovalent bonding hybrids (e.g., SO2, H2O)

In such cases, there are two mathematically equivalent ways of representing lone pairs. They can be represented with orbitals of sigma and pi symmetry, similar to molecular orbital theory, or with equivalent orbitals, similar to VSEPR theory.

Hypervalent moleculesEdit

For hypervalent molecules with lone pairs, the bonding scheme can be split into a hypervalent component and a component consisting of isovalent bonding hybrids. The hypervalent component consists of resonating bonds utilizing p orbitals. The table below shows how each shape is related to the two components and their respective descriptions.

Number of isovalent bonding hybrids (marked in red): Two | One | -
Hypervalent component: Linear axis (one p orbital) | Square planar equator (two p orbitals) | Pentagonal planar equator (two p orbitals)

Photoelectron spectraEdit

One misconception concerning orbital hybridisation is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule, which implies resonance in valence bond theory. For example, in methane, the ionized states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state.[14] The difference in energy between each ionized state and the ground state is an ionization energy, which yields two values, in agreement with experiment.

Hybridisation theory vs. MO theoryEdit

Hybridisation theory is an integral part of organic chemistry and is in general discussed together with molecular orbital theory in advanced organic chemistry textbooks, although for different reasons. One textbook notes that for drawing reaction mechanisms a classical bonding picture is sometimes needed, with two atoms sharing two electrons.[15] It also comments that predicting bond angles in methane with MO theory is not straightforward. Another textbook treats hybridisation theory when explaining bonding in alkenes,[16] and a third[17] uses MO theory to explain bonding in hydrogen but hybridisation theory for methane.

1. "It is important to recognize that the VSEPR model provides an approach to bonding and geometry based on the Pauli principle that is completely independent of the valence bond (VB) theory or of any orbital description of bonding." Gillespie, R. J. J. Chem. Educ. 2004, 81, 298–304.
4. McMurray, J. (1995). Chemistry, Annotated Instructors Edition (4th ed.). Prentice Hall. p. 272. ISBN 0-13-140221-8.
5. Weinhold, Frank; Landis, Clark R. (2005). Valency and Bonding: A Natural Bond Orbital Donor-Acceptor Perspective. Cambridge: Cambridge University Press. pp.
381–383. ISBN 0-521-83128-8.
6. a b c Kaupp, Martin (2001). ""Non-VSEPR" Structures and Bonding in d(0) Systems". Angew. Chem. Int. Ed. Engl. 40 (1): 3534–3565. doi:10.1002/1521-3773(20011001)40:19<3534::AID-ANIE3534>3.0.CO;2-#.
11. Frenking, Gernot; Shaik, Sason, eds. (May 2014). "Chapter 7: Chemical Bonding in Transition Metal Compounds". The Chemical Bond: Chemical Bonding Across the Periodic Table. Wiley-VCH. ISBN 978-3-527-33315-8.
13. Frenking, Gernot; Shaik, Sason, eds. (2014). "Chapter 3: The NBO View of Chemical Bonding". The Chemical Bond: Fundamental Aspects of Chemical Bonding. John Wiley & Sons. ISBN 9783527664719.
15. Clayden, Jonathan; Greeves, Nick; Warren, Stuart; Wothers, Peter (2001). Organic Chemistry. ISBN 0-19-850346-6.
17. Bruice, Paula Yurkanis (2001). Organic Chemistry (3rd ed.). ISBN 0-13-017858-6.
Talk:Path integral formulation

From Wikipedia, the free encyclopedia

WikiProject Physics (Rated C-class, High-importance)

Evolution operator[edit]

I believe the evolution operator is:

Equivalence of formulations[edit]

I believe Dyson was the one that showed the approaches to be equivalent. JeffBobFrank 01:21, 18 Feb 2004 (UTC)

Last paragraph[edit]

The last paragraph says some contentious things. The sum-over-histories method is hardly "unpopular". The "sum-over-histories interpretation", however - that is, the attempt to elevate the sum-over-histories formalism into a physical ontology - is indeed little-known; I don't think I've ever seen it outside that paper coauthored by Sorkin. Let me quote the paper's last paragraph: "... the sum-over-histories formulation goes a long way toward taking the 'mystery' out of quantum mechanics, or at least reducing it to the mystery inherent in the notion of probability itself. No doubt that mystery is enhanced somewhat by the presence of non-positive amplitudes and references to two-way paths, but the fundamental idea... remains the same..." In my opinion this indicates the sophistical character of this sum-over-histories "interpretation". I'm reminded of a cartoon: a physicist stands at a blackboard, in front of a crowd of skeptical colleagues. In the middle step of his derivation, he has written, THEN A MIRACLE OCCURS. "See? It's all just probabilities. Of course, some of them are negative probabilities, a concept which makes no sense under either the frequentist or the subjectivist interpretation of the concept of probability; but that just shows that further research is required..." There is something to the claim that "[this is] the only form of the theory which can explain [the EPR] paradox without breaking locality". The individual paths appearing in the formalism are indeed built purely from ontologically local entities (point particles, local field values), something which is not true in any formalism which countenances, say, entangled quantum states. Nonetheless, the paper by Sinha and Sorkin (in its concluding analysis) in fact expresses some doubt as to whether sum-over-histories is local after all, given the "global character" of how the final probabilities are calculated. Wikipedia is hardly the place in which theoretical debates of this sort should be adjudicated, but I hope it's clear why I find that last paragraph somewhat problematic. I also want to emphasize again, for absolute clarity, that the sum-over-histories method is not being criticised here, because it is only an algorithm. It's the attempt to turn it into an ontology (an "interpretation") which is deeply problematic. I leave it to more experienced Wikipedians to decide what the just solution here is. Mporter 21 Feb 2004, 5.55pm AEST

As a sidelight, apropos your comments about negative probabilities, you may enjoy Feynman, "Negative probability", in Quantum Implications, eds. Hiley and Peat, where he makes a case for allowing them, as long as such an event is not measurable/verifiable. Like having negative dollars as you add up your bills, it may be calculationally allowed as long as certain restrictions on the state are true. GangofOne 07:04, 10 Jun 2005 (UTC)

Merge with "Functional integral"[edit]

Should this article actually be merged with Functional integral (QFT)?
While it is in principle the same subject, that article is both very specific in its application to quantum field theory (as opposed to, say, nonrelativistic single-particle QM), and is also very technical. This seems to be more the place for an introduction to the path-integral formulation. (If we do want to merge the articles, I say the other one should come here, and not the reverse, since this article has the more general title.) And I'd rather do it sooner than later. --Matt McIrvin 04:06, 27 Sep 2004 (UTC)

Well, I went ahead and did it... --Matt McIrvin 06:13, 27 Sep 2004 (UTC)

The material formerly in Functional integral (QFT) is now incorporated into a section here, and I've tried to write some introductory matter to make the symbols a little clearer, though the heavily mathematical part further down still needs a lot more explanatory text. I've put in an introduction and reorganized the whole page into sections and subsections; my new section on single-particle mechanics needs more development but is a start. Diagrams would be nice. I've kept the controversial section on QM interpretation at the very end; I'll let other people argue over that for now. --Matt McIrvin 07:15, 27 Sep 2004 (UTC)

Attempted to NPOVify the interpretation section. --Matt McIrvin 05:35, 2 Oct 2004 (UTC)

Is really correct? Wouldn't it rather be like or is it with different H for each n?

The way I wrote it is perhaps not the best way of putting it; it needs to be more explicit. What I really wanted to get across is that in the integrand, is the function of time represented by a set of straight segments connecting the at times , and is actually the integral of the Lagrangian over that path. I suppose in practice it would end up being the product of the exponential for each little segment, but that form is further from the spirit of the thing. I probably should have abandoned the generic use of at that point... my mind's too fuzzy right now to make it better. --Matt McIrvin 00:27, 11 Oct 2004 (UTC)

Also each little segment would depend on and ... --Matt McIrvin 15:01, 11 Oct 2004 (UTC)

This is not a necessity; the limit inherent to integration would take care of this as , see Riemann sums. I have searched the net but didn't find anything better than stated here, so I have tried some thoughts of my own. Starting from the approach I came up with where varies over all paths in spacetime starting from and ending in , denoting the energy four-vector and is an appropriate measure on the set of possible paths. With the paths approximated by segments of straight lines we are likely to end up with the official thing, but with the additional benefit of a clearer understanding. Alas, I am stuck on as well as on , especially in the case where we have zero rest mass. Can anyone do better please? 20:05, 20 Oct 2004 (UTC)

Hidden time[edit]

Pavel V. Kurakin (Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, me). My idea is that many-paths are physically real, but in a sub-quantum (not observed by us) world. Many-paths, amplified by the transactional interpretation of quantum mechanics (TIQM) by John Cramer, lead me to a third new idea (after the 1st: many-paths, and the 2nd: transactions). Together, the three constitute, I believe, an original theory, allowing one to explain quantum superposition of states, state vector reduction and non-local correlations like EPR (see quantum entanglement). Shortly speaking, signals move in vacuum in so-called 'hidden time', which is not equivalent to our physical time.
They move between all sources, which are to emit particles, and all (possible) detectors. In the simplest case we have one source and a set of possible detectors. How will a particle choose one of many detectors? It explores the space and counts how much it likes different detectors, in full accordance with Feynman many-paths. While it explores (many copies of that particle travel and explore), physical time does not tick. Finally the source prefers some definite detector. Copies of the particle (more strictly, signals) are killed, all but one. This one ultimately comes to the detector we physically see our particle at. How long can signals explore the space? Infinite time! :) -- in 'hidden' time. Physical time does tick (at the detecting point) only when the 'ultimate decision signal' comes to that detector. More accurate arguments were published this year by the Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, in my preprint. I would be happy to know any criticism :)

• The article is very good, nice references and all that, although I think you have "missed" the semi-classical expansions for the Feynman path integral; could someone provide any reference to this? ..thanks. —The preceding unsigned comment was added by (talkcontribs) 21:49, 9 August 2006 (UTC2)

Um, that's a funny idea, similar to a crazy idea of mine (which, probably, someone else had already too). However I do not like it. I am not a physicist, however, and I am referring to your layman summary, not your paper, so please forgive me if I misunderstood. What I don't like is: 1. infinite zero time is essentially the same as, or worse than, non-locality. Non-local theories exist [the easiest being "Everything is wave function and it's non-local"], and yours just requires a giant effort from the poor little particle. 2. You seem to have two kinds of mass in your theory, particles and detectors. Fault shared with Copenhagen ("why me worry, the measurement device is classical"). -- (talk) 09:18, 18 May 2009 (UTC)

A lot of this stuff is way over my head, but the one thing I thought I understood looks wrong in this article... under the section "The path integral and the partition function", why does it say: shouldn't it be: ? At the very least to make the argument of the exponential unitless? Ed Sanville 16:52, 16 August 2005 (UTC)

Right you are. Fixed. GangofOne 04:59, 17 August 2005 (UTC)

Not always... 'Edsanville': I think the user could be using natural units for Planck's constant or other conventions. -- 22:23, 16 February 2007 (UTC)

What is the name of that interpretation?[edit]

Hey all, one particular section of the article is a death trap with no leads to further information. Does anybody know the name of the interpretation referenced in the path integral in quantum-mechanical interpretation section? Terms, phrases, some scientific history, anything would be helpful. The section links to another article on the interpretations of quantum mechanics, however there seems to be no segment there that seems a continuation. Thank you, -- kanzure 14:11, 28 July 2006 (UTC)

It may be: Sukanya Sinha and Rafael D. Sorkin, "A Sum-over-histories Account of an EPR(B) Experiment", Found. of Phys. Lett. 4:303-335 (1991). -- kanzure 14:56, 28 July 2006 (UTC)

The article links to QFT, which is a disambiguation page. However, I'm not knowledgeable enough to tell if it should be disambiguated to quantum field theory or quantum Fourier transform. Could someone please disambiguate the link?
–RHolton– 03:32, 11 November 2006 (UTC)

Chapman-Kolmogorov and Feynman[edit]

It's a curious fact that hardly any book points out a relationship between the so-called Chapman-Kolmogorov equation for continuous processes and the Feynman path integral formulation. In fact the Chapman-Kolmogorov equation in differential form is just the discretized SE or diffusion equation (imaginary time); the problem is, given the integral equation of C.-K., to obtain the differential one and hence the SE. -- 22:21, 16 February 2007 (UTC)

diffraction grating[edit]

I think we should add in this article the interpretation of diffraction grating from the view of path integral formulation. To me, it seems to be the best argument for the case of path integrals, as it effectively explains diffraction grating easily where non-path integral explanations leave much to be desired. I'm no physicist, so I hesitate to do it myself, but if no one else rises to the challenge, I suppose I can add the section when I get my next holiday. — Eric Herboso 23:54, 23 September 2007 (UTC)

Yes, you are right. Presenting it as Feynman did it with rotating arrows helps to understand it quite intuitively. See also Wikiversity:Making Sense of QM. Arjen Dijksman (talk) 21:31, 27 November 2007 (UTC)

Path of minimum action always dominates the integral?[edit]

In section The path integral and the partition function, it states: In the classical limit, , the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel. Wouldn't it be preferable to state: In the classical limit, , ...? Arjen Dijksman (talk) 21:26, 27 November 2007 (UTC)

Reality of Paths[edit]

The argument over whether the different paths are "real" or not is not really physical. In the Schrödinger equation, if you locate a particle at position x precisely and then a very short time later look for it at position y, you have an amplitude to find it anywhere in space. The first measurement localizes the particle, making its momentum infinitely uncertain. Does this mean that there is a "path" where the particle jumps from one point to another at very large speed? In the circumstances of this particular experiment it does. What if the particle is in a superposition of states at different positions which together have a small momentum? By linearity all the contributions to the enormously large jumps must wash out by superpositions. The phenomenon of wild paths contributing to the quantum mechanical amplitudes is independent of the formalism; it is a property of the theory. Whether the quantum amplitudes for each separate path should be thought of as "existing" is a hoary philosophical question, related to the interpretation debates which can go on with no end. I don't know if it's a good idea to bring them up here. Likebox (talk) 20:07, 14 May 2008 (UTC)

Did Candlin come up with Grassmann integration?[edit]

Many people reference Brezin's textbook, but it's a textbook. I found a reference to this article by Candlin in Nuovo Cimento 1956, but I do not have access to this journal, and I don't know if this is the primary source. If anyone knows, please say. Likebox (talk) 02:28, 16 May 2008 (UTC)

With regards to this, Mandelstam references Candlin, as do a couple of other people, so I think it is provisionally safe to credit him, but it would be nice to do a full literature review regarding this matter, especially since Candlin seems to have fallen silent.
Schwinger has a faux Grassmann integration in the 50s, which comes up whenever he uses his action principle with anticommuting fields, but he doesn't give a general rule for path integration in the anticommuting case; he just piddled around until he found a consistent set of formal rules for differentiating the action. Feynman has a path-oriented path integral which reproduces the statistics and is in principle equivalent to Grassmann integrals, but it's diagram/particle-path based. Brezin's account does do the whole deal, Fermi coherent states and all, but he is writing it as if it is already well-accepted folklore. Likebox (talk) 20:47, 5 June 2008 (UTC)

I finally read his paper--- it is a beautiful, complete treatment of Grassmann integration. It is strange that this person invented a classic tool and then vanished. He has no other papers that I could find; I wonder if anybody knows what happened to him? Likebox (talk) 20:48, 27 August 2008 (UTC)

David John Candlin has a page now, but I don't know any more than the sketchy details provided by the Princeton University catalog of members. Hopefully someone out there does. Likebox (talk) 21:39, 27 August 2008 (UTC)

Ah--- the theory/experiment disconnect. There is an active D.J. Candlin in experimental physics, who wrote 177 papers according to Spires, as part of large collaborations. Perhaps it's the same person. Likebox (talk) 04:19, 28 August 2008 (UTC)

Dirac Fretting about Path Integral[edit]

A comment on this page was deleted which asked, if Dirac understood a heuristic version of the path integral before Feynman, then: "Can someone explain why it was that Dirac fretted about the uncertainty principle when Feynman presented his results..." This is confusing two frets. It was Bohr who fretted about the uncertainty principle when Feynman presented the diagrams somewhere or other. Dirac fretted about unitarity. Bohr's complaint was specious, as Bohr later came to understand, but Dirac's complaint was substantive. Feynman had shown how to pass from the canonical formalism to the Lagrangian path integral formalism only in certain special cases, that is, when the Hamiltonian is quadratic in the momentum. Dirac knew that in other cases, and for a general Hamiltonian, it is difficult to define the proper generalization of the Legendre transformation which will give the right Lagrangian. His worry was that unitarity is not obvious for a given Lagrangian which is not appropriately related to a unitary Hamiltonian, and this complaint might be the reason that Dirac did not formulate a full path integral formalism. Feynman went ahead probably because he at the time didn't appreciate the severity of the problem, or else because he had a strong physical intuition about the specific cases of quantum field theories, which are always quadratic in the field momentum. Likebox (talk) 06:29, 11 June 2008 (UTC)

The pseudohistory is that Freeman Dyson showed that Feynman's path integral was equivalent to older methods. This is not accurate; Feynman showed this long before Dyson. Dyson showed that it is possible to derive Feynman diagrams from an operator expansion, which, when Feynman's path integral was unfamiliar, was the easiest way for a physicist to learn some of the new methods.
But Dyson's methods are inferior and have been replaced by the path integral. Likebox (talk) 20:49, 19 August 2008 (UTC)

I removed this comment from the article: However, if the time-sliced path integral is formulated in the phase space of the variables x and p, the measure of integration yields the properly normalized amplitude. The integral over all p produces the correct normalization factors for the Feynman integral over all x.

This statement is true, but (for the case at hand of quadratic kinetic energy) it is just as true for the x-version of the path integral, where the momenta can be integrated out. So the sentence is really just stating what the normalization choice for the path integral should be, but without motivating the choice. The choice of normalization can be motivated by formal considerations, like computing a propagator and unitarizing, but this is not very illuminating conceptually; there is a more conceptual way. The factor of sqrt(2 pi) can be naturally understood as coming from the imaginary-time stochastic evolution. The overall normalization of the path integral has a factor involving the ground state energy, and the overall scale of the integral in imaginary time shrinks or grows according to the amount of ground state energy, which can be adjusted by adding a constant to the Hamiltonian. The best way to state the condition that fixes the normalization is to demand that the ground state energy is zero, so that the ground state is invariant under path integral time evolution. When the ground state wavefunction has energy zero, the inner product of any wavefunction with the ground state is invariant in time. This inner product is the integral of psi, so that the total integral of psi is constant in time. This allows psi to be thought of as an imaginary-time probability (when the imaginary-time action is real), and the evolution is a stochastic process. With this point of view, the factors of sqrt(2 pi) are obvious--- they give the spreading Gaussian normalization for a random walk. This connection to stochastic processes is stated in Feynman and Kac, but is often obscured in modern treatments. Likebox (talk) 20:36, 10 September 2008 (UTC)

Okay--- I think I see the point of the comment--- it is pointing out that the integration measure in the x-p version is simple and universal, while in the x-version it depends on parameters in the action. This is an important point. Likebox (talk) 20:42, 10 September 2008 (UTC)

Needed improvements in derivation of path integral[edit]

This article would benefit greatly from an actual derivation of the path integral, which is not difficult.

1) Start with the matrix element of the time-ordered exponential, which is the time evolution operator.

2) Discretize time and rewrite the time-ordered exponential integral as a product of simple exponentials, exp(-i H(p_i, x_i) Delta t / hbar).

3) Insert sums of complete states |x_i><x_i| integrated over each x_i in between the exp(-i H Delta t / hbar) factors.

4) Evaluate <x_i| exp(-i H Delta t / hbar) |x_{i-1}> by expanding the exponential to first order in Delta t, and going to the momentum representation to express <x_i|x_{i-1}> = int dp/(2 pi) exp(i p (x_i - x_{i-1}) / hbar). Similarly <x_i| p^2 |x_{i-1}> = int dp/(2 pi) exp(i p (x_i - x_{i-1}) / hbar) p^2. Approximate x_i - x_{i-1} = \dot x times Delta t. Re-exponentiate before doing the integral over p. By completing the square, the p integral is Gaussian and gives an irrelevant normalization constant times exp(i Delta t L(x, \dot x) / hbar), where L is the Lagrangian.
5) Putting together all the factors at different x_i's and taking the limit Delta t -> 0, we get the path integral of exp(i \int dt L / hbar) = exp(i S / hbar).

Sorry, I don't have time to actually make these changes to the article. The reference to "Path Integrals in Quantum Theories: A Pedagogic 1st Step" is useless; this is just a lot of hand-waving. Jcline1 (talk) 21:04, 1 January 2011 (UTC)

a better picture of a path?[edit]

I noticed this picture on the Wiener process article and I think it would be a much more accurate and pedagogical representation of a path in the path integral formulation. In truth, the vast majority of paths look a lot more like this than like the current picture in the article. Kevin Baastalk 19:09, 16 March 2011 (UTC)

Goodness, they're surely even wilder than that! (talk) 01:47, 13 May 2015 (UTC)

The currently used picture is indeed very misleading. The paths must not be smooth. (talk) —Preceding undated comment added 07:07, 15 December 2015 (UTC)

First section terms[edit]

This article commits a cardinal sin in the first section of not defining any of the terms in the equations. Not too bad for physics undergrads, but useless for others looking into the topic (which is surely what Wikipedia is catering for). Some knowledgeable PhD want to sort it? (talk) 09:22, 5 October 2011 (UTC)

I agree, all variables and fields should be defined. — Preceding unsigned comment added by (talk) 21:32, 17 September 2014 (UTC)

Question about the first formula[edit]

In what is currently the first formula in the article, is \epsilon H just the product, or does it mean the amount by which H changes when time changes by \epsilon? Actually, on second thought it is pretty clear that this is just multiplication by \epsilon, and in fact dividing that first equation by epsilon gives H = p (q(t+e) - q(t))/e + L, where the term (q(t+e) - q(t))/e approximates dq/dt. Assuming q' = dq/dt, this is p q' + L as usual. Createangelos (talk) 13:14, 20 February 2012 (UTC)

Rating as petty "mid-importance"??...[edit]

Izno: you would disagree. I set the importance to "high" this time, and really couldn't care less if that's not "quite how importance works" - to hell with "rules", because I care more that Feynman rewrote quantum mechanics with the path integral formulation, which was a breakthrough in theoretical physics. The article itself says: "The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks. For this reason path integrals were used in the study of Brownian motion and diffusion a while before they were introduced in quantum mechanics." So yes IT IS a very deep and important topic reaching into other parts of physics (even where it doesn't directly apply, it has relevance): special relativity, QM, classical mechanics, and optics. F = q(E+v×B) ⇄ ∑ici 11:25, 8 June 2012 (UTC)

Regardless, you mistake the use of importance. Importance is not about how important it is to a given person or to the world; it is about how important it is to have a good-quality topic about it in the encyclopedia. Which can be informed by how important it is in the world, but that is not what directly determines it. See WP:1.0. High may be the appropriate place for it.
It's always good to ask the WikiProject, though I would suggest that you have a look at WP:WikiProject Physics/Quality Control#Importance scale... --Izno (talk) 12:16, 8 June 2012 (UTC)

As you may see from the edit history I did look there and used that to back my points up, but then reverted because it was excessive point-making on my part. F = q(E+v×B) ⇄ ∑ici 12:36, 8 June 2012 (UTC)

Mathematical foundation for the path integral is absent[edit]

One issue that this article does not seem to address, but definitely should, is the matter of whether these path integrals are even well-defined to begin with! From a strict mathematical perspective, it is of no value whatsoever that you can carry out reasonable-looking manipulations on a formula to derive useful conclusions, if you cannot even establish that the original formula is well-defined. Until it has been established that a formula is a well-defined expression for some mathematical object, one cannot even begin to prove that it also has the properties that justify the reasonable-looking manipulations, and only when that has been done can the original work rigorously amount to anything. In the case of the Feynman path integrals, there are at least two points which are problematic:

• The space of all paths is too large for a measure to be straightforwardly definable on it.

• The integrand, and even a simplified version such as , couldn't be Lebesgue integrable even if a measure was given, since both the positive and negative parts of the integral are infinite ( is undefined).

I'm not sure about the measure part (this might even be an open problem in mathematical physics), but the integrand issue requires that the value of the integral is considered to be a distribution (mathematics), does it not? Since a quantity being a distribution carries with it certain caveats regarding what one may do to it, this is an issue that the reader needs to be explicitly warned about. (talk) 09:26, 5 March 2013 (UTC)

The mathematical foundations are still unknown; in particular there is no translation-invariant Borel measure on infinite-dimensional spaces, i.e. such a measure does not exist. There are several approaches to this problem. The Wiener measure does exist, and the Feynman "integral" can in some circumstances be interpreted as some kind of analytic continuation to imaginary times. Another way is to use Feynman's approach and interpret it as a limit of oscillating integrals (this involves distribution theory as you suggested). A third way is defining it as a Fourier transform of measures, sometimes referred to as Fresnel integrals. I think the last two are identical on flat space, but the latter one will not work on manifolds. I think none of these methods works really well; they put heavy restrictions on the potential energy. In the case of field theory there are the Osterwalder-Schrader axioms. There was some limited success in constructing lower-dimensional models, but I think most people gave up. DvHansen (talk) 03:06, 12 August 2014 (UTC)

Bad and weak section on "Quantum action principle"[edit]

The section entitled "Quantum action principle" does not make much sense. It is undefined and unmotivated. It could have been a model of how you open your mind, creatively re-interpret numbers as operators on the fly, and brilliantly construct and interpret Hilbert spaces for them to act on as you go. It could have been a gradual emergence of crisp ideas from a genial fog. But it's not. It's just someone mumbling to himself incomprehensibly. Here is a rundown, until I run out of energy.
Why do you need to know the trajectory to do the Legendre transform? When you go back and forth between L and H, it is just between two functions. There is no mention of trajectory. Perhaps the following is meant: "In quantum mechanics, it is hard to see what to do with the Lagrangian, because the motion is not over a definite trajectory."

In classical mechanics, with discretization in time,

What is the path p(t), q(t)? It hasn't been introduced. Are we solving for it? Are we varying it? Does it already satisfy Hamilton's equations? Are we going to do something "along" it? Why is there a discretization in time? It looks like the author knows where he is headed, because he knows how it comes out, but he hasn't stopped to tell us the goal.

where the partial derivative with respect to q holds q(t + ε) fixed.

I think this should be with respect to . Or maybe with respect to q(t)? But how can it make sense to say that q(t + ε) is held fixed during the differentiation when L is not even a function of q(t + ε)? It is a function of q and .

The inverse Legendre transform is:

Why did we drop the time-discretization? These commonplace remarks about QM are just a distraction at this point, except for the suggestion that we should interpret p and q as (possibly noncommuting) operators. This expression is entirely unmotivated at this point. Is it supposed to be familiar to us from our study of classical mechanics? Then say this, and tell us what it is called in that field. I notice that the same quantity, namely exp(iεL), appears in the following section on the work of Feynman. It is much easier to understand there. I also have trouble with the fact that we are trying to interpret q(t) as an operator, where q(t) is the value of position at time t of a trajectory. Aren't the position and momentum operators universal entities, independent of t, in normal quantum mechanics? How can we use the value of position at a particular time to define a position operator? I would think it would end up being an eigenvalue of that operator. Also p is written without t. Why?

two states separated in time

What kinds of things are these states? Are they complex numbers attached to each point in spacetime, like the Feynman amplitudes in the following section? Or are they vectors in the usual Hilbert space H = L^2(R^3) that is used for Schroedinger's equation? From the text written here, I have no idea. Is the state given by a function f(t) taking values in H? Then say this, don't keep it a secret!

act with the operator corresponding to the Lagrangian

What Hilbert space does this operator act on? If we are groping around trying to find one, then please make this explicit.

If the multiplications implicit in this formula are reinterpreted as matrix multiplications, what does this mean?

There is a problem here. Wikipedia writes reinterpreted. But the original interpretation has never been given! What is the original, classical meaning of exp(iεL) in classical mechanics that we are trying to generalize to quantum mechanics? Does it have something to do with stationary phase, Huygens' principle, or geometric optics? If so, this should be stated explicitly, not left for the astute reader to literally mind-read.

The first factor is

Now that this clue has been given, we can guess that the Hilbert space might be L^2(R^n), born with "q" coordinates but also possessing "p" coordinates. But this raises more questions than it answers. First, why does q depend on t but p not?
Second, if the t is dropped and just presented, then I can see that, if integrated with respect to q, it will do a Fourier transform. But what gives us the right to perform an integration? This is pulled out of a hat here. If we were planning on doing an integration, this should have been announced in advance.

Third, putting the t back in, why can we perform an integration with respect to q(t)? This is no longer the free variable q. It is a function of the free variable t. Any integration would have to be with respect to t, but I don't think that's what Wikipedia wants us to do here. Quite possibly, it is an integration with respect to all paths q(⋅). In this case, we actually do have to integrate with respect to the value q(t) assumed by the path q at the time t, and do this for every t. But this should have been explained in advance. Only the reader already familiar with the path integral formulation could be expected to guess this at this point.

In short, reading this section is an exercise in accident reconstruction. For a person who already knows the material, it might be possible to interpret the section accurately. For someone very sophisticated in mathematics and physics, but who does not already know this construction, it is very difficult. For someone who isn't as strong intellectually, but still really wants to know quantum, it's an invitation to have a mental breakdown and wake up as Deepak Chopra. (talk) 01:37, 13 May 2015 (UTC)

"Strictly speaking the only question that can be asked in physics is: "What fraction of states satisfying condition A also satisfy condition B?""

Where does this quote come from? There's no citation. What is it even supposed to mean? Not only is this completely meaningless "generalized" semantics mumbo jumbo, it also doesn't even strike me as true. I'm not sure how something such as measuring the speed of light would be described by this question. Unless you define "states" as something dumb such as the readings on your instruments. -- (talk) 18:19, 28 April 2016 (UTC)
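As a concrete way to see the imaginary-time (Wiener-measure) reading mentioned in the measure discussion above, here is a minimal numerical sketch: after rotating to imaginary time, the discretized kernel becomes a genuinely positive transfer matrix, and its largest eigenvalue encodes the ground-state energy. Everything below (the step eps, the grid, the hbar = m = omega = 1 units) is an illustrative choice, and the check is only a sketch of the idea, not a rigorous construction.

import numpy as np

# Imaginary-time (Wiener-measure-like) reading of the path integral for the
# 1D harmonic oscillator, in units hbar = m = omega = 1.  The discretized
# short-time kernel
#   K_eps(x, y) = sqrt(1/(2*pi*eps)) * exp(-(x-y)**2/(2*eps) - eps*(V(x)+V(y))/2)
# is a positive transfer matrix; its largest eigenvalue behaves like
# exp(-eps * E0), so E0 should come out near the exact value 0.5.
eps = 0.05                       # imaginary-time step (illustrative)
x = np.linspace(-6.0, 6.0, 601)  # position grid (illustrative)
dx = x[1] - x[0]
V = 0.5 * x**2

X, Y = np.meshgrid(x, x, indexing="ij")
K = np.sqrt(1.0 / (2.0 * np.pi * eps)) * np.exp(
    -(X - Y)**2 / (2.0 * eps) - eps * (V[:, None] + V[None, :]) / 2.0) * dx

lam = np.max(np.linalg.eigvalsh((K + K.T) / 2.0))  # symmetrize for safety
E0 = -np.log(lam) / eps
print("estimated ground-state energy:", E0)        # expect roughly 0.5

For small eps the printed estimate lands near the exact value 1/2, with an error controlled by the Trotter splitting of the kernel.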
Schrödinger, Erwin (1887–1961)

Erwin Schrödinger played a principal part in the mathematical development of the modern model of the atom. He developed wave mechanics from de Broglie's picture of wave-particle duality.

Austrian theoretical physicist who first developed the version of quantum mechanics known as wave mechanics. In 1926, Schrödinger put into mathematical form the revolutionary idea of the French physicist Louis Victor de Broglie that the motion of material particles is guided by so-called pilot waves. The formulation of the famous Schrödinger equation put quantum theory on a strict mathematical basis, and provided the foundation for its further rapid expansion. For this work, Schrödinger shared the 1933 Nobel Prize in Physics with Paul Dirac. Eventually it was shown that Schrödinger's wave mechanics was equivalent to the matrix mechanics of Werner Heisenberg.

In later years, Schrödinger concerned himself with the extension of Einstein's general theory of relativity to include electrical and magnetic phenomena. He also became interested in fundamental biology, and published a short, popular book, What Is Life? (1945). In this book, he attempted to explain the phenomena of life on the basis of purely physical concepts.

Schrödinger was born in Vienna, and in 1910 received his Ph.D. from the university there. He served as a professor of theoretical physics in several universities in Germany and Switzerland. He was also associated for many years with the Dublin Institute for Advanced Studies.
Advanced Mathematics for Engineers and Scientists/The Laplacian and Laplace's Equation

The Laplacian and Laplace's Equation

By now, you've most likely grown sick of the one dimensional transient diffusion PDE we've been playing with:

∂u/∂t = α ∂²u/∂x²

Make no mistake: we're not nearly done with this stupid thing; but for the sake of variety let's introduce a fresh new equation and, even though it's not strictly a separation of variables concept, a really cool quantity called the Laplacian. You'll like this chapter; it has many pretty pictures in it.

The Laplacian

The Laplacian is a linear operator in Euclidean n-space. There are other spaces with properties different from Euclidean space. Note also that operator here has a very specific meaning. As a function is sort of an operator on real numbers, our operator is an operator on functions, not on the real numbers. See here for a longer explanation.

We'll start with the 3D Cartesian "version". Let u = u(x, y, z). The Laplacian of the function u is defined and notated as:

∇²u = Δu = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²

So the operator is taking the sum of the nonmixed second derivatives of u with respect to the Cartesian space variables x, y, and z. The "del squared" notation ∇² is preferred since the capital delta Δ can be confused with increments and differences, and writing the sum out in full is too long and doesn't involve pretty math symbols. The Laplacian is also known as the Laplace operator or Laplace's operator, not to be confused with the Laplace transform.

Also, note that if we had only taken the first partial derivatives of the function u, and put them into a vector, that would have been the gradient of the function, ∇u. The Laplacian takes the second unmixed derivatives and adds them up.

In one dimension, recall that the second derivative measures concavity. Suppose u = u(x); if u'' is positive, u is concave up, and if u'' is negative, u is concave down; see the graph below with the straight up or down arrows at various points of the curve. The Laplacian may be thought of as a generalization of the concavity concept to multivariate functions. This idea is demonstrated at the right, in one dimension. To the left of the inflection point, the Laplacian (simply the second derivative here) is negative, and the graph is concave down. At the inflection point, the curve inflects and the Laplacian is zero. To the right of the inflection point, the Laplacian is positive and the graph is concave up.

Concavity may or may not do it for you. Thankfully, there's another very important view of the Laplacian, with deep implications for any equation it shows itself in: the Laplacian compares the value of u at some point in space to the average of the values of u in the neighborhood of the same point. The three cases are:

• If u is greater at some point than the average of its neighbors, ∇²u < 0.
• If u is at some point equal to the average of its neighbors, ∇²u = 0.
• If u is smaller at some point than the average of its neighbors, ∇²u > 0.

So the Laplacian may be thought of as, at some point:

∇²u ∝ (average of u near the point) − (value of u at the point)

The neighborhood of a point. The neighborhood of some point is defined as the open set that lies within some Euclidean distance δ (delta) from the point. Referring to the picture at right (a 3D example), the neighborhood of the point (x₀, y₀, z₀) is the shaded region which satisfies:

(x − x₀)² + (y − y₀)² + (z − z₀)² < δ²

Note that our one dimensional transient diffusion equation, our parallel plate flow, involves the Laplacian:

∂u/∂t = α ∇²u

With this mentality, let's examine the behavior of this very important PDE. On the left is the time derivative and on the right is the Laplacian.
This equation is saying that: The rate of change of at some point is proportional to the difference between the average value of around that point and the value of at that point. For example, if there's at some position a "hot spot" where is on average greater then its neighbors, the Laplacian will be negative and thus the time derivative will be negative, this will cause to decrease at that position, "cooling" it down. This is illustrated below. The arrows reflect upon the magnitude of the Laplacian and, by grace of the time derivative, the direction the curve will move. Visualization of transient diffusion. It's worth noting that in 3D, this equation fully describes the flow of heat in a homogeneous solid that's not generating it's own heat (like too much electricity through a narrow wire would). Laplace's Equation[edit] Laplace's equation describes a steady state condition, and this is what it looks like: Solutions of this equation are called harmonic functions. Some things to note: • Time is absent. This equation describes a steady state condition. • The absence of time implies the absence of an IC, so we'll be dealing with BVPs rather then IBVPs. • In one dimension, this is the ODE of a straight line passing through the boundaries at their specified values. • All functions that satisfy this equation in some domain are analytic (informally, an analytic function is equal to its Taylor expansion) in that domain. • Despite appearances, solutions of Laplace's equation are generally not minimal surfaces. • Laplace's equation is linear. Laplace's equation is separable in the Cartesian (and almost any other) coordinate system. So, we shouldn't have too much problem solving it if the BCs involved aren't too convoluted. Laplace's Equation on a Square: Cartesian Coordinates[edit] Steady state conditions on a square. Imagine a 1 x 1 square plate that's insulated top and bottom and has constant temperatures applied at its uninsulated edges, visualized to the right. Heat is flowing in and out of this thing steadily through the edges only, and since it's "thin" and "insulated", the temperature may be given as . This is the first time we venture into two spatial coordinates, note the absence of time. Let's make up a BVP, referring to the picture: So we have one nonhomogeneous BC. Assume that : As with before, calling the separation constant in favor of just (or something) happens to make the problem easier to solve. Note that the negative sign was kept for the equation: again, these choices happen to make things simpler. Solving each equation and combining them back into : At edge D: Note that the constants can be merged, but we won't do it so that a point can be made in a moment. At edge A: Taking as would satisfy this particular BC, however this would yield a plane solution of , which can't satisfy the temperature at edge C. This is why the constants weren't merged a few steps ago, to make it obvious that may not be . So, we instead take to satisfy the above, and then combine the three constants into one, call it : Now look at edge B: It should go without saying by now that can't be zero, since this would yield which couldn't satisfy the nonzero BC. Instead, we can take : As of now, this solution will satisfy 3 of the 4 BCs. All that is left is edge C, the nonhomogeneous BC. Neither nor can be contorted to fit this BC. Since Laplace's equation is linear, a linear combination of solutions to the PDE is also a solution to the PDE. 
Another thing to note: since the BCs (so far) are homogeneous, we can add the solutions without worrying about nonzero boundaries adding up. Though as shown above will not solve this problem, we can try summing (based on ) solutions to form a linear combination which might solve the BVP as a whole: Assuming this form is correct (review Parallel Plate Flow: Realistic IC for motivation), let's again try applying the last BC: It looks like it needs Fourier series methodology. Finding via orthogonality should solve this problem: 25 term partial sum of the series solution. was changed to in the last step. Also, for integer , . Note that a Fourier sine expansion has been done. The solution to the BVP can finally be assembled: That solves it! It's finally time to mention that the BCs are discontinuous at the points and . As a result, the series should converge slowly at those points. This is clear from the plot at right: it's a 25 term partial sum (note that half of the terms are ), and it looks perfect except at , especially near the discontinuities at and . Laplace's Equation on a Circle: Polar Coordinates[edit] Now, we'll specify the value of on a circular boundary. A circle can be represented in Cartesian coordinates without too much trouble; however, it would result in nonlinear BCs which would render the approach useless. Instead, polar coordinates should be used, since in such a system the equation of a circle is very simple. In order for this to be realized, a polar representation of the Laplacian is necessary. Without going in to the details just yet, the Laplacian is given in (2D) polar coordinates: This result may be derived using differentials and the chain rule; it's not difficult but it's a little long. In these coordinates Laplace's equation reads: Note that in going from Cartesian to polar coordinates, a price was paid: though still linear, Laplace's equation now has variable coefficients. This implies that after separation at least one of the ODEs will have variable coefficients as well. Let's make up the following BVP, letting : This could represent a physical problem analogous to the previous one: replace the square plate with a disc. Note the apparent absence of sufficient BC to obtain a unique solution. The funny looking statement that u is bounded inside the domain of interest turns out to be the key to getting a unique solution, and it often shows itself in polar coordinates. It "makes up" for the "lack" of BCs. To separate, we as usual incorrectly assume that : Once again, the way the negative sign and the separation constant are arranged makes the solution easier later on. These decisions are made mostly by trial and error. The equation is probably one you've never seen before, it's a special case of the Euler differential equation (not to be confused with the Euler-Lagrange differential equation). There are a couple of ways to solve it, the most general method would be to change the variables so that an equation with constant coefficients is obtained. An easier way would be to note the pattern in the order of the coefficients and the order of the derivatives, and from there guess a power solution. Either way, the general solution to this simple case of Euler's ODE is given as: This is a very good example problem since it goes to show that PDE problems very often turn into obscure ODE problems; we got lucky this time since the solution for was rather simple though its ODE looked pretty bad at first sight. 
The solution to the equation is: Now, this is where the English sentence condition stating that u must be bounded in the domain of interest may be invoked. As , the term involving is unbounded. The only way to fix this is to take . Note that if this problem were solved between two concentric circles, this term would be nonzero and very important. With that term gone, constants can be merged: Only one condition remains: on , yet there are 3 constants. Let's say for now that: Then, it's a simple matter of equating coefficients to obtain: Now, let's make the frequencies differ: Equating coefficients won't work. However, if the IC were broken up into individual terms, the sum of the solution to the terms just happens to solve the BVP as a whole: Verify that the solution above is really equal to the BC at : And, since Laplace's equation is linear, this must solve the PDE as well. What all of this implies is that, if some generic function may be expressed as a sum of sinusoids with angular frequencies given by , all that is needed is a linear combination of the appropriate sum. Notated: To identify the coefficients, substitute the BC: The coefficients and may be determined by a (full) Fourier expansion on . Note that it's implied that must have period since we are solving this in a domain (a circle specifically) where . You probably don't like infinite series solutions. Well, it happens that through a variety of manipulations it's possible to express the full solution of this particular problem as: This is called Poisson's integral formula. Derivation of the Laplacian in Polar Coordinates[edit] Though not necessarily a PDEs concept, it is very important for anyone studying this kind of math to be comfortable with going from one coordinate system to the next. What follows is a long derivation of the Laplacian in 2D polar coordinates using the multivariable chain rule and the concept of differentials. Know, however, that there are really many ways to do this. Three definitions are all we need to begin: If it's known that , then the chain rule may be used to express derivatives in terms of and alone. Two applications will be necessary to obtain the second derivatives. Manipulating operators as if they meant something on their own: Applying this to itself, treating the underlined bit as a unit dependent on and : The above mess may be quickly simplified a little by manipulating the funny looking derivatives: This may be made slightly easier to work with if a few changes are made to the way some of the derivatives are written. Also, the variable follows analogously: Now we need to obtain expressions for some of the derivatives appearing above. The most direct path would use the concept of differentials. If: Solving by substitution for and gives: If , then the total differential is given as: Note that the two previous equations are of this form (recall that and , just like above), which means that: Equating coefficients quickly yields a bunch of derivatives: There's an easier but more abstract way to obtain the derivatives above that may be overkill but is worth mentioning anyway. The Jacobian of the functions and is: Note that the Jacobian is a compact representation of the coefficients of the total derivative; using as an example (bold indicating vectors): So, it follows then that the derivatives that we're interested in may be obtained by inverting the Jacobian matrix: Though somewhat obscure, this is very convenient and it's just one of the many utilities of the Jacobian matrix. 
An interesting bit of insight is gained: coordinate changes are senseless unless the Jacobian is invertible everywhere except at isolated points; stated another way, the determinant of the Jacobian matrix must be nonzero, otherwise the coordinate change is not one-to-one (note that the determinant will be zero at r = 0 in this example; an isolated point such as this is not problematic).

Either path you take, there should now be enough information to evaluate the Cartesian second derivatives. Working on ∂²u/∂x²: Proceeding similarly for ∂²u/∂y²: Now, add these tirelessly hand crafted differential operators and watch the result collapse into just 3 nontrigonometric terms:

∇²u = ∂²u/∂r² + (1/r)∂u/∂r + (1/r²)∂²u/∂θ²

That was a lot of work. To save trouble, here is the Laplacian in two other popular coordinate systems. Derivatives have been combined wherever possible (not done previously).

Cylindrical coordinates (r, θ, z):

∇²u = (1/r)∂/∂r(r ∂u/∂r) + (1/r²)∂²u/∂θ² + ∂²u/∂z²

Spherical coordinates (ρ, φ, θ), with φ the angle measured from the polar axis:

∇²u = (1/ρ²)∂/∂ρ(ρ² ∂u/∂ρ) + (1/(ρ² sin φ))∂/∂φ(sin φ ∂u/∂φ) + (1/(ρ² sin² φ))∂²u/∂θ²

Concluding Remarks

This was a long, involved chapter. It should be clear that the solutions derived work only for very simple geometries; other geometries may be worked with by grace of conformal mappings. The Laplacian (and variations of it) is a very important quantity and its behaviour is worth knowing like the back of your hand. A sampling of important equations that involve the Laplacian:

• The Navier-Stokes equations.
• The diffusion equation.
• Laplace's equation.
• Poisson's equation.
• The Helmholtz equation.
• The Schrödinger equation.
• The wave equation.

There's a couple of other operators that are similar to (though less important than) the Laplacian, which deserve mention:

• Biharmonic operator, in three Cartesian dimensions:

∇⁴u = ∂⁴u/∂x⁴ + ∂⁴u/∂y⁴ + ∂⁴u/∂z⁴ + 2∂⁴u/∂x²∂y² + 2∂⁴u/∂y²∂z² + 2∂⁴u/∂x²∂z²

The biharmonic equation is useful in linear elastic theory; for example it can describe "creeping" fluid flow:

∇⁴u = 0

• d'Alembertian:

□u = (1/c²)∂²u/∂t² − ∇²u

The wave equation may be expressed using the d'Alembertian:

□u = 0

Though expressing it with the Laplacian is more popular:

∂²u/∂t² = c²∇²u
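If you'd rather not re-derive the polar form of the Laplacian by hand to check it, a quick symbolic sanity check is easy to sketch. The test function below is an arbitrary polynomial (any smooth choice should behave the same way); the check simply compares the Cartesian Laplacian with the u_rr + u_r/r + u_θθ/r² form derived above.

import sympy as sp

x, y, r, th = sp.symbols('x y r theta', real=True)

# Arbitrary smooth test function in Cartesian coordinates.
f = x**3 * y - 2 * x * y**2 + x**2 - y

# Cartesian Laplacian, then evaluated on x = r*cos(theta), y = r*sin(theta).
lap_cart = (sp.diff(f, x, 2) + sp.diff(f, y, 2)).subs(
    {x: r * sp.cos(th), y: r * sp.sin(th)})

# The same function written in polar coordinates, pushed through the polar
# form of the Laplacian derived in this chapter.
u = f.subs({x: r * sp.cos(th), y: r * sp.sin(th)})
lap_polar = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, th, 2) / r**2

print(sp.simplify(lap_polar - lap_cart))   # expect 0

The same kind of spot check works for the cylindrical and spherical forms quoted above.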
San José State University
Thayer Watkins
Silicon Valley & Tornado Alley

The Nature of the Probability Density Function of Quantum Mechanical Analysis

In 1926-27 Erwin Schrödinger formulated wave mechanics, which came to be the preferred formulation for quantum physics. Schrödinger cast his theory in terms of a wave function. This stemmed from his background in optics and his captivation by Louis de Broglie's notion that particles have a wave aspect. The immediate question was what was the nature of Schrödinger's wave function. Max Born asserted that the squared magnitude of the wave function is the probability density function for the system under analysis. Niels Bohr and Werner Heisenberg concurred with this interpretation of the wave function and it became a key element of what came to be known as the Copenhagen Interpretation of quantum physics.

What is argued below is that the squared magnitude of the wave function is a probability density function but not of the nature it is given in the Copenhagen Interpretation. Instead Schrödinger's time independent equation gives the proportion of the time the system spends in the various states in its periodic cycle. The system cycles through the allowed states, moving relatively slowly through an allowed state and then relatively rapidly to the next allowed state.

The nature of the wave function must be of one sort for all quantum mechanical systems, so to establish the above alternative to the Copenhagen Interpretation it suffices to establish it for one significant case. The easy case is for harmonic oscillators and this has been done in Harmonic Oscillators.

The rapidly fluctuating function is the quantum mechanical probability density and the heavy line is the corresponding classical concept. The classical concept is the proportion of time spent at each possible location. It represents the probability of finding the particle at the various possible locations at any randomly specified time. There is a close relationship between a spatial average of the QM probability density and the classical concept except at the end points for the oscillator.

However harmonic oscillators are not the most natural example and the case of two-body interactions is used instead, but in the form of a particle moving in a potential energy function field.

A Particle Moving in a Central Field with a Potential Energy Function V(r)

The Hamiltonian function for such a system is

H = p²/(2m) + V(r)

where p is the total momentum of the particle, m is its mass and V(r) is the potential energy of the particle as a function of its distance r from the center of the central field. The total momentum is made up of the radial momentum p_r and the tangential momentum p_θ. These momenta are orthogonal so

p² = p_r² + p_θ²

At a macroscopic level a particle in a central field revolves about the center of the field in an elliptical orbit. That orbit is entirely in a plane. In order for the quantum mechanical (QM) analysis to satisfy the Correspondence Principle it must also be limited to a plane. The Correspondence Principle says that in order for QM analysis to be valid it must be consistent with classical analysis as the scale or energy increases to macroscopic proportions.
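As a rough check of the harmonic-oscillator comparison described above, here is a small numerical sketch. It is only an illustration of the claim, not the computation from the Harmonic Oscillators page: the quantum number n, the grid, and the smoothing window are arbitrary choices, in units where m = ω = ħ = 1.

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

n = 20                                # a fairly high quantum number (illustrative)
x = np.linspace(-8.0, 8.0, 2001)

# QM probability density |psi_n(x)|^2 for the harmonic oscillator.
coeffs = np.zeros(n + 1); coeffs[n] = 1.0
norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
psi = norm * np.exp(-x**2 / 2.0) * hermval(x, coeffs)
qm_density = psi**2

# Classical density: proportion of time spent near x for energy E = n + 1/2,
# i.e. 1/(pi*sqrt(A^2 - x^2)) inside the turning points A = sqrt(2E).
E = n + 0.5
A = np.sqrt(2.0 * E)
classical = np.where(np.abs(x) < A,
                     1.0 / (pi * np.sqrt(np.clip(A**2 - x**2, 1e-12, None))),
                     0.0)

# A coarse spatial average of the QM density tracks the classical curve
# away from the turning points.
window = 51
qm_smoothed = np.convolve(qm_density, np.ones(window) / window, mode="same")

mask = np.abs(x) < 0.8 * A
print("max |smoothed QM - classical| away from turning points:",
      np.max(np.abs(qm_smoothed[mask] - classical[mask])))

Away from the classical turning points the locally averaged quantum density tracks the classical 1/(π√(A² − x²)) curve; at the turning points the classical density diverges while the quantum one stays finite and leaks into the forbidden region, which is the end-point discrepancy mentioned above.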
The time-independent Schrödinger equation for the system is

−(h²/2m)∇²ψ + V(r)ψ = Eψ

where h is Planck's constant divided by 2π and ψ is the wave function for the particle. For stable systems the potential energy V and the total energy E are negative. To emphasize their negativity they can be written as −|V(r)| and −|E|. Thus the above equation is

−(h²/2m)∇²ψ − |V(r)|ψ = −|E|ψ

or, equivalently

−(h²/2m)∇²ψ + (|E| − |V(r)|)ψ = 0

or, more succinctly as

−(h²/2m)∇²ψ − K(r)ψ = 0

or, better yet as

∇²ψ + (2m/h²)K(r)ψ = 0

where K(r) is the kinetic energy function (E − V(r)).

This last equation has a close mathematical relationship to the equation for a harmonic oscillator; i.e.,

(d²x/dt²) + (k/m)x = 0

where x is the displacement from equilibrium, t is time, k is the stiffness coefficient and m is mass. The displacement x oscillates sinusoidally between two extremes. The (angular) frequency is equal to the square root of (k/m) and the wavelength is inversely proportional to that quantity.

The correspondences of the harmonic oscillator equation to the quantum mechanical equation are:

Harmonic oscillator → QM model
displacement x → wave function ψ
time t → position in space
(k/m) → (2m/h²)K(r)

Just as the displacement oscillates back and forth between extremes over time so does the wave function ψ oscillate between extremes over space. The square of ψ oscillates between zero and maxima as seen in the previously displayed image. The kinetic energy changes relatively slowly over space and to the approximation that it is constant the spatial frequency of ψ is equal to (2mK)½/h. The peak-to-peak spacing of the probability density is inversely proportional to this frequency, and the higher the kinetic energy the more closely are the peaks spaced.

The equation for ψ can be multiplied by ψ to obtain

−(h²/2m)ψ∇²ψ − K(r)ψ² = 0

or, eliminating the negativity,

(h²/2m)ψ∇²ψ + K(r)ψ² = 0

The quantity ψ² is the probability density. For future reference the above equation will be referred to as the equation for probability density.

Note that the critical points of ψ² occur where ∇(ψ²) = 2ψ∇ψ = 0 and hence where either

ψ = 0 or ∇ψ = 0

where 0 denotes the zero vector. The points where ψ² is at or near a maximum correspond to an allowed state and where ψ² is zero or near zero correspond to a disallowed state. The particle moves relatively slowly through one allowed state and relatively quickly through an adjacent disallowed state to the next allowed state. This is not quantum jumping per se but it has similarities to that notion.

Consider the following vector calculus identity

∇·(ψ∇ψ) = ∇ψ·∇ψ + ψ∇·∇ψ = (∇ψ)² + ψ∇²ψ

or, equivalently

ψ∇²ψ = ∇·(ψ∇ψ) − (∇ψ)²

In words this is that the divergence of the function times the gradient of the function is equal to the dot product of the gradient of the function with itself plus the function times the divergence of the gradient of the function. Thus the previous equation derived from the Schrödinger equation, the probability density equation, becomes

(h²/2m)(∇·(ψ∇ψ) − (∇ψ)²) + K(r)ψ² = 0

Consider the gradient of the probability density ψ²; i.e.,

∇ψ² = 2ψ∇ψ

and therefore

(∇ψ²)·(∇ψ²) = 4ψ²(∇ψ·∇ψ)

which can be expressed as

(∇ψ²)² = 4ψ²(∇ψ)²
(∇ψ)² = (∇ψ²)²/(4ψ²)

When this expression is substituted into the probability density equation the result is

(h²/2m)(∇·(ψ∇ψ) − (∇ψ²)²/(4ψ²)) + K(r)ψ² = 0

The term ∇·(ψ∇ψ), when integrated from a maximum to an adjacent minimum or from a minimum to an adjacent maximum, is zero because at a minimum ψ is zero and at a maximum the gradient ∇ψ is equal to the zero vector.
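The statement that the peaks crowd together where the kinetic energy is larger is easy to probe numerically. The sketch below (my own illustration, with an arbitrary slowly rising kinetic-energy profile and units h = m = 1) integrates ψ'' + 2K(x)ψ = 0 and compares the spacing between consecutive zeros of ψ with the locally predicted half-wavelength π/(2K)½.

import numpy as np

# Illustrative kinetic-energy profile (slowly varying, arbitrary choice).
def K(x):
    return 1.0 + 0.02 * x

x = np.linspace(0.0, 100.0, 200001)
dx = x[1] - x[0]

# Integrate psi'' = -2*K(x)*psi with a simple central-difference scheme.
psi = np.zeros_like(x)
psi[1] = dx                      # arbitrary small initial slope
for i in range(1, len(x) - 1):
    psi[i + 1] = 2.0 * psi[i] - psi[i - 1] - 2.0 * K(x[i]) * psi[i] * dx**2

# Zeros of psi, located by sign changes.
zeros = x[:-1][np.sign(psi[:-1]) != np.sign(psi[1:])]
spacing = np.diff(zeros)                         # measured zero-to-zero spacing
predicted = np.pi / np.sqrt(2.0 * K(zeros[1:]))  # local half-wavelength

ratio = spacing / predicted
print("measured/predicted spacing: first %.3f, middle %.3f, last %.3f"
      % (ratio[0], ratio[len(ratio) // 2], ratio[-1]))

The ratios should sit close to 1 all along the range, with the spacing itself shrinking from about π/√2 at the left end to about π/√6 at the right, i.e. inversely with K½ as claimed.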
At a macroscopic classical level such a system as the one being considered would involve the particle traveling smoothly about an elliptical orbit. In order for the quantum mechanical system to asymptotically approach the classical behavior as the scale and/or energy increases it must have some semblance of an orbit. There is no dividing line between the quantum mechanical and the classical, what Werner Heisenberg called the Schnitt (cut).

A chain of alternating minima and maxima may be constructed, labeled by an index j, say m_j and M_j for j = 1, 2, …, N. This chain would constitute an orbit path. The intervals between minima and maxima can also be labeled by an index k, say s_k. The interval from m_j to M_j would be k = 2j and from M_j to m_(j+1) would be k = 2j+1. This chain of intervals is roughly the particle's path.

When such integrations are carried out over the chain of intervals between maxima and minima and use is made of the Extended Mean Value Theorem for Integrals the result is

(h²/(2m))∫ds(∇ψ)²/(4ψ²(s*)) = K(r*)∫ds ψ²

where r* and s* are some values of r and s within the interval of integration. The integral on the RHS can be represented as ψ²(s#)δ_k where δ_k is the length of the k-th interval and s# is some point in that interval. When that substitution is made and the equation is multiplied by ψ²(s*) the result is

(h²/(8m))∫ds(∇ψ)² = K(r*)ψ²(s#)ψ²(s*)δ_k

or, after division by δ_k

(h²/(8m))∫ds(∇ψ)²/δ_k = K(r*)ψ²(s#)ψ²(s*)

which may be expressed as

(h²/(8m))(∇ψ(s+))² = K(r*)(ψ(s^)²)²

where (∇ψ(s+))² is the average of (∇ψ)² in the k-th interval and (ψ(s^)²)² is the square of the probability density P at some point s^ in the k-th interval.

P(s_j) = (h/(2m)½)[∫ds(∇P(s))²/4]½/(2K(r(s_j))½)

Over a wide range ∫ds(∇P(s))² is relatively constant so the probability of being in interval s_j is inversely proportional to K(r(s_j))½.

What is required to satisfy the Correspondence Principle is that the quantum mechanical probability function be asymptotically equivalent to the classical probability density function, which is proportional to the time spent in the various locations, as the scale or energy of the system increases. But the QM distribution can be taken to be the corresponding quantity given the motion of the particle at the quantum level.

The Classical Path of a Particle in a Central Field

The energy function

E = ½mv² + V(r)

may be solved for the velocity v as

v = [2(E−V(r))/m]½ = (2/m)½K(r)½

The time spent by the particle in an interval ds of its path length s is ds/|v|. The probability density of finding the particle at that point at a random time is proportional to 1/|v(s)| and hence to 1/(E−V(r(s)))½ which is the same as 1/K(r(s))½.

When the quantities which the probabilities are proportional to are normalized all constant factors are eliminated. Thus the quantities 1/K(r(s))½ are the only determinants of the probability. This is very close to what was found for the QM probability densities. Singularities arise at the end points where the kinetic energy is zero.

The Nature of Probabilities

The concept of probability is a very useful construct for explaining statistical data. There is usually a subjective nature to probability, meaning the probabilities are conditional on what is known and thus not solely a property of the system under consideration. When there is some intrinsic component to probabilities, such as for dice, those probabilities are embodied in the symmetry and uniformity of the dice.
In the probability density functions considered above, the probabilities are embodied in the periodic cycle of the system. Nowhere except in the Copenhagen Interpretation of Quantum Mechanics are there disembodied probabilities that exist like an electric field. The squared magnitude of the wave function which comes out of quantum mechanical analysis constitutes a probability density function that represents the proportion of the time the system spends in various locations. The QM probability density function for a system does not represent some intrinsic uncertainty of the particles of the system.
Polynomial kinetic energy approximation for direct-indirect heterostructures

Superlattices and Microstructures
DOI: 10.1016/0749-6036(87)90052-8

Abstract

The effective mass approximation, in which the band structure of a semiconductor is replaced by a simple parabolic dispersion relation for electrons, has worked surprisingly well for quantum calculations of electron eigenenergies and eigenstates in semiconductor heterostructures. It can be extended to systems with spatially varying effective mass by requiring wavefunction and particle flux continuity. However, for indirect heterostructures which include materials with electron bands of different symmetry, it fails to incorporate enough physics to give correct answers. An important example where effective mass calculations are inapplicable is the AlAs/GaAs system, in which the conduction band minima occur at the Γ and X points, respectively. The mixture of these two types of electrons in AlAs/GaAs superlattices has only been calculated using tight-binding or pseudopotential methods, which are difficult to apply to a wide range of heterostructures. We have extended the spirit of effective mass calculations to a method applicable to indirect heterostructures. To do this, we write a Schrödinger equation in which the Hamiltonian is an n-th degree polynomial in the gradient operator, ∇. For any energy, there exist n (complex) plane wave solutions. For spatially varying band structures, we can write a probability-conserving Schrödinger equation which has a flux operator consistent with the usual interpretation of plane wave group velocities. The requirements imposed by this Schrödinger equation on the wavefunction and its derivatives allow matching of the plane wave solutions across heterojunctions. We have applied this method to AlAs/GaAs double heterostructures, where we see interesting resonance and anti-resonance behaviors. The computational speed of our method will allow complicated structures, including compositional grading and electric fields, to be modeled on microcomputers.
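To illustrate the core idea of the abstract in the simplest setting (this is only my sketch, not the authors' method, and the coefficients below are placeholders rather than fitted band parameters): suppose the polynomial dispersion seen by a plane wave exp(ikx) in one layer is quartic, E(k) = a₄k⁴ + a₂k² + V₀. Then the n admissible, generally complex, wavenumbers at a given energy are just the roots of a polynomial, and the propagating and evanescent branches fall out immediately.

import numpy as np

# Illustrative quartic dispersion E(k) = a4*k^4 + a2*k^2 + V0 for one layer.
# The coefficients are placeholders, not fitted AlAs/GaAs band parameters.
a4, a2, V0 = 0.05, 1.0, 0.0
E = 0.3                                    # energy at which we want solutions

# Plane waves exp(i k x) solve the layer's equation when E(k) = E, i.e. when
# a4*k^4 + a2*k^2 + (V0 - E) = 0.  numpy.roots takes coefficients from the
# highest power of k down to the constant term.
k_roots = np.roots([a4, 0.0, a2, 0.0, V0 - E])

propagating = k_roots[np.abs(k_roots.imag) < 1e-9]   # real k: traveling waves
evanescent = k_roots[np.abs(k_roots.imag) >= 1e-9]   # complex k: decaying waves
print("propagating branches k =", propagating)
print("evanescent branches  k =", evanescent)

In a full calculation one would presumably do this in each layer and then match the wavefunction and enough of its derivatives (or the corresponding flux-related combinations) at each heterojunction to connect the branches, as the abstract describes.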
I saw this video of the double slit experiment by Dr. Quantum on youtube. Later in the video he says, the behavior of the electrons changes to produce the double-bar effect as if it knows that it is being watched or observed. What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Probably, likely are the chances that I didn't understand the video?

That video has prompted questions here before. The first half of it is pretty standard explanation of quantum mechanics for laypeople, but at some point it veers off into new age woo and silly quantum mysticism. The basic answer is that QM describes the way the universe works very accurately. It is futile to assign wacky philosophical explanations to it. The universe will do what the universe will do, and QM is simply a description of its behavior. –  Colin K Nov 8 '11 at 22:11

Don't let Dr Quantum touch you there... He's not a real doctor. –  Mikhail Nov 9 '11 at 3:51

Remember what Dr. Feynman said about QM... If you think you understand QM, then you didn't understand it! –  Vineet Menon Nov 9 '11 at 4:47

2 Answers

Before I attempt to answer your question it is necessary to cover some basic background; you must also forgive the length, but you raise some very interesting questions.

There are two things that govern the evolution of a Quantum Mechanical (QM) system (For All Practical Purposes (FAPP) the electron and the double-slit/Young's apparatus you mention I will take to be a purely QM system): the time evolution of the system (governed by the Schrödinger equation), which we will denote as $\mathbf{U}$, and the State Vector Reduction or Collapse of the Wave Function, $\mathbf{R}$.

The Schrödinger equation describes the unitary/time evolution of the wave function or quantum state of a particle, which here we will denote as $\mathbf{U}$. This evolution is well defined and provides information on the evolution of the quantum state of a system. The quantum state itself expresses the entire weighted sum of all the possible alternatives (complex number weighting factors) that are open to the system. Due to the nature of the complex probabilities, it is possible for a QM system, like your electron traveling through the Young's apparatus, to be in a complex superposition of multiple states (or to put it another way, be in a mixture of possible states/outcomes that the given system will allow).

For your system let's assume for simplicity that there are two states: $|T\rangle$, the state associated with the electron going through the [T]op ‘slit’, and $|B\rangle$, the state associated with the electron passing through the bottom ‘slit’ (for simplicity we will ignore the phase factors associated with the QM states. See here for more information about the phase factor associated with Quantum States). So, just before the electron strikes the wall it is in a superposition of states $\alpha|T\rangle + \beta|B\rangle$, where $\alpha$ and $\beta$ are complex number probabilities that represent the likelihood of the particle being in the respective states.

Now, in order to determine which path/‘slit’ the electron actually took (either $|T\rangle$ or $|B\rangle$) we have to make some kind of ‘observation’/measurement (as was pointed out above).
This measurement is what causes process $\mathbf{R}$ to occur and subsequently the collapse of the wave function which force the superposition of states $\alpha|T\rangle + \beta|B\rangle$ to become either state $|T\rangle$ OR $|B\rangle$. It is this QM state reduction or wave function collapse caused by process $\mathbf{R}$ that invokes all the mystery and the very strange nature of QM. There are numerous paradoxes (EPR-Paradox, Schrödinger’s cat etc. see here for an overview and some background) that stem from this measurement procedure/problem. At this point I can now address your questions: “What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Probably, likely are the chances that I didn’t understand the video?” So it is the process $\mathbf{R}$ that causes this issue so you are right to ask what does it mean when someone says “it knows that it is being observed”. To answer the above I will ask one of my own questions: “Is $\mathbf{R}$ a real process?”. I ask this because there are two ways of viewing $\mathbf{R}$. Some physicists view the collapse of the wave function and the quantum superpositions of complex probabilities (the use of state vectors) as real physical properties, others do not (even Dirac, Einstein and Schrodinger himself, did not take the probabilistic view of QM as serious view of what was actually happening in reality, rather they took it as a mathematical formalism that allowed these physical processes to be predicted). If you are to deem the state vector as a real entity then you must accept the consequential blur between what happens at the quantum level and what happens at the macroscopic/large scale level. This leads to the Feynman’s multiple history view of QM where all of the possible outcomes of a QM system occur and this itself leads to the “Many-World” interpretations of QM. I for one (along with the like of Penrose, Einstein etc.) believe the current picture of QM is not complete and that there is some physical process causing the collapse of the wave function. The wave function collapse is what causes the electron to choose a QM state, and the act of observation/measurement does seem to cause this collapse. However, this give rise to the question “Is it the act of human observation/consciousness that causes this collapse?”. It is impossible to argue this is the case. To go into more depth I will have to bring in the idea of quantum entanglements, which is essentially what was described above as a superposition of two QM states. These entanglements are what “collapse” when observations/measurements are made and are what constitute $\mathbf{R}$. So the real question is what causes dis-entanglement of two superposed states. There are some very interesting theories that postulate that the state vector reduction is gravitationally reduced and not the act of any observation. These ideas also have a bearing on the question of human consciousness! These details and in depth discussion on this subject can be found in the very accessible book: “Shadows of the Mind” by Roger Penrose. I hope this was of some help. share|improve this answer The video shows that the interference pattern goes away when one tries to measure which slit the electron went through. The point is that in order to measure which slit the electron went through, one must disturb the electron (shoot some light at it, for example). And amazingly, this interaction is enough to destroy the interference pattern. 
In some sense, though, there is still some mystery about this. One says that the measurement (which implies an interaction) collapses the wave function (which describes the electron motion). The double slit experiment is a good place to start to get into the strange world of quantum mechanics!

Simple and clear... I guess the narrator wanted to convey the simple fact about the uncertainty principle! –  Vineet Menon Nov 9 '11 at 4:48
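A minimal numerical version of the point made in both answers (add amplitudes when the two paths are indistinguishable; add probabilities once a which-path measurement has collapsed the state) might look like the sketch below. The slit separation, wavelength and screen distance are made-up numbers, chosen only for illustration.

import numpy as np

# Toy two-path model: amplitudes for reaching screen position x via the
# [T]op or [B]ottom slit.  Geometry parameters are illustrative only.
d, lam, L = 1.0, 0.1, 100.0          # slit separation, wavelength, screen distance
x = np.linspace(-30.0, 30.0, 1001)   # positions on the screen

# Far-field path-length phase difference between the two routes.
phase = 2.0 * np.pi * d * x / (lam * L)
amp_T = np.exp(+1j * phase / 2.0) / np.sqrt(2.0)
amp_B = np.exp(-1j * phase / 2.0) / np.sqrt(2.0)

# No which-path information: superpose amplitudes, then square -> fringes.
p_coherent = np.abs(amp_T + amp_B)**2

# Which-path measurement made (state collapsed to |T> or |B>): add
# probabilities instead -> the fringes disappear.
p_measured = np.abs(amp_T)**2 + np.abs(amp_B)**2

print("fringe contrast without measurement:", p_coherent.max() - p_coherent.min())
print("fringe contrast with measurement:   ", p_measured.max() - p_measured.min())

The first number comes out near 2 and the second near 0, which is the "interference pattern goes away" statement in the answer above, stripped of everything but the bookkeeping.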
Home Vision Energy Solutions Why Now? Latest News Research Links The Unnecessary Energy Crisis: How to Solve It Quickly T. E. Bearden, LTC, U.S. Army (Retired) Director, Association of Distinguished American Scientists (ADAS) Fellow Emeritus, Alpha Foundation’s Institute for Advanced Study (AIAS) Final Draft June 24, 2000 Web Site: The World Energy Crisis The world energy crisis is now driving the economies of the world nations There is an escalating worldwide demand for electrical power and transportation, much of which depends on fossil fuels and particularly oil or oil products.  The resulting demand for oil is expected to increase year by year. Recent sharp rises in some U.S. metropolitan areas included gasoline at more than $2.50 per gallon already. At the same time, it appears that world availability of oil may have peaked in early 2000, if one factors in the suspected Arab inflation of reported oil reserves. From now on, it appears that oil availability will steadily decline, slowly at first but then at an increasing pace. Additives to aid clean burning of gasoline are also required in several U.S. metropolitan areas, increasing costs and refinery storage and handling. The increasing disparity between demand and supply-steadily increasing demand for electricity using oil products versus decreasing world supplies of oil, with other factors such as required fuel additives-produces a dramatically increasing cost of oil and oil products. Further, newer supplies of oil must be taken by increasingly more expensive production means. Manipulative means of influencing the price of oil include (i) the ability of OPEC to increase or decrease production at will, and (ii) the ability of the large oil companies to reduce or increase the holding storage of the various oil products, types of fuel, etc. Interestingly, several large oil companies are reporting record profits {[1]}. At the same time, the burgeoning populaces of the major petroleum producers—and their increasing economic needs—press hard for an increasing inflation of oil prices in order to fund the economic benefits. As an example, Saudi moderation of OPEC is vanishing or has already vanished.  The increasing demands of the expanding Saudi Royal Family group and the guaranteed benefits to the expanding populace have overtaken and surpassed the present Saudi financial resources unless the price of OPEC oil is raised commensurately {[2]}. The Federal Reserve contributes directly to the economic problem in the U.S., since it interprets the escalating prices of goods and services (due to escalating energy prices) as evidence of inflation.  It will continue to raise interest rates to damp the economy, further damping U.S. business, employment, and trade.  The Fed has already increased interest rates six times in one year as of this date. International Trade Factors Under NAFTA, GATT {[3]}, and other trade agreements, the transfer of production and manufacturing to the emerging nations is also increasing and trade barriers are lowered. Some 160 emerging nations are essentially exempt from environmental pollution controls, under the Kyoto accords. In these nations, electrical power needs and transport needs are increasing, and will continue to increase, due to the increasing production and movement of goods and the building of factories and assembly plants.  Very limited pollution controls — if any — will be applied to the new electrical plants and transport capabilities to be built in those exempted nations. 
The transfer of manufacturing and production to many of these nations is a transfer to essentially “slave labor” nations. Workers have few if any benefits, are paid extremely low wages, work long hours, and have no unions or bargaining rights. In some of these nations, to pay off their debts many parents sell their children into bondage for manufacture of goods, with 12 to 14 hour workdays being a norm for the children {[4]}. In such regions the local politicians can usually be “bought” very cheaply so that there are also no effective government controls. Such means have set up a de facto return to the feudalistic capitalism of an earlier era when enormous profits could be and were extracted from the backs of impoverished workers, and government checks and balances were nil. The personal view of this author is that NAFTA, GATT, and Kyoto were set in place for this very purpose. As the transfer builds for the next 50 years, it involves the extraction of perhaps $2 trillion per year, from the backs of these impoverished laborers. It would not appear accidental that Kyoto removed the costly pollution control measures from this giant economic buildup that would otherwise have been required. The result will be increased pollution of the biosphere on a grand scale. Ironically, the Environmental Community itself was deceived into supporting the Kyoto accords and helping achieve them, hoping to put controls on biospheric pollution worldwide. In fact, the Kyoto accords will have exactly the opposite effect. Resulting World Economic Collapse Bluntly, we foresee these factors — and others {[5]}{[6] }not covered—converging to a catastrophic collapse of the world economy in about eight years. As the collapse of the Western economies nears, one may expect catastrophic stress on the 160 developing nations as the developed nations are forced to dramatically curtail orders. International Strategic Threat Aspects History bears out that desperate nations take desperate actions. Prior to the final economic collapse, the stress on nations will have increased the intensity and number of their conflicts, to the point where the arsenals of weapons of mass destruction (WMD) now possessed by some 25 nations, are almost certain to be released.  As an example, suppose a starving North Korea {[7]} launches nuclear weapons upon Japan and South Korea, including U.S. forces there, in a spasmodic suicidal response. Or suppose a desperate China — whose long-range nuclear missiles (some) can reach the United States — attacks Taiwan. In addition to immediate responses, the mutual treaties involved in such scenarios will quickly draw other nations into the conflict, escalating it significantly. Strategic nuclear studies have shown for decades that, under such extreme stress conditions, once a few nukes are launched, adversaries and potential adversaries are then compelled to launch on perception of preparations by one’s adversary.  The real legacy of the MAD concept is this side of the MAD coin that is almost never discussed. Without effective defense, the only chance a nation has to survive at all is to launch immediate full-bore pre-emptive strikes and try to take out its perceived foes as rapidly and massively as possible. As the studies showed, rapid escalation to full WMD exchange occurs. Today, a great percent of the WMD arsenals that will be unleashed, are already on site within the United States itself {[8]}. 
The resulting great Armageddon will destroy civilization as we know it, and perhaps most of the biosphere, at least for many decades. My personal estimate is that, beginning about 2007, on our present energy course we will have reached an 80% probability of this “final destruction of civilization itself” scenario occurring at any time, with the probability slowly increasing as time passes. One may argue about the timing, slide the dates a year or two, etc., but the basic premise and general time frame holds. We face not only a world economic crisis, but also a world destruction crisis. So unless we dramatically and quickly solve the energy crisis—rapidly replacing a substantial part of the “electrical power derived from oil” by “electrical power freely derived from the vacuum”—we are going to incur the final “Great Armageddon” the nations of the world have been fearing for so long.  I personally regard this as the greatest strategic threat of all times—to the United States, the Western World, all the rest of the nations of the world, and civilization itself {[9]} {[10]}. What Is Required to Solve the Problem? To avoid the impending collapse of the world economy and/or the destruction of civilization and the biosphere, we must quickly replace much of the “electrical energy from oil” heart of the crisis at great speed, and simultaneously replace a significant part of the “transportation using oil products” factor also. Such replacement by clean, nonpolluting electrical energy from the vacuum will also solve much of the present pollution of the biosphere by the products of hydrocarbon combustion. Not only does it solve the energy crisis, but it also solves much of the environmental pollution problem. The technical basis for that solution and a part of the prototype technology required, are now at hand. We discuss that solution in this paper To finish the task in time, the Government must be galvanized into a new Manhattan Project {[11]} to rapidly complete the new system hardware developments and deploy the technology worldwide at an immense pace. The 2003 date appears to be the critical “point of no return” for the survival of civilization as we have known it. Reaching that point, say, in 2005 or 2006 will not solve the crisis in time. The collapse of the world economy as well as the destruction of civilization and the biosphere will still almost certainly occur, even with the solutions in hand. A review of the present scientific and technical energy efforts to blunt these strategic threat curves, immediately shows that all the efforts and indeed the conventional scientific thinking) are far too little and far too late. Even with a massive effort on all of the “wish list” of conventional projects and directions, the results would be insufficient to prevent the coming holocaust. As one example, the entire hot fusion effort has a zero probability of contributing anything of significance to the energy solution in the time frame necessary. Neither will windmills, more dams, oil from tar sands, biofuels, solar cells, fuel cells, methane from the ocean bottom, ocean-wave-powered generators, more efficient hydrocarbon combustion, flywheel energy storage systems, etc. All of those projects are understandable and “nice”, but they have absolutely zero probability of solving the problem and preventing the coming world economic collapse and Armageddon. Those conventional approaches are all “in the box” thinking, applied to a completely “out of the box” problem unique in world history. 
The conventional energy efforts and thinking may be characterized as essentially “business as usual but maybe hurry a little bit.”They divert resources, time, effort, and funding into commendable areas, but areas which will not and cannot solve the problem. In that sense, they also contribute to the final Armageddon that is hurtling toward us {[12]}. If we continue conventionally and with the received scientific view, even with massively increased efforts and a Manhattan Project, we almost certainly guarantee the destruction of civilization as we know it, and much of the biosphere as well. Bluntly, the only viable option is to rapidly develop systems which extract energy directly from the vacuum and are therefore self-powering, like a windmill in the wind {[13] }. Fortunately, analogous electrical systems—open systems far from thermodynamic equilibrium in their exchange with the active vacuum—are permitted by the laws of physics, electrodynamics {[14]} and thermodynamics {[15]}. Such electrical systems are also permitted by Maxwell’s equations, prior to their arbitrary curtailment by Lorentz symmetrical regauging {[16]} {[17]} {20}. The good news was that the little mathematical trick by Lorentz made the resulting equations much easier to solve (for the selected “subset” of the Maxwell-Heaviside systems retained). However, the bad news is that it also just arbitrarily discarded all Maxwellian EM systems far from thermodynamic equilibrium (i.e., asymmetrical and in disequilibrium) with respect to their vacuum energy exchange. So the bad news is that Lorentz arbitrarily discarded all the permissible electrical power systems analogous to a windmill in a wind, and capable of powering themselves and their loads. All our energy scientists and engineers continue to blindly develop only Lorentz-limited electrical power systems. The good news is that we now know how to easily initiate continuous and powerful “electromagnetic energy winds” from the vacuum at will. Once initiated, each free EM energy wind flows continuously so long as the simple initiator is not deliberately destroyed. The bad news is that all our present electrical power systems are designed and developed so that they continually kill their “energy winds” from the vacuum faster than they can collect some of the energy from the winds and use it to power their loads. But the good news is that we now know how to go about designing and developing electrical power systems which (i) initiate copious EM energy flow “winds” in the vacuum, (ii) do not destroy these winds but let them continue to freely flow, and (iii) utilize these freely-flowing energy winds to power themselves and their loads. So we have already solved the first half of the energy crisis problem {[18]} {[19]}: We can  produce the necessary “EM energy wind flow” in any amount required, whenever and wherever we wish, for peanuts and with ridiculous ease. We can insure that, once initiated, the electromagnetic energy wind flows indefinitely or until we wish to shut it off. A tiny part of the far frontier of the scientific community is also now pushing hard into catching and using this available EM energy from the vacuum {[20]}. However, they are completely unfunded and working under extremely difficult conditions {[21]}. In addition, there are more than a dozen appropriate processes already available (some are well-known in the hard literature), which can be developed to produce the new types of electrical energy systems {[22]}. 
What Must Be Done Technically We have about two and a half years to develop several different types of systems for the several required major applications — and particularly the following: (1)  self-powering open electrical power systems extracting their electrical energy directly from the active vacuum and readily scalable in size and output, (2)  burner systems {[23]} to replace the present “heater” elements of conventional power plants, increasing the coefficient of performance (COP) {[24]} of those altered systems to COP>1.0, and perhaps to COP = 4.0, (3)  specialized self-powering engines to replace small combustion engines { [25]}, (4)  self-regenerating, battery-powered systems enabling practical electric automobiles, based on the Bedini {[26]} process, (5)  Kawai COP>1.0 magnetic motors {[27]} with clamped feedback, powering themselves and their loads, (6)  magnetic Wankel engines {[28]} with small self-powering batteries, which enable a very practical self-powering automotive engine unit for direct replacement in present automobiles, (7)  permanent magnet motors such as the Johnson {[29]} approach using self-initiated exchange force pulses {[30]} in nonlinear magnetic materials to provide a nonconservative field, hence a self-powering unit, (8)  iterative retroreflective EM energy flow systems which intercept and utilize significant amounts of the enormous Heaviside {[31]} which surrounds every electrical circuit but is presently ignored, (9)  Iterative phase conjugate retroreflective systems which passively recover and reorder the scattered energy dissipated from the load, and reuse the energy again and again { [32]}, (10)  Shoulders’ charge cluster devices {[33]} which yield COP>1.0 by actual measurement, (11)  self-exciting systems using intensely scattering optically active media and iterative asymmetrical self-regauging {[34]}{[35]} {[36]} {67}, (12)  true negative resistors such as the Kron {[37]} and Chung {[38]} negative resistors, the original point-contact transistor { [39]} which can be made into a negative resistor, and the Fogal negative resistor semiconductor, and (13)  overunity transformers using a negative resistor bypass across the secondary, reducing the back-coupling from secondary to primary and thus lowering the dissipation of energy in the primary {[40]}. What Must Be Done for Management and Organization To meet the critical 2003 “point of no return” milestone, the work must be accomplished under a declared National Emergency and a Presidential Decision Directive. The work must be amply funded, with authority—because of the extreme emergency—to utilize any available patented processes and devices capable of being developed and deployed in time, with accounting and compensation of the inventors and owners separately. As an example, two of the above mentioned devices — the Kawai engine and the magnetic Wankel engine — can be quickly developed and produced en masse. However, they have been seized by the Japanese Yakuza {[41]} {[42]} {[43]} and are being held off the world market.  The two devices are quite practical and can be developed and manufactured with great rapidity.  As an example, two models of the Kawai engine were tested by Hitachi to exhibit COP = 1.4 and COP = 1.6 respectively.  Use of these two inventions, under U.S. Government auspices, will greatly contribute to solving a significant portion of the transportation power problem, at low risk for this part of the solution.  
Use of these two inventions cannot be obtained by normal civil means, due to the involvement of the Yakuza. The technical part of the project to solve the energy crisis is doable in the required time — but just barely, and only if we move at utmost speed. Thanks to more than 20 years of work on unconventional solutions to the problem, much of the required solution is already in hand, and the project can go forward at top speed from the outset. The remaining managing and organizing problem is to marshal the necessary great new Manhattan Project as a U.S. government project operating under highest national priority and ample funding. The Project must be a separate Agency, operating directly under the appropriate Department Secretary and reporting directly to the President (through the Secretary) and to a designated Joint Committee of the Senate and the House. The selection of the managers and directors must be done with utmost care; else, they themselves will become the problem rather than the solution. We strongly stress that here even the most highly qualified managerial scientist may have to be disqualified because of his or her own personal biases and dogmatic beliefs. Leaders and scientists are required who will run with the COP>1.0 ball on a wide front. The compelling authority to assign individual tasks to the National Laboratories and other government agencies is required, but under no circumstances can the project be placed under the control of the national laboratories themselves. Laboratories such as Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Oak Ridge National Laboratory are far too committed to their entrenched Big Science projects and the resulting bias against electrical energy from the vacuum. Assigning management of the project to them would be setting the foxes to mind the hen house, and would guarantee failure. The agencies whose favored approaches are responsible for the present energy crisis cannot be expected to direct an effective solution to it that is outside their managerial and scientific ansatz and totally against their institutional and professional biases. If they are allowed to direct the project, then implacable scientists, who adamantly oppose electrical energy from the vacuum from the get-go, will hamstring and destroy the project from its inception. Not only will they fiddle while Rome burns, but they will help burn it.

Enormous EM Energy Flow Is Easily Extracted From the Active Vacuum

There is not now, and there never has been, a problem in readily obtaining as much electromagnetic energy flow from the vacuum as we wish. Anywhere. Anytime. For peanuts. Every electrical power system and circuit ever built already does precisely that {[44]} {[45]}. But almost all the vast EM energy flow that the present flawed systems extract from the vacuum is unaccounted and simply wasted. It is wasted by the conventional, seriously flawed circuits and systems designed and built by our power system scientists and engineers in accord with a terribly flawed, 136-year-old set of electrodynamics concepts and foundations. Specifically, it is wasted because Lorentz discarded it a century ago {45}. Since then, everyone has blindly followed Lorentz's lead. Our electrical scientists and engineers have not yet even discovered how a circuit is powered! They have no valid concept of where the electrical energy flowing down the power line actually comes from, because they do not model the interaction that provides it {[46]} in their theory and equations.
This vast scientific "conspiracy of ignorance" is completely inexplicable, because the actual source of the EM energy powering the external circuits has been known (and rigorously proven) in particle physics for nearly half a century! However, it has not yet even been added into the fundamental electrical theory used in designing and building power systems. We have a scientific mindset problem of epic proportions, compounded by scientific negligence and electromagnetics dogma. I sometimes refer to this as an unwitting "conspiracy of ignorance", where I use the word "ignorance" technically as meaning "unaware". The phrase is certainly not intended to be pejorative. So we do not have an energy problem per se. We have an unwitting conspiracy-of-scientific-ignorance problem. Because of its bias, our electrical scientific community also strongly resists updating the 136-year-old electrodynamics foundations, even though much of that foundation is known to be seriously flawed and even incorrect {[47]} {[48]}. Indeed, organized science has always fiercely resisted strong innovation. As Max Planck {[49]} so eloquently put it, "An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning." Arthur C. Clarke {[50]} expressed it succinctly for our more modern scientific community, as follows: "If they [quantum fluctuations of vacuum] can be [tapped], the impact upon our civilization will be incalculable. Oil, coal, nuclear, hydropower, would become obsolete — and so would many of our worries about environmental pollution." "Don't sell your oil shares yet — but don't be surprised if the world again witnesses the four stages of response to any new and revolutionary development: 1. It's crazy! 2. It may be possible — so what? 3. I said it was a good idea all along. 4. I thought of it first." With respect to extracting and using EM energy from the vacuum, our present scientific community is mostly in Clarke's phase 1. A few scientists are in phase 2 but surmise that "it may perhaps be the science of the next century." We do not have a century remaining. We have two and a half years. For nearly half a century, (i) the active vacuum, (ii) the vacuum's energetic interaction with every dipole, and (iii) the broken symmetry of the dipole {[51]} in that energetic interaction {55} have been known and proven in particle physics. These proven COP>1.0 vacuum energy mechanisms have not been incorporated into the electrodynamic theory used to design and build electrical power and transportation systems {[52]}. We are still waiting for the "old scientific opponents" — adamantly opposed to the very notion of electrical energy from the vacuum — to "die off and get out of the way." Hence, our present organized scientific community will strongly resist funding of a vigorous program to gather all this proven, known physics together and rapidly use it to change and update (modernize) the terribly flawed EM theory and the design of electrical power systems. Most scientists attempting to do this research have had to proceed on their own. They have undergone vicious and continual ad hominem attacks, lost research funds and tenure, been unable to get their papers published, and in fact risked being destroyed by the scientific community itself {21}.
The bottom line is this: Left to sweet reason, because of the depth of its present bias the scientific community is totally incapable of reacting to the problem in time to prevent the destruction of civilization. If we wish to survive, government will have to directly force the scientific community to do the job, over careers and "dead bodies" (so to speak) if necessary. But first the government itself must be motivated to do so. Only the environmental community has the clout, financial resources, and activists to motivate the government in the extremely short time in which it must be accomplished. So it would seem that the most urgent task is to educate and wake up the environmental community. It has been "had", and it has been "had" since the beginning.

Understanding What Powers Electrical Circuits

Let us cut through the scientific errors in how electrical power systems are presently viewed. Batteries and generators themselves do not power circuits. They never have, and they never will. They dissipate their available internal energy {[53]} to do one thing and one thing only: forcibly separate their own internal charges to form a "source dipole" {[54]}. Once the dipole has been formed, the dipole directly extracts electromagnetic energy from the active vacuum {[55]}, pouring the extracted EM energy out from the terminals of the battery or generator. Batteries and generators make a dipole, nothing else. All the fuel ever burned, all the nuclear fuel rods ever consumed, and all the chemical energy ever expended by batteries did nothing but make dipoles. None of all that destructive activity, of itself, ever added a single watt to the power line. Once made, the dipole then extracts EM energy from the seething vacuum and pours it out down the circuit and through all the surrounding space around the circuit {56}. A little bit of that energy flow strikes the circuit and enters it by being deflected (diverged) into the wires {57}. That tiny bit of intercepted energy flow that is diverged into the circuit then powers the circuit (its loads and losses) {58}. All the rest of that huge energy flow around the circuit just roars on off into deep space and is wasted.

The Dipole Extracts Enormous Energy from the Vacuum

The outflow of EM energy extracted from the vacuum by a small dipole is enormous. It fills all space surrounding the attached external circuit (e.g., surrounding the power lines attached to a power plant generator) {[56]}. In the attached circuits, the electrical charges on the surfaces of the wires are struck by the mere edge of the violent flow of EM energy passing along those surfaces. The resulting tiny "intercepted" part {[57]} of the EM energy flow is deflected into the wires, very much like placing one's hand outside a moving automobile and diverting some of the wind into the car. The deflected energy that enters the wires is the Poynting component of the energy flow. It is not the entire EM energy flow by any means, but only a very, very tiny component of it {[58]}. Only that tiny bit of the energy flow that is actually diverged into the wires is used to power the circuit and the loads. All the rest of the enormous energy flow present and available outside the circuit is just ignored and wasted. A nominal 1-watt generator, e.g., is actually one whose external circuit can "catch" only one watt of its output.
The generator's actual total output — in the great flow which fills all space around the external circuit and is not intercepted and used — is something on the order of 10 trillion watts!

Our Scientists and Engineers Design Dipole-Destroying Systems

Here is the most inane thing of all. Precisely half of the small amount of energy that is actually caught by the circuit is used to destroy the dipole! That half of the intercepted energy does not power the load, nor does it power losses in the external circuit. Instead, it is used to directly scatter the dipole charges and destroy the dipole. Our scientists and engineers have given us the ubiquitous closed current loop circuit {[59]}, which destroys the dipole faster than it powers the load. In short, the scientists and engineers design and build only those electrical power systems that "continuously commit suicide" by continuously destroying the source dipole that is extracting the vacuum energy and emitting it out along the circuit to power everything in the first place. So now we have the real picture: our scientists and engineers design and build electrical power systems that intercept and use only a tiny fraction of the vast EM energy flow available. They also design and build only systems that destroy their source dipole faster than they power their loads. If one does not destroy the dipole once it is made, it will continue to freely extract copious EM energy flow from the vacuum, indefinitely, pouring out a stupendous flow of EM energy. As an example, dipoles in the original matter formed in the Big Bang at the beginning of the universe have been steadily extracting EM energy from the vacuum and pouring it out for about 15 billion years. The energy problem is not due to any inability to produce copious EM energy flows at will — as much as one wishes, anywhere, anytime. Every dipole already does this, including the dipole in every EM power system ever built. The energy problem is due to the complete failure (i) to intercept and utilize more of the vast energy flows made available by the common dipole, and (ii) to do so without using the present inanely designed circuits. These circuits use half their collected energy to destroy the dipole that is extracting the energy flow from the vacuum in the first place! This is part of the "conspiracy of scientific ignorance" mentioned earlier.

Ignoring the Vacuum as the Source of Electrical Energy in All Circuits

In their conventional theoretical models, our present electrical power system scientists and engineers do not even include the vacuum interaction or the dipole's extraction of EM energy from the vacuum. They simply ignore — and do not model — what is really powering every electrical system they build. Consequently, we reiterate that our electrical scientists have never even discovered how an EM circuit is powered — although it has been discovered and known for nearly 50 years in particle physics. All the hydrocarbons ever burned, all the water over all the dams ever built, all the nuclear fuel rods ever expended in all the nuclear power plants, added not a single watt to the power line. Instead, all that expense, effort, pollution, and destruction of the biosphere was and is necessary only to keep adding internal energy to the generator — so that it can keep continually rebuilding its source dipole, which is continually destroyed by the inane circuits that the power system scientists and engineers keep designing and building for us.
It takes as much energy input to the generator to restore the dipole as it took the circuit to destroy the dipole. Thus all the systems our scientists and engineers design and build require that we continually input more energy to restore the dipole than the circuit dissipates in the load. Our technical folks thus happily design and give us systems which can and will only exhibit COP<1.0 — thus continuing to require that we ourselves steadily provide more energy to the system to continually rebuild its dipole than the inane, masochistic system uses to power its load. In short, we pay the power companies (and their scientists and engineers) to deliberately engage in a giant wrestling match inside their generators and lose. That is not the way to run the railroad! One is reminded of one of the classic comments by Churchill: "Most men occasionally stumble over the truth, but most pick themselves up and continue on as if nothing had happened." It seems that not very many energy system scientists and engineers have "stumbled over the truth" as to what really powers their systems, and how inanely they are really designing them.

Electrical Energy Required from Hydrocarbon Burning Drives the Problem

The heart of the present environmental pollution problem is the ever-increasing need for electrical energy obtained from the burning of hydrocarbon fuels and/or nuclear power stations. The increasing production of electrical power to fill the rising needs increasingly pollutes the environment, including the populace itself (lungs, bodies, etc.). Almost every species on earth is affected, and as a result every year some species become extinct. Environmental pollution includes pollution of the soil, fresh and salt water, and the atmosphere by a variety of waste products. Given global warming, it also includes excess heat pollution in addition to chemical and nuclear residues. Under present procedures, the electrical energy problem is exacerbated by decreasing available oil supplies, which are believed to have peaked this year, with a projected decline from now on. But really, the electrical energy problem is due to the scientific community's adamant defense and use of electrical power system models and theories that are 136 years old {[60]} in their very foundations. These models and theories are riddled with errors and non sequiturs, and are seriously flawed. The scientific community has not even recognized the problem, much less the solution. In fact, it does not even intend to recognize the problem, even though the basis for it has been known in particle physics for nearly 50 years. As Bunge {[61]} put it some decades ago: "…it is not usually acknowledged that electrodynamics, both classical and quantal, are in a sad state." The scientific community has done little to correct that fundamental problem since Bunge made his wry statement. Let us put it very simply: The most modern theory today is modern gauge field theory. In that theory, freedom of gauge is assumed from the get-go. Applied to electrodynamics, this means — as all electrodynamicists have assumed for the last century or longer — that the potential energy of an EM system can be freely changed at will. In other words, in theory it costs nothing at all to increase the EM energy collected in a system; this is merely "changing the voltage", which does not require power. That is, we can "excite" the system with excess energy (actually taken from the vacuum), at will. For free. And the best science of the day agrees with that statement.
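For reference, and purely as a statement of the standard formalism being invoked: in conventional notation the gauge freedom of the potentials, and the symmetrical ("Lorentz") condition used to fix it, read

\mathbf{A} \;\rightarrow\; \mathbf{A} + \nabla\Lambda, \qquad \varphi \;\rightarrow\; \varphi - \frac{\partial \Lambda}{\partial t}, \qquad \nabla\cdot\mathbf{A} + \frac{1}{c^{2}}\frac{\partial \varphi}{\partial t} \;=\; 0 ,

where \Lambda is an arbitrary, twice-differentiable gauge function. The fields \mathbf{E} and \mathbf{B} are unchanged under the first two substitutions; this invariance is what the text is referring to when it speaks of changing the potentials of a system freely, at will.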
It also follows that we can freely change the excitation energy again, at will. In short, we can dissipate that excess energy freely and at will. Without cost. Well, this means that we are free — by the laws of nature, physics, thermodynamics, and gauge field theory — to dissipate that free excess potential energy in an external load, thus doing "free work". Since none of the systems our energy scientists and engineers build for us are doing that, it follows a priori that the fault lies entirely in their own system design and building. It does not lie in any prohibition by nature or the laws of physics. A priori, then, the present COP<1.0 performance of our electrical power systems is a monstrosity and the direct fault of our scientists and engineers. We cannot blame the laws of nature or the laws of physics. The present energy crisis, then, is due totally to that "conspiracy of ignorance" we referred to. It is maintained by the scientific community today, and it has been maintained by it for more than 100 years. This is the real situation that the environmentalists must become aware of, if they are to see the correct path into which their energies and efforts should be directed — to solve both the energy crisis and the problem of gigantic pollution of the biosphere.

Outside Intervention Must Forcibly Move Energy Science Forward

Unless outside intervention occurs forcibly, the scientific community's lock-up of research funds for "in the box" energy research may result in the economic collapse of the Western World in perhaps as little as eight years. Let us examine the gist of the problem facing us. Suppose we launch a crash program to develop, manufacture, deploy, and employ the new "vacuum powered" systems. Once the new self-powering systems are developed and ready to roll off the production lines en masse, it will require a minimum of five years worldwide to sufficiently alter the "electrical energy from oil" demand curve so that economic collapse can be averted. In turn, this means that the new systems must be ready to roll off the manufacturing lines by the end of 2003. While this is a very tight schedule, it can be done if we move rapidly. The necessary scientific corrections along the lines indicated in this paper can be quickly applied to solve the electrical energy problem permanently and economically, given a Manhattan-type project under a Presidential Decision Directive together with a Presidential declaration of a National Energy Emergency. In a paper {[62]} to be published in Russia in July 2000, this researcher has proposed some 15 viable methods for developing new "self-powering" systems that power themselves and their loads with energy extracted from the vacuum. Several of these systems can be developed very rapidly, and can be easily mass-produced. A second paper {[63]} will be published in the same proceedings, revealing the Bedini method for invoking a negative resistor inside a storage battery. The negative resistor freely extracts vacuum energy and adds it to both the battery-recharging function and the load-powering function. In Bedini's negative resistor method, the ion current inside the battery is decoupled (dephased) from the electron current between the outer circuit and the external surfaces of the battery plates. This allows the battery to be charged (with increased charging energy) at the same time as the load is powered with increased current and voltage.
At my specific request, both papers were thoroughly reviewed by qualified Russian scientists, and their premises passed review successfully. A third paper {[64]} gives the exact giant negentropy mechanism by which the dipole extracts such enormous energy from the vacuum. We will further explain that mechanism below.

Conventional Approaches: Too Little, Too Late

It appears that the Environmental Community itself has finally realized that the present scientific approaches and research are simply too little and too late. Further, the conventional approaches are largely "in the box" thinking applied to an "out of the box" problem. We leave it to others such as Loder {[65]} to succinctly summarize the shortfalls of these present solutions. Loder, e.g., particularly and incisively explains how the problem with automobiles breaks down. In fact, no single COP>1.0 approach will suffice by itself. Several solutions, each for a different application, must be developed and deployed simultaneously. As an example, it is possible to create certain dipolar phenomena in plasmas produced in special burners, such that the dipoles extract substantial excess EM energy from the vacuum. Output of the excess energy produces ordinary excess heat well beyond what the combustion process alone will yield. Given a Manhattan-type project, the inventor of that process (who already has working models and rigorous measurements) could rapidly be funded and staffed to develop a series of replacement burners (heaters). They could be used in existing electrical power plants to heat the water to make the steam for the steam turbines turning the shafts of the generators. The entire remainder of the power system, grid, etc. could be left intact. Some fuel would still be burned, but far less would be consumed in order to furnish the same required heat output. In short, a rather dramatic reduction in power plant hydrocarbon combustion could be achieved — in the present electrical power plants with minimum modification, and in the necessary time frame — while maintaining or even increasing the electrical energy output of the power systems. We believe the inventor would fully participate in a government-backed Manhattan-type energy program where a National Emergency has been declared, given a U.S. government guarantee that his process, equipment, and inventions will not be confiscated {[66]}. Another candidate for quick development and enormous application is the point-contact transistor operated as a true negative resistor {39}. Two other processes that can be developed for massive production in less than two years are (i) the Kawai process {27} and (ii) the magnetic Wankel process {28}. In addition, the Johnson {29} process can be developed and readied for manufacture in the same time frame, given a full-bore, sophisticated laboratory team. There are other processes {[67]} {62} {63} which can also be developed rapidly, to provide major contributions in solving their parts of the present "electrical energy from hydrocarbon combustion" problem.

Giant Negentropy and a Great New Symmetry Principle

We now summarize some recent technical discoveries by the present author that bear directly upon the problem of extracting and using copious EM energy flows from the vacuum. Any dipole has a scalar potential between its ends, as is well known (the potential in question is simply the ordinary dipole potential of electrostatics; see the reference expression below). Extending earlier work by Stoney {[68]}, in 1903 Whittaker {[69]} showed that the scalar potential decomposes into — and identically is — a harmonic set of bidirectional longitudinal EM wavepairs.
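For reference only — a standard electrostatics result, stated here on the editor's own initiative so the reader has the quantity in front of them — the scalar potential of a small dipole of moment p = qd, at a distance r much larger than the charge separation d and at polar angle \theta from the dipole axis, is

V(r,\theta) \;=\; \frac{1}{4\pi\varepsilon_{0}}\,\frac{p\cos\theta}{r^{2}} .

It is this potential (more precisely, the potential between the separated charges) whose Whittaker-type decomposition the text goes on to discuss.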
Each wavepair comprises a longitudinal EM wave (LEMW) and its phase-conjugate LEMW replica. Hence, the formation of the dipole actually initiates the ongoing production of a harmonic set of such biwaves in 4-space {[70]}. We separate the Whittaker waves into two sets: (i) the convergent phase conjugate set, in the imaginary plane, and (ii) the divergent real wave set, in 3-space. In 4-space, the 4th dimension may be taken as -ict. The only variable in -ict is t. Hence the phase conjugate waveset in the scalar potential's decomposition is a set of harmonic EM waves converging upon the dipole in the time dimension, as a time-reversed EM energy flow structure inside the structure of time {[71]}. Or, one can just think of the waveset as converging upon the dipole in the imaginary plane {[72]} — a concept similar to the notion of "reactive power" in electrical engineering. The divergent real EM waveset in the scalar potential's decomposition is then a harmonic set of EM waves radiating out from the dipole in all directions at the speed of light. As can be seen, there is perfect 4-symmetry in the resulting EM energy flows, but there is broken 3-symmetry, since there is no observable 3-flow EM energy input to the dipole. Our professors have taught us that output energy flow in 3-space from a source or transducer must be accompanied by an input energy flow in 3-space. That is not true. It must be accompanied by an input energy flow, period. That input can be an energy flow in the 4th dimension, time — or we can consider it as an inflow in the imaginary plane. The flow of energy must be conserved, not the dimensions in which the flow exists. There is no requirement by nature that the inflow of EM energy must be in the same dimension as the outflow of EM energy. Indeed, nature prefers to do it the other way! Simply untie nature's foot from the usually enforced extra condition of 3-space energy flow conservation. Then nature joyfully and immediately sets up a giant 4-flow conservation, ongoing. Enormous EM energy is inflowing from the imaginary plane into the source charge or dipole, and is flowing out of the source charge or dipole in 3-space, at the speed of light, and in all directions. In other words, nature then gladly gives us as much EM energy flow as we need, indefinitely — just for paying a tiny little bit initially to "make the little dipole." After that, we never have to pay anything again, and nature will happily keep on pouring out that 3-flow of EM energy for us. This is the giant negentropy mechanism I uncovered, performed in the simplest way imaginable: just make an ordinary little dipole. We may interpret the giant negentropy mechanism in electrical engineering terms {[73]}. The EM energy flow in the imaginary plane is just incoming "pure reactive power" in the language of electrical engineering, and the outgoing EM energy flow in the real plane (3-space) is "real power" in the same language (see the standard definition below). So the dipole is continuously receiving a steady stream of reactive power, transducing it into real power, and outputting it as a continuous outflow of real EM power. Further, there is a perfect 1:1 correlation between the convergent waveset in the imaginary plane and the divergent waveset in 3-space. This perfect correlation between the two sets of waves and their dynamics represents a deterministic re-ordering of a fraction of the 4-vacuum energy. This re-ordering, initiated by the formation of the dipole, spreads radially outward at the speed of light, continuously.
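For reference, the electrical engineering usage being borrowed in the analogy above is the standard textbook one, added here only for convenience: in sinusoidal steady state, with rms phasor voltage V and current I, the complex power is

S \;=\; V I^{*} \;=\; P + jQ ,

where P (watts) is the real, dissipated power and Q (volt-amperes reactive) is the reactive power that circulates without net dissipation over a cycle. The text's analogy maps its "imaginary-plane" inflow onto Q and the radiated 3-space outflow onto P.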
This mechanism clearly shows that (i) we can initiate reordering of a usable fraction of the vacuum's energy at any place, anytime, easily and cheaply (we need only form a simple dipole), and (ii) the process continues indefinitely, so long as the dipole exists, without the operator inputting a single additional watt of power. This is a very great benefit. So long as the dipole exists, this re-ordering continues and a copious flow of observable, usable EM energy pours from the dipole in all directions at the speed of light. This is the full solution to the first half of the energy crisis, once and for all.

Ansatz of the Major Players

To appreciate the difficulty in implementing the solution to the energy crisis, one must be aware of the characteristics of the major communities whose dynamics and interactions determine the outcome. Accordingly, we summarize our personal assessment of the present "status" and "awareness" of the various communities involved. We do that by attempting to express the overall "ansatz" of each specific community.

Scientific Community

For the most part, the organized scientific community varies from highly resistant to openly hostile toward any mention of extracting copious EM energy from the active vacuum. The "Big Nuclear" part of the community is particularly adamant in this respect, as witness its ferocious onslaught on the fledgling and struggling cold fusion researchers — a ferocity of scientific attack seldom seen in the annals of science {[74]} {[75]}. The scientific community also largely suppresses {[76]} or severely badgers scientists attempting to advance electrodynamics to a more modern model, suitable to the needs of the 21st century and the desperate need for cheap, clean, nonpolluting electrical power worldwide {21}. The community still applies classical equilibrium thermodynamics to the electrical part of all its electrical power systems, even though every EM system is inherently a system far from equilibrium with the active vacuum environment, and a different thermodynamics applies. Only if the system is specifically so designed — e.g., so that during the dissipation of its excitation energy it enforces the Lorentz symmetrical regauging condition — will the system behave as a classical equilibrium system. The thermodynamics of open dissipative systems is well known {[77]}. Such a system is permitted to (1) self-order, (2) self-oscillate or self-rotate, (3) output more energy than the operator inputs (the excess energy being freely received from the active environment), (4) power itself and its load simultaneously (all the energy being taken from the active environment, similar to a windmill's operation), and (5) exhibit negentropy. Our present electrical power systems do none of these five things, even though each is an open system in violent energy exchange with the vacuum. A priori, that reveals that it is the scientific model and the engineering design that are at fault. It is not any law of nature or principle of physics that prevents self-powering open electrical power systems. Instead, it is the scientific community and its prevailing mindset against extracting and using EM energy from the vacuum.

Environmental Community

In the past, the environmental community has been overly naïve with respect to physics, and particularly with respect to electrical physics. Its science advisors have come mostly from the conservative "in the box" scientific community.
Hence, the community has failed to realize that COP>1.0 electrical power systems are normal and permitted by the laws of nature and the laws of physics. They have no inkling that Heaviside discovered — in the 1880s! — the enormous unaccounted EM energy pouring from the terminals of any battery or generator. They are unaware that Poynting considered only the tiny component of the energy flow that enters the circuit. They are also unaware that, completely unable to explain the astounding enormity of the EM energy flow when the nondiverged (nonintercepted) Heaviside component is accounted for, Lorentz {18} simply used a little procedure to arbitrarily discard that troublesome Heaviside "dark" (unaccounted) component. Lorentz reasoned that, since the huge dark energy flow component missed the circuit entirely, it "had no physical significance." This is like arguing that none of the wind on the ocean has any physical significance, except for that small portion of the wind that strikes the sail of one's own sailboat. It ignores the obvious fact that whole fleets of additional sailboats could also be powered by that "physically insignificant" wind component that misses one's own sailboat entirely. Nonetheless, electrodynamicists continue to use Lorentz's little discard trick, and call the feeble Poynting energy flow component caught by the circuit the entire EM energy flow connected with it. This is like arguing that the component of wind hitting the sails of one's own sailboat is the entire great wind on the ocean. As a result, the environmental community has failed to grasp the technical reason for the energy crisis and the increasing pollution of the biosphere. They have been deceived and manipulated into thinking that conventional organized science is giving them the very best technical advice possible on electrical power systems. The environmentalists have been and are further deceived into believing that the conventional scientific community is advocating and performing the best possible scientific studies and developments for trying to solve the energy crisis. Of major importance, the environmental community itself has been deceived as to the exact nature of the energy flow in and around a circuit, the vastness of the unaccounted energy flow (or even that any of the energy flow is deliberately unaccounted), and the fact that this present but unaccounted EM energy flow can be intercepted and captured for use in powering loads and developing self-powering systems. Worst of all, the environmental community has been deceived as to what powers every electrical load and EM circuit. They have been deceived into believing that burning all those hydrocarbons, using those nuclear fuel rods, building those dams and windmills, and putting out solar cell arrays are necessary and the best that can be done. In short, they have been smoothly diverted from solving the very problem — the increasing pollution and destruction of the biosphere — they are striving to rectify. However, their continued street demonstrations show that many environmentalists now suspect that much of the world's continued policy of "the rich get richer and the poor get poorer" in international trade agreements is deliberately planned and implemented {[78]}. They perceive the implementation as being to the advantage of a favored financial class and resting on the exploitation of the poorer laboring classes in disadvantaged nations.
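For reference, the quantity conventionally taken to describe EM energy flow — the bookkeeping the text says captures only the "caught" portion — is the standard Poynting vector and its associated energy balance, reproduced here from textbook electrodynamics purely for the reader's convenience:

\mathbf{S} \;=\; \mathbf{E}\times\mathbf{H}, \qquad \frac{\partial u}{\partial t} + \nabla\cdot\mathbf{S} \;=\; -\,\mathbf{J}\cdot\mathbf{E},

where u is the electromagnetic field energy density and \mathbf{J}\cdot\mathbf{E} is the power per unit volume delivered to the charges. The "Heaviside component" discussed above is the text's name for an energy flow it holds to exist in addition to this accounting.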
Electrical Power Community

The electrical power community's ansatz is much the same as the scientific community's, and has been inane for a century. Industries also acquire their own hidden agendas when serious threats to the industries arise. As an example, a potentially serious problem arose some decades ago when it became apparent that EM radiation from power lines might detrimentally affect people, or at least some people. To put it gently, a great deal of fuss and fury resulted, and a great deal of money was and is spent by the power companies (or through organizations and foundations funded by them) on EM bioeffects research. Not too surprisingly, just about the entire output of this industry-funded research "finds" that there is no problem with powerline radiation {[79]}. Scientists such as Robert Becker {[80]} {[81]} who advocate or show otherwise usually wind up having all their funds cut off, being hounded from their jobs, and — in the case of Becker — being forced to retire early. It is no different in the electrical energy science field {21}.

Storage Battery Companies

Battery companies are primarily of much the same outlook and ansatz as the power companies. They have gone to pulse charging of batteries and improved battery chemistry and materials {[82]}. They have no notion that batteries do not power circuits, but only make source dipoles — and it is the source dipole that then extracts EM energy from the vacuum and pours it out into the external circuit. Consequently, they erroneously believe that chemical energy in the battery is expended in order to provide power to the external circuit. Instead, it is expended only to continuously remake the source dipole, which the closed current loop circuit fiendishly keeps destroying faster than the load is powered. They also have not investigated deliberately dephasing and decoupling the major ion current within the battery and between the plates from the electron current between the outside of the plates and the external circuit. Consequently they have no concept of permissible Maxwellian COP>1.0 battery-powered systems. Instead, battery companies, scientists, and engineers still believe — along with the power companies, most electrodynamicists, and the environmental community — that applying the Lorentz symmetrical regauging to the Heaviside-Maxwell equations retains all the Maxwellian systems. It does not. Instead, it arbitrarily discards all the Maxwellian systems which are permitted by the laws of nature and the laws of physics to produce COP>1.0!

University Community

The university community mostly supports the prevailing EM view. It also suffers from the rise of common "greed" in the universities themselves. The professor now must attract external funding (for his research and his graduate students — and especially for the lucrative "overhead" part of the funding, which goes to the university itself). The research funds available for "bidding" via submitted proposals are already cut into "packages" where the type of research to be accomplished in each package is rigorously specified and controlled. Research on COP>1.0 systems is strictly excluded. Dramatic revision of electrodynamics is excluded. Unless the professor successfully bids on and obtains packages and their accompanying funding, he is essentially ostracized and soon discharged or just "parked" by the wayside. Also, if he tries to "go out of the box" in his papers submitted for publication, his peer reviewers will annihilate him and his papers will not be published.
Shortly he will effectively be blacklisted, and it will be very difficult for him to have his submitted papers honestly reviewed, much less published. Again, that means no tenure, no security, and eventual release or "dead-end parking" by the university. When one looks at the "innovative" packages so highly touted, they either (1) are research focused upon some approved thing such as hot fusion — which has spent billions, has yet to produce a single watt on the power line, and cannot do so in any reasonable time before the collapse of the Western economy — or (2) use clever buzzwords for things which are actually "more of the same" and "in the box" thinking, with just some new words or twists thrown in for spin control. Meanwhile, all this makes for a self-policing system which rewards conservatism — conservative publications, conservative research, conservative thinking, conservative teaching, etc. In short, it selects and approves electrical power system research that is "too little, too late" to solve the world energy crisis in time, and ruthlessly rejects all the rest. It also makes for a self-policing system which roots out and destroys (or parks on the sidelines) those professors, graduate students, and post-docs who — given a chance to be highly innovative and "out of the box" researchers — might upset the status quo. In short, the scientific community is itself the greatest arch foe of high innovation, just as Planck indicated. The university generally typifies and reflects that overall attitude, because its outside research funds are controlled and managed by the upper echelons of the organized Big Science community and the governmental community.

Government Community — Technical

The technical part of the U.S. government research community is drawn from the universities, private industry, etc. On the whole it is an even more conservative group than the universities. Again, papers published and funding are the major requirements, within given and largely accepted scientific constraints. Further, the managerial government scientists must compete for funding, annual budgets, etc., and have their own "channel" constraints from on high. At the top levels (such as NSF and NAS), cross-fertilization by the aims and perceptions of the conservative scientific community leaders is achieved. A case in point is the early, vehement attacks on ultrawideband (UWB) radar research. The real reasons for those violent attacks were the prestige and power of the Stealth community at the time — and the fact that UWB radar carried the implication of readily tracking Stealth vehicles. Interestingly, the arch foes of UWB at the time would today have us believe they are "staunch experts" in the UWB field. To understand their remarkable metamorphosis, one need only recall Arthur C. Clarke's words, quoted earlier. In the COP>1.0 EM energy field, we are still rather much at the stage where the UWB researchers started. We are still in the "violent attack, personal insults, character assassination, slander, libel, etc." stage. Sadly, such ad hominem savagery comes from scientists who themselves have no notion of how electromagnetic circuits are actually powered, and who — like ostriches — still have their heads buried in the sand back there in the 1880s, when Lorentz discarded the enormous Heaviside energy flow component.

Government Community — Non-Technical

Here we have a rather mixed situation. The nontechnical person — e.g., a Senator or a Congressperson — is operating under a distinct disadvantage.
In taking the stance that much better electrical power systems can readily be achieved, he or she is in fact opposing almost the entire set of University, Government Technical, Power Company, Battery Company, and Organized Science communities. Further, in most cases his or her technical advisors are themselves from one or another of those communities, and are likely to go back into those communities when the Senator or Congressperson leaves office, or even before. So the Congress and the non-technical government community at large operate at a great disadvantage. As an example, admittedly there are some very misguided unorthodox energy system inventors and scientists out there who, in the guise of furthering COP>1.0 systems, actually contribute to the problem rather than to the solution. A few do not even realize that they cannot properly measure a "spiky" output with an RMS meter! Some are also more interested in selling "dealerships" and "stock" than in furthering the science of COP>1.0 systems. Few have submitted their purported COP>1.0 devices to rigorous testing by an independent, government-certified test laboratory {[83]}. This "noise" seriously dilutes the unconventional scientific community's legitimate efforts in COP>1.0 systems. By playing up such "dilution" and accenting "the crazies", the orthodox scientific community often convinces government nontechnical managers and personnel that the unorthodox COP>1.0 community consists only of lunatics, charlatans, stock-scam artists, and misguided crank inventors. Such, of course, is not the case. A goodly number of reputable, skilled scientists are seriously struggling with the problems of developing COP>1.0 EM power systems and devices. A few are also struggling to develop an adequate theory of such systems. Progress is slowly being made and has been made, in spite of the harassment {[84]}. The independent assessments that Congress once enjoyed with the OTA are no more, because the OTA was abolished. Now the committees, subcommittees, and individual Congresspersons and Senators are largely on their own, with their own staffs and their own technical advisors. Nonetheless, it can be seen by savvy Senators and Congresspersons that the U.S. Ship of State is headed for a great economic bust, and probably the greatest one of all time.

In Conclusion

There is an even more ominous specter looming behind the shadow of the coming great economic collapse. When national economies worldwide are strained to the breaking point — with some of them failing as the price of oil escalates — the conflicts among nations will increase in number and grow in intensity. About a year or so ahead of the "Great Collapse" of the world economies, the intensity and desperation of the resulting national conflicts will have increased to the breaking point. Some 25 nations already have weapons of mass destruction (WMD) — including nuclear warheads; missile, aircraft, boat, and terrorist delivery systems; biological warfare weaponry; and other advanced weapons {9} {10} {[85]} {[86]}. Any knowledgeable person knows that hostile terrorist agents are already on site here in the U.S. {[87]}, and some will have smuggled in their WMDs. It is not too difficult to surmise that some of those missing Russian "suitcase nukes" probably wound up right here in the U.S., hidden in our population centers {[88]}. Or that some of Saddam Hussein's large stock of anthrax has been spirited into the U.S. as well.
As is well known, the threat from weapons of mass destruction is now officially recognized as the greatest strategic threat facing the U.S. It is not a matter of if the WMD weapons will be unleashed, but when. If one transposes that recognized, escalating WMD threat onto the escalating economic pressures worldwide, then another factor comes into play — the dark side of the Mutual Assured Destruction (MAD) concept. We have opted (at least to date) not to defend our populace. The U.S. government has deliberately placed U.S. population centers in a defenseless situation so that their destruction is "assured" once the WMD balloon really goes up. The insanity of the MAD concept is revealed when war preparations by many nations start to be perceived — as they will be, when the conflicts intensify sufficiently and the looming economic collapse tightens the cinch on the nations of the world. Without any protection of its populace, a defending nation has to fire on perception of nuclear preparations by its adversaries if that nation is to have even the slightest chance of surviving. At about that 2007 date, when a nation sees its adversaries preparing WMD and nuclear assets for launch or use in ongoing intense conflicts, at some point that nation must pre-empt and fire massively, or accept its own "assured destruction". The only question in MAD is whether the assured destruction shall be mutual or solitary. So one or more nations will fire, immediately moving all the rest into the "fire on perception" mode. Very rapidly, the situation then escalates to the all-out worldwide exchange so long dreaded. This massive exchange means the destruction of civilization itself, and probably of much of the entire biosphere, for decades or centuries. Such escalation from one or more initial nuclear firings has been shown for decades by all the old strategic nuclear studies. It is common knowledge to strategic analysts, unless one engages in wishful thinking. Eerily, this very threat now looms in our not too distant future, due in large part to the increasing and unbearable stresses that escalating oil prices will elicit. So about seven years or so from now, we will enter the period of the threat of the Final Armageddon — unless we do something very, very quickly now to totally and permanently solve the present "electrical energy from oil" crisis. This is really why we must have a National Emergency proclamation and a Manhattan Project. Mass manufacturing, deployment, and employment of replacement electrical power systems must begin in earnest in early 2004. In my estimate, the point of no return for developing the self-powering replacement systems is about the end of 2003. If by early 2004 we do not have multiple types of vacuum-energy-powered systems rolling off the assembly lines en masse, then we shall overshoot the point of no return. In that case, it matters not whether the systems then become available or not. They will then be too late to prevent the great Armageddon and the destruction of civilization. We can still meet this early 2004 production deadline. It is difficult, but it is definitely doable at this time. We must do it, and we must do it now. Otherwise the technology for electrical energy from the vacuum will also be "too little, too late." In that case, not only the world economy but civilization itself will likely be destroyed — not 100 years from now, not 50 years from now, but in less than one decade from now.
References and Notes

[1] Tesla, Nikola, "The Problem of Increasing Human Energy," Century, June 1900.
[2] "The World Bank and the G-7: Changing the Earth's Climate for Business," Ver. 1.1, Aug. 1997, IPS.
[3] Keeling et al., "Seasonal and interannual variation in atmospheric oxygen and implication for the global carbon cycle," Nature, Vol. 358, Aug. 27, 1992, p. 354.
[4] Vinnikov, Science, Dec. 3, 1999, p. 1934.
[5] Linden, Eugene, "The Big Meltdown," TIME, Sept. 4, 2000, p. 53.
[6] Brown, Lester, et al., State of the World, Worldwatch Institute, 1999, p. 25, citing a 1997 U.N. report.
[7] Epstein, Paul, "Is Global Warming Harmful to Health?" Scientific American, August 2000, p. 50.
[8] ibid., p. 57.
[9] Brown, p. 26.
[10] ibid., p. 25.
[11] Annual Energy Outlook, DOE Energy Information Administration, EIA-X035.
[12] Brown, p. 25.
[13] Valone, Thomas, "Future Energy Technologies," Proceedings of the Annual Conference of the World Future Society, 2000.
[14] US DOE Energy Information Administration, Energy INFOcard, 1999.
[15] Future Energy: Proceedings of the First International Conference on Future Energy, Integrity Research Institute, 1999, CD-ROM.

[1]. And of course it is said to be accidental that all the manipulative measures and profit-taking happen to coincide with the large increase in demand in the U.S. during the summer vacation and tourist months.
[2]. E.g., see F. Gregory Gause III, "Saudi Arabia Over a Barrel," Foreign Affairs, 79(3), May/June 2000, p. 80-94. Quoting, p. 82: "Saudi oil policy is now driven primarily by the immediate revenue needs of a government struggling to maintain a welfare state designed in the 1970s — when money seemed limitless and the population was small — for a society with one of the world's fastest-growing populations." Our comment is that the financial disarray of the Saudis is seen by Gause as a need to get Saudi Arabia into the World Trade Organization — in other words, into the clutches of globalization. For a resounding exposé of the WTO, see Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by Public Citizen Foundation and available by order from the web. Wallach and Sforza reveal and document the machinations of the World Trade Organization as an instrument of globalization and usurpation of national rights. The WTO is only one of many organizations prepared by the High Cabal (Winston Churchill's term) to establish the return of much of the world to a version of the old feudal capitalism, where national governments posed no checks and balances and workers had no rights or benefits.
[3]. NAFTA stands for North American Free Trade Agreement, passed by Congress in 1993, creating a trade and investment region consisting of Canada, the United States, and Mexico. GATT stands for General Agreement on Tariffs and Trade (Uruguay Round) in 1994, which created the World Trade Organization (WTO). Other such agreements set in place to initiate world globalization financial control over nations include or have included MAI (Multilateral Agreement on Investment) and OECD (Organization for Economic Co-operation and Development), in which many of the "secret" agreements are prepared and then scurried through passage by "fast track" means, whereby the Congress allows the President to negotiate trade agreements that are then voted on by the Congress without amendment. Quoting Moisés Naím, "Lori's War," Foreign Policy, Vol. 118, Spring 2000, p.
35, "…'fast track' is the legislative legerdemain under which Congress allows the president to negotiate trade agreements that are then voted on without amendments. Without it, the White House has no guarantee that lawmakers will not seek to change the terms of trade agreements reached after lengthy trade talks." Our comment is that there should be no such guarantee to the White House, since the Congress consists of our duly elected representatives — elected precisely for the purpose of representing the U.S. public rather than the administration. The "fast track" ploy is one way of bypassing full Congressional discussion, examination, etc. so that the desired globalization control measures can be "sneaked through" without a rigorous examination of their provisions. In this way, national authority and constitutional provisions can gradually be undermined by a continuing series of such sneak actions.
[4]. According to the International Labour Organization, some 250 million boys and girls between the ages of five and 14 are exploited in hazardous work conditions. Most of these children live in the developing world — although in industrialized countries such as the United States, hundreds of thousands of underage boys and girls are at work in sweatshops, farm fields, brothels, and on the street. E.g., see Sandy Hobbs, Michael Lavalette, and Jim McKechnie, Child Labor, ABC-CLIO, Inc., 1999. For a poignant visual and verbal tour through the problem, see Russell Freedman and Lewis Hine, Kids at Work: Lewis Hine and the Crusade Against Child Labor, Houghton Mifflin, Aug. 1994. The United Nations also has several publications on the problem and its extent.
[5]. As one example, the Russian mafia, together with the GRU and the KGB under its new name, are the dominant factors in Russia, Russian business, and the Russian side of relations between the U.S. and Russia. See particularly Stanislav Lunev and Ira Winkler, Through the Eyes of the Enemy: Russia's Highest Ranking Military Defector Reveals Why Russia Is More Dangerous Than Ever, Regnery, Washington, D.C., 1998. Quoting p. 12: "When the Soviet Union collapsed and its industries were privatized, there was only one group within Russia with the money to buy the new industries, and that was the Russian mafia. But the mafia did more than buy the industries — it bought the government." Quoting p. 13: "The Cold War is not over; the new Cold War is between the Russian mafia and the United States." Quoting p. 14: "The Soviet Union did not collapse because of 'reform minded leaders' or because of the Reagan administration's brilliantly aggressive strategy (though that strategy played a part). The truth is that the Russian mafia caused the collapse. Soviet 'reform' was nothing more than a criminal revolution."
[6]. As another example, the Japanese Yakuza has penetrated most large Japanese corporations, including Japanese banking and even the national Japanese bank. E.g., see Michael Hirsh and Hideko Takayama, "Big Bang or Bust?", Newsweek, Sept. 1, 1997, p. 44-45. Some $300 billion or more were extracted by the Yakuza from the Japanese taxpayers in a great land scandal. Japan's banks loaned billions to Yakuza-affiliated real-estate speculators, and the Yakuza would not repay the funds. The banks were literally too terrified to collect on the $300-600 billion in bad debt that ensnared the banking system. E.g., when Sumitomo Bank got a little aggressive in collecting loans in Nagoya, its branch manager was killed.
For a summary of this scandal, see Brian Bremner, "How the Mob Burned the Banks: The Yakuza is at the center of the $350 billion bad-loan scandal," Business Week, Jan. 29, 1996, p. 42-43, 46-47. The Japanese government — i.e., the taxpayers — had to absorb this enormous loss. The Yakuza have achieved the power and status of a hostile nation, operating within U.S.-Japanese corporate relations, within other nations' relations with Japan, and within the oriental communities of foreign states. Great influence upon the ability or inability of the U.S. government to continue its deficit financing now rests in the hands of the Yakuza. Effectively, the Yakuza can trigger a U.S. stock market crash at will, by simply shutting off all further Japanese purchase of U.S. government deficit-financing bonds. The Yakuza regard themselves as the last Samurai, still follow the old Bushido concept, and are intensely hostile to the United States for the humiliating defeat of Japan in WW II and for the dropping of the atomic bomb on Japan. At the critical time in the coming economic crisis, cessation of Japanese purchases of U.S. Government bonds can and will initiate the financial coup de grace which generates the final and sudden collapse of the U.S. economy, dragging down other economies with it. It appears that the Yakuza tested the response of the U.S. stock market to this tactic on two occasions, by simply slowing the rate of Japanese purchases of U.S. government bonds. The immediate drops in the stock market on both occasions showed the efficacy of this financial weapon, whenever the Yakuza wish to employ it. In the U.S., the Yakuza constitute an important and growing hostile terrorist group, an intense subculture increasing in numbers, and a group biding its time prior to engaging in mass terrorism strikes. Together with the Aum Shinrikyo, in 1990 the Yakuza leased the operational use of clandestine strategic longitudinal EM wave interferometer weapons in Russia. They now possess some of the most powerful strategic weapons on earth (see notes 9 and 10, below).
[7]. The recent historic meetings of North and South Korean leaders, with proclamations of cooperation, etc., are a healthy sign for the better. With the formerly implacable North Korean dictator now dead, the new and younger leader may have a less hostile outlook. However, progress can be made only very slowly, since the Communist apparatus is still in power in the armed forces and the nation. Only as more of the old die-hard Communist leaders die off will real progress start to be made in materially lessening the threat posed by North Korea. That is a process requiring a generation, but at least a start has been made. For our thesis, that progress is likely to be sufficiently slow that, while it damps the stress curves a little, it has no appreciable effect on the overall thesis of the eruption within the decade of a great conflagration involving weapons of mass destruction.
[8]. Particularly see Lunev and Winkler, ibid., 1998 for the fact that Spetznatz assassination and terror teams are already deployed on site in the United States, as are their WMD weapon caches, including nuclear weapons. A number of nations of the world have secretly deployed nuclear and biological weapons throughout the interior of their perceived enemy nations, often using diplomatic pouch privilege to bring them directly into the targeted nation. It is called "dead man fuzing".
The notion was an extension of the MAD concept: with weapons and teams secreted throughout a targeted nation, then the potent threat that, even if one’s own nation is destroyed, one can still destroy the foe who did it, supposedly acts as a deterrent. [9]. Also involved, there are clandestine weapons of far greater power than nuclear weapons, but most of that subject is beyond the scope of this presentation.  For some time we have informed the U.S. government of these developments, the evidence, the events, etc.  An example — current at its time of preparation — is T. E. Bearden, Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page inclosure to CTEC Letter, “Saving the Lives of mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers,” to Major General Thomas H. Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May 4, 1998.  Copies of a similar presentation were furnished the DoD, Senator Shelby as head of the Senate’s Intelligence subcommittee, and Congressman Weldon as head of the House’s Intelligence subcommittee efforts, as well as other U.S. government agencies and high ranking officials. [10]. The earlier clandestine asymmetrical strategic weapons were developed by the former USSR under rigid KGB and GRU control.  The first of these weapons were longitudinal EM wave interferometers; see Lunev and Winkler, ibid. 1998, p. 30: “Other instruments of destruction the Russians have had success with are seismic weapons.  Spitac and other small towns in the Transcaucasus Mountains were almost destroyed during a seismic weapons test that set off an earthquake.  This would have obvious applications on America’s west coast and other areas of the world prone to earthquakes.” These are also the weapons obliquely referred to by Defense Secretary Cohen in this statement: “Others [terrorists] are engaging even in an eco-type of terrorism whereby they can alter the climate, set off earthquakes, volcanoes remotely through the use of electromagnetic waves… So there are plenty of ingenious minds out there that are at work finding ways in which they can wreak terror upon other nations…It’s real, and that’s the reason why we have to intensify our [counterterrorism] efforts.”  Secretary of Defense William Cohen at an April 1997 counterterrorism conference sponsored by former Senator Sam Nunn.  Quoted from DoD News Briefing, Secretary of Defense William S. Cohen, Q&A at the Conference on Terrorism, Weapons of Mass Destruction, and U.S. Strategy, University of Georgia, Athens, Apr. 28, 1997.  The present author has been briefing these weapons to DoD and other government agencies for many years.  Most major weapons laboratories in various nations—including China—have now discovered longitudinal EM waves and either have such weapons or are furiously developing them.  As an example of a test by a giant strategic longitudinal EM wave interferometer, see Daniel A. Walker, Charles S. McCreery, and Fermin J. Oliveira, “Kaitoku Seamount and the Mystery Cloud of 9 April 1984,” Science, Vol. 227, Feb. 8, 1985, p. 607-611;  Daniel L. McKenna  and Daniel Walker, “Mystery Cloud: Additional Observations,” Science, Vol. 234, Oct. 24, 1986, p. 412-413.  
This was a test in two modes: (a) a cold explosion mode above the surface of the sea, creating a sudden low pressure zone above the water and accounting for the suction of water from the ocean to form the cloud, and (b) formation of a glowing spherical shell of light in the top of the cloud, and expansion of that shell to some 400 miles in diameter. The cold explosion can destroy a naval task force at sea or an armored element on the ground, as an example, or take out the personnel in fixed installations and fortified positions.  The intense shell of EM energy duds the electronics of any vehicle (aircraft, missile, satellite) passing through it, by inducing an extremely sharp pulse of electromagnetic energy arising inside the electronics, from local spacetime itself.  Hundreds of tests of these weapons have been observed. The great advantage of using longitudinal EM waves is that they readily pass right through intervening mass such as the ocean or the earth, with little attenuation. Hence an underwater nuclear submarine can be destroyed deep beneath the ocean—as witnessed by precisely that test of the first deployed Russian LW weapon to kill the U.S.S. Thresher in April 1963 off the East Coast of the United States.  The totally anomalous jamming signatures on the Thresher’s surface companion, the U.S.S. Skylark, positively reveal the nature of the weapon employed.  The kill of the Arrow DC-8 in Gander, Newfoundland was by one of these weapons, with abundant decisive signatures.  The present author published a photograph of the strike of the weapon two weeks earlier, offset from a night shuttle launch at Cape Canaveral, Florida.  This was the same weapon, being used for crew training, which destroyed the Arrow some two weeks later.  The TWA-800 crash off the East Coast of the U.S. was also such a shoot-down, as have been numerous others over the years, documented by the present author.  At least seven nations now possess such longitudinal EM wave interferometer weapons.  Others are working furiously to develop them.  Also, even more powerful weapons of a novel kind have been developed and deployed by three nations—none of which is the United States. [11]. Proceeding conventionally, it will be 50 years before the organized scientific community will permit these emerging solutions to actually be developed and produced.  This is senseless; as the Manhattan Project in WW II showed, a newly emerging technology can go to production in four years.  Given only that neutron fission of the proper uranium isotope produced more neutrons than were input, the Manhattan Project developed operational atomic bombs of two major types in four years. An appreciable number of other “waiting areas for such development” exist in the scientific literature.  However, they are not usually pushed forward into development for decades, due to the continuing resistance of the scientific community to all innovations which threaten the favored projects (such as hot fusion) and favored theories.  Any “scientist in the trenches” is well aware that the progress of science is by means of a continuing massive cat and dog fight, not at all by sweet scientific reason and logic. [12]. A perhaps excessively harsh characterization of these “in the box” efforts is that they represent “psychological displacement activities” for the scientific community, the government decision makers, and perhaps even a part of the environmental community.  At best these programs represent “Look at all the good things we are doing!”.
They must further be assessed with the view that “Look at what they will not do, and what the results of expending all our efforts on them will be: catastrophic economic collapse in a decade or less.” [13]. We strongly point out that Maxwell’s equations are purely hydrodynamic equations.  There is thus a 100% correspondence between hydrodynamics and electromagnetic power systems.  Anything that can be done mechanically, or hydrodynamically with fluid flow, can be done with electromagnetic field energy flow, a priori.  It is thus a serious fault of the scientific community to proclaim that electrical power systems with COP>1.0 are prohibited because closed systems cannot exhibit COP>1.0.  All such arguments are evanescent, since all they state is that an open EM system far from thermodynamic equilibrium with the active vacuum is what is required.  But the classical electrodynamics (136 years old) used to design and build electrical power systems does not even model the energy exchange between the active vacuum and the system.  To put it mildly, this is a completely inexplicable aberration of the scientific mindset, and it has been such for over a century. [14]. Open EM systems far from thermodynamic equilibrium with their electrically active vacuum environment are indeed permitted by the Maxwell-Heaviside equations, prior to the arbitrary symmetrical regauging of the equations to yield simpler, more mathematically amenable equations (done by Lorenz in 1867 and later by H.A. Lorentz).  The Lorentz condition requires that the system be symmetrical in its discharge of its free excitation energy.  The present closed current loop circuit ubiquitously used in power systems is designed specifically such that the system itself enforces the Lorentz symmetrical discharge of its excitation energy.  Thus one-half of the energy is discharged in the external losses and load, while one-half is discharged to destroy the source dipole actually extracting the EM energy from the active vacuum. Such design guarantees a system which destroys its intake of free electrical energy from the vacuum faster than it can use part of that energy to power the load.  I.e., it guarantees suicidal systems which can only exhibit COP<1.0.  Every electrical system ever built has been and is powered by electrical energy extracted directly from the seething vacuum, as we explain in the present paper. [15]. Such open systems far from thermodynamic equilibrium in the active vacuum exchange are rigorously permitted to exhibit COP>1.0 and power themselves and their loads simultaneously.  By building only that subset of Maxwellian systems that forces Lorentz symmetrical regauging during discharge of the system’s excitation energy, our scientists and engineers have in fact simply discarded all those Maxwellian systems not in equilibrium with the vacuum during their excitation discharge.  In short, they simply do not build any such systems, or even design them.  The scientific and engineering communities themselves have directly produced and maintained the present horrible energy crisis and pollution of the biosphere. [16]. Ludvig Valentin Lorenz, “On the identity of the vibrations of light with electrical currents,” Philosophical Magazine, Vol. 34, 1867, p. 287-301. In this paper Lorenz gave essentially what today is called the “Lorentz symmetrical regauging”. Not much attention was paid to the earlier Lorenz work.  Later, H.A. Lorentz introduced the symmetrical regauging of the Maxwell-Heaviside equations in its present modern form.
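For the reader’s reference, the “Lorentz condition” and the “symmetrical regauging” discussed in notes [13]-[17] have the following standard textbook form (ordinary classical electrodynamics, e.g. Jackson; this is offered only as a minimal notational reminder, with the gauge function written here as Lambda, not as the notes’ own derivation):

    \mathbf{A} \;\rightarrow\; \mathbf{A} + \nabla\Lambda, \qquad
    \Phi \;\rightarrow\; \Phi - \frac{\partial \Lambda}{\partial t}
    \quad \text{(gauge transformation; leaves } \mathbf{E} \text{ and } \mathbf{B} \text{ unchanged)}

    \nabla\cdot\mathbf{A} + \frac{1}{c^{2}}\,\frac{\partial \Phi}{\partial t} = 0
    \quad \text{(the Lorenz condition)}

In the textbook view, imposing the Lorenz condition is a free choice of gauge that decouples the potential equations into simple wave equations; the argument of these notes is that the same choice also discards the asymmetrically regauged systems they are interested in.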
Lorentz’s influence was so great that symmetrical regauging — which reduced the theory to a subset and discarded all Maxwell-Heaviside systems of COP>1.0 and capable of powering themselves and a load simultaneously — was adopted and utilized.  It is still utilized ubiquitously; e.g., see [17]. Lorentz symmetrical regauging is still utilized ubiquitously, so that no self-powering systems are designed and developed by our energy scientists and engineers.  E.g., see J. D. Jackson, Classical Electrodynamics, Second Edition, Wiley, New York, 1975, p. 219-221; 811-812.  In symmetrically regauging the Heaviside-Maxwell equations, electrodynamicists assume that the potential energy of a system can be freely changed at will (i.e., that the system can be asymmetrically regauged at will).  They do it twice in succession, but carefully select two such “paired simultaneous asymmetrical regaugings” such  that the two new free force fields that emerge are equal and opposite and there is thus no net force which can be used to dissipate the free excess system energy from regauging and perform work in a load.  In short, they retain only those Maxwellian systems that foolishly oppose and strangle their own ability to freely discharge and use the free energy they first acquire (from the vacuum, by the first asymmetrical regauging).  Thereby the energy scientists arbitrarily discard all those Maxwellian systems which net asymmetrically regauge by changing their own potential energy and also producing a net nonzero force that can be used to discharge the excess free energy in a load without reservation.  Net asymmetrically regauged systems are open dissipative EM systems, freely receiving energy from their active external environment and thus permitted to dissipate the excess regauging energy in loads because they do not strangle that latter ability.  Hence the performance of the arbitrarily-excluded Maxwellian systems is not confined to classical thermodynamics, but is described by the thermodynamics of an open dissipative system.  Such systems can (i) self-organize, (ii) self-oscillate, (iii) output more energy than the operator himself inputs (the excess is freely received from the external active environment) (iv) “power” its own losses and an external load simultaneously (all the energy to operate the system and the load is received freely from the external active environment), and (v) exhibit negentropy. [18]. We can now show that enormous EM energy flow can be easily and cheaply initiated from the active vacuum, anywhere, at any time.  The basis for this was in fact discovered by Heaviside in the 1880s.  Lorentz knew of this huge energy flow component but discarded it arbitrarily, apparently to avoid being attacked and accused of being a perpetual motion advocate. See H.A. Lorentz, Vorlesungen über Theoretische Physik an der Universität Leiden, Vol. V, Die Maxwellsche Theorie (1900-1902), Akademische Verlagsgesellschaft M.B.H., Leipzig, 1931, “Die Energie im elektromagnetischen Feld,” p. 179-186. Figure 25 on p. 185 shows the Lorentz concept of integrating the Poynting vector around a closed cylindrical surface surrounding a volumetric element.  This is the procedure which arbitrarily selects only a small component of the energy flow associated with a circuit — specifically, the small Poynting component striking the surface charges and being diverged into the circuit to power it — and then treats that tiny component as the “entire” Poynting energy flow. [19]. 
The mathematical “trick” used by Lorentz to get rid of this easily and universally evoked giant negentropy, is still employed by electrical scientists and engineers without realizing what is actually being discarded.  For a full explanation, see T.E. Bearden, “Giant Negentropy from the Common Dipole,” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).  A series of excellent papers by the Alpha Foundation’s Institute for Advanced Study (AIAS) have also been published, approved for publication, or submitted for consideration, in leading journals.  An example is M.W. Evans, T.E. Bearden et al., “Classical Electrodynamics without the Lorentz Condition: Extracting Energy from the Vacuum,” Physica Scripta, Vol. 61, 2000, p. 513-517.  A most formidable new AIAS paper, “Electromagnetic Energy from Curved Spacetime,” has been submitted to Optik and is in the referee process.  Two related paper giving a very solid basis for vacuum energy are M.W. Evans et al., “The Most General Form of Electrodynamics,” and “Energy Inherent in the Pure Gauge Vacuum,” both submitted to Physica Scripta and in the referee process.  The theoretical basis for extracting copious EM energy from the vacuum is now unequivocal and either has been published or is rapidly being published in leading journals. [20]. For example, see Myron W. Evans et al., AIAS group paper by 15 authors, “Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum,” 2000, ibid.; “Runaway Solutions of the Lehnert Equations: The Possibility of Extracting Energy from the Vacuum,” Optik, 2000 (in press);—”Vacuum Energy Flow and Poynting Theorem from Topology and Gauge Theory,” submitted to Physica Scripta;—”Energy Inherent in the Pure Gauge Vacuum,” submitted to Physica Scripta;—”The Most General Form of Electrodynamics,” submitted to Physica Scripta; “The Aharonov-Bohm Effect as the Basis of Electromagnetic Energy Inherent in the Vacuum,” submitted to Optik;—”Electromagnetic Energy from Curved Spacetime,” submitted to Optik. [21]. As an example: The most critical scientist in the Western world, working on the “energy from the vacuum” approach, is Dr. Myron Evans, Founder and Director of the Alpha Foundation’s Institute for Advanced Study (AIAS).  Dr. Evans was hounded from his professorial position, has had his life threatened, has been without salary for several years, and fled to the United States for his very life.  He has some 600 papers in the hard literature, and is presently producing—in accord with Dr. Mendel Sachs’ epochal union of general relativity and electrodynamics — the world’s first engineerable unified field theory, and an advanced electrodynamics fully capable of dealing with and modeling EM energy from the vacuum. Yet, Dr. Evans lives in the United States (where he recently became a naturalized citizen) at the poverty level.  He can afford only one meal a day, has no automobile, no air conditioning, and continues epochal work under a medical condition that would stop any ordinary person less scientifically dedicated.  He continues to be vilified and viciously attacked by elements of the scientific community, even though other elements are of much assistance in publishing and reviewing his papers, etc.  It is a remarkable commentary upon the sad state of our scientific community that such a scientist and such epochal work, of tremendous importance to both the United States and all humanity, must continue in such circumstances. 
Meanwhile, the scientific community spends billions on vast projects of little significance in general, and of no significance at all in avoiding the coming world economic collapse and the destruction of civilization.  If this paper should fall into sympathetic hands which can obtain funding for Dr. Evans, then this author most fervently urges that such be accomplished at all speed.  The fate of most of the civilized world may well hinge upon such a simple thing, and upon such an insignificant expenditure. [22]. These are listed in M.W. Evans et al., “Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum,” 2000, ibid. [23]. This system exists in small working prototype already, but I am under a nondisclosure agreement and cannot reveal the details of the process or the identity and location of the inventor. The system is capable of being rapidly scaled up to meet the 2003 critical milestone of “ready for mass production”.  One can expect up to a COP = 4 from this process. [24]. In an electrical power system, Coefficient of Performance (COP) may be taken as the average energy dissipated in the load divided by the average energy furnished to the system by the operator.  Or, it may be taken as the average power dissipated in the load divided by the average power dissipated in the input process.  COP can be taken across any component, several components, or the entire system.  The COP of a normal generator itself may be 0.9, for example, while when the entire system including the heater, etc. is taken into account, the system COP may be only 0.3.  For COP>1.0, excess energy must be furnished to the system by the external environment, while only part of the energy (or none of it) is input by the operator. [25]. The Kawai process, Johnson process, and the magnetic Wankel engine are ideal for this purpose. [26]. T.E. Bearden, “Bedini’s Method For Forming Negative Resistors In Batteries,” Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press). [27]. Teruo Kawai, “Motive Power Generating Device,” U.S. Patent No. 5,436,518.  Jul. 25, 1995.  Applying the Kawai process to a magnetic motor essentially doubles the motor’s efficiency.  If one starts with high efficiency magnetic motors of, say, COP = 0.7 or 0.8, then the new COPs will be 1.4 and 1.6.  Two Kawai-modified high efficiency Hitachi motors were in fact independently tested by Hitachi and yielded COP 1.4 and 1.6 respectively. [28]. See T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,” Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node, Jan. 1996.  The principle of the magnetic Wankel engine is self-evident from the drawings alone. [29].Johnson, Howard R., “Permanent Magnet Motor.”  U.S. Patent No. 4,151,431,  Apr. 24, 1979; “Magnetic Force Generating Method and Apparatus,” U.S. Patent No. 4,877,983, Oct. 31, 1989; “Magnetic Propulsion System,” U.S. Patent No. 5,402,021, Mar. 28, 1995. [30]. In magnetic materials, the presence of two electrons near each other and having parallel spins results in the presence of a very strong force tending to flip the spin so that they are antiparallel. The forces between the electrons due to spin geometry are exchange forces of quantum mechanical nature.  
In complex assemblies of different magnetic materials comprising a single stator or rotor magnet, the shapes and structures can be produced so that, as the rotor moves by the attracting stator and enters the usual back mmf zone, the powerful spin force is suddenly unleashed by the geometry, relative field strengths, and movement. This triggers the release of a violent pulse of magnetic field that greatly overrides the back mmf and strongly repels the rotor on out of this “gate” region where the exchange force is triggered.  Exchange force pulses may momentarily be 1,000 times as strong as the magnetic field H, or in some cases even stronger.  Evoking these responses automatically by the materials themselves, at controlled times and directions, produces the open system freely adding rotary energy from its vacuum exchanges inside the nonlinear materials.  Johnson has been able to achieve this effect consistently, opening the way for a legitimate self-powering permanent magnet motor.  We accent that the electrons involved are in direct energy exchange with the vacuum, and the exchange force energy comes from the violently broken symmetry in that vacuum exchange.  Multivalued magnetic potentials and hence nonconservative magnetic fields arise naturally in magnetic theory anyway.  However, conventional scientists exert enormous effort to eliminate such effects or minimize them — when in fact what is needed is to deliberately evoke and use them to produce systems with COP>1.0. [31]. Surrounding every dipolar EM circuit there exists a vast flow of nondiverged EM energy which misses the circuit entirely and is not presently accounted (thus “dark”) in electrical power systems and circuit theory.  Heaviside discovered it, Poynting never realized it, and Lorentz discarded it.  He discarded it because (a) he reasoned it was physically insignificant since it did nothing in the circuit, and (b) no one had the foggiest notion where such an enormous flow of EM energy—pouring from the terminals of every battery and generator—could possibly be coming from.  The trick Lorentz used to arbitrarily discard it is still used by electrodynamicists ubiquitously.  For a full background, see T.E. Bearden, “Giant Negentropy from the Common Dipole,” Proc. IC-2000 (ibid.); “On Extracting Electromagnetic Energy from the Vacuum, ” Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press); “Dark Matter or ?”, Journal of New Energy, 2000 (in press). [32]. Energy cannot be created or destroyed, but only changed in form.  Changing the form of energy is called “work”. When one joule of collected energy is “dissipated” to perform one joule of work, one still has one joule of energy remaining after that joule of work has been done. The energy is now just in a different form.  Scattering of energy in a resistor, e.g., is perhaps the simplest way of performing work, and known as “joule heating”. However, for a thought experiment: If the resistor is surrounded by a phase conjugate reflective mirror surface, much of the scattered energy will be precisely returned back to the resistor as re-ordered energy.  It can indeed be “reused” by again being scattered in the resistor to do work.  There is no conservation of work law in physics or thermodynamics!  If there is no re-ordering at all, then one can get only one joule of work from one joule of energy changed in form.  The remaining joule of energy in different form (as in heat) is just “wasted” from the system.  
But if we deliberately use re-ordering (such as simple passive retroreflection), we can reuse the same joule of energy to do joule after joule of work, changing the form of the energy in each interaction.  Eerily, most of our scientists and engineers are aware that energy can be changed in form indefinitely without loss, but will then argue that energy cannot be recycled and reused.  The scientific prejudice against “COP>1.0″ processes and systems is so deep that many scientists are incapable of dealing with the real law of conservation of energy—which is simply that you can never get rid of any energy at all, but can only change its form.  Every joule of energy in the universe, e.g., was present not long after the Big Bang.  Since then, most of those joules of energy have each been doing joule after joule of work, for some 15 billion years. [33]. Kenneth R. Shoulders, “Energy Conversion Using High Charge Density,” U.S. Patent # 5,018,180, May 21, 1991.  See also Shoulders’ patents 5,054,046 (1991); 5,054,047 (1991); 5,123,039 (1992), and 5,148,461 (1992). See also Ken Shoulders and Steve Shoulders, “Observations on the Role of Charge Clusters in Nuclear Cluster Reactions,” Journal of New Energy, 1(3), Fall 1996, p. 111-121. [34]. For a summary of this rapidly developing field, see Diederik Wiersma and Ad Lagendijk, “Laser Action in Very White Paint,” Physics World, Jan. 1997, p. 33-37. [35]. For the early discovery, see V.S. Letokhov, “Generation of light by a scattering medium with negative resonance absorption,” Zh. Eksp. Teor. Fiz., Vol. 53, 1967, p. 1442; Soviet Physics JETP, Vol. 26, 1968, p. 835-839; “Laser Maxwell’s Demon,” Contemp. Phys., 36(4), 1995, p. 235-243.  For initiating experiments although with external excitation of the medium, see N.M. Lawandy et al., “Laser action in strongly scattering media,” Nature, 368(6470), Mar. 31, 1994, p. 436-438.  See also D.S. Wiersma, M.P. van Albada, and A. Lagendijk, Nature, Vol. 373, 1995, p. 103. [36]. For new effects, see D.S. Wiersma and Ad. Lagendijk, “Light diffusion with gain and random lasers,” Phys. Rev. E, 54(4), 1996, p. 4256-4265; D.S. Wiersma, Meint. P. van Albada, Bart A. van Tiggelen, and Ad Lagendijk, “Experimental Evidence for Recurring Multiple Scattering Events of Light in Disordered Media,” Phys. Rev. Lett., 74(21), 1995, p. 4193-4196; D.S. Wiersma, M.P. Van Albada, and A. Lagendijk, Phys. Rev. Lett., Vol. 75, 1995, p. 1739; D.S. Wiersma et al., Nature, Vol. 390, 1997, p. 671-673; F. Sheffold et al., Nature, Vol. 398, 1999, p. 206; J. Gomez Rivas et al., Europhys. Lett., 48(1), 1999, p. 22-28; Gijs van Soest, Makoto Tomita, and Ad Lagendijk, “Amplifying volume in scattering media, ” Opt. Lett., 24(5), 1999, p. 306-308; A. Kirchner, K. Busch and C. M. Soukoulis, Phys. Rev. B, Vol. 57, 1998, p. 277. [37]. A true negative resistor appears to have been developed by the renowned Gabriel Kron, who was never permitted to reveal its construction or specifically reveal its development.  For an oblique statement of his negative resistor success, see Gabriel Kron, “Numerical solution of ordinary and partial differential equations by means of equivalent circuits,” J. Appl. Phys., Vol. 16, Mar. 1945a, p. 173.  
Quoting: “When only positive and negative real numbers exist, it is customary to replace a positive resistance by an inductance and a negative resistance by a capacitor (since none or only a few negative resistances exist on practical network analyzers).”  Apparently Kron was required to insert the words “none or” in that statement.  See also Gabriel Kron, “Electric circuit models of the Schrödinger equation,” Phys. Rev. 67(1-2), Jan. 1 and 15, 1945, p. 39.  We quote: “Although negative resistances are available for use with a network analyzer,…”.  Here the introductory clause states in rather certain terms that negative resistors were available for use on the network analyzer, and Kron slipped this one through the censors.  It may be of interest that Kron was a mentor of Floyd Sweet, who was his protégé.  Sweet worked for the same company, but not on the Network Analyzer project.  However, he almost certainly knew the secret of Kron’s “open path” discovery and his negative resistor.  The present author worked for several years with Sweet, who produced a solid state device (the magnetic Vacuum Triode Amplifier) with no moving parts which produced 500 watts of output power for some 33 microwatts of input power.  See Floyd Sweet and T.E. Bearden, “Utilizing Scalar Electromagnetics to Tap Vacuum Energy,” Proc. 26th Intersoc. Energy Conversion Engineering Conf. (IECEC ’91), Boston, Massachusetts, p. 370-375. [38]. Shoukai Wang and D.D.L. Chung, “Apparent negative electrical resistance in carbon fiber composites,” Composites, Part B, Vol. 30, 1999, p. 579-590.  Negative electrical resistance was observed, quantified, and controlled through composite engineering by Chung and her team.  Electrons were caused to flow backwards against the voltage, with backflow across a composite interface.  The team was able to control the manufacturing process to produce either positive or negative resistance as desired.  The University at Buffalo filed a patent application.  It first placed a solicitation to industry for developments, and offered a technical package to interested companies signing nondisclosure, then suddenly withdrew the offer.  It appears to this author that a “fix” may be in place on the development. [39]. It is common knowledge that the point-contact transistor could be manufactured to produce a true negative resistor where the output current moved against the voltage. E.g., see William B. Burford III and H. Grey Verner. Semiconductor Junctions and Devices: Theory to Practice, McGraw-Hill, New York, 1965.  Chapter 18: Point-Contact Devices.  Quoting from p. 281: “First, the theory underlying their function is imperfectly understood even after almost a century…, and second, they involve active metal-semiconductor contacts of a highly specialized nature.  …The manufacturing process is deceptively simple, but since much of it involves the empirical know-how of the fabricator, the true variables are almost impossible to isolate or study.   … although the very nature of these units limits them to small power capabilities, the concept of small-signal behavior, in the sense of the term when applied to junction devices, is meaningless, since there is no region of operation wherein equilibrium or theoretical performance is observed. Point-contact devices may therefore be described as sharply nonlinear under all operating conditions.”  We point out that the power limitation can be overcome by arrays of multiple point contacts placed closely together. [40]. 
It is the back coupling of the magnetic field from the secondary to the primary windings that forces the dissipation of equal energy in the primary of the transformer as is dissipated in the secondary.  If part of the return current in the secondary circuit bypasses the secondary of the transformer, the back field coupling to the primary is reduced accordingly.  Using a negative resistor as the bypass, the bypass of the current is “for free” (powered by the vacuum and a negentropic process).  Hence the result is a transformer/bypass system with COP>1.0. In that case, such a system can have a positive clamped feedback from the output of the secondary circuit, into the primary to power it, while still having energy remaining to power a load.  No laws of physics or thermodynamics are violated, once one understands how an EM circuit is actually powered.  E.g., see Bearden, “On Extracting EM Energy from the Vacuum, 2000 (ibid.). [41]. The Kawai process was seized in the personal presence of the present author and his CTEC, Inc. Board of Directors.  We had reached a full agreement with Kawai to manufacture and sell his units worldwide, at great speed.  Control of his company, his invention, and Kawai himself was taken over in our presence the next morning, and the Japanese contingent was in fear and trembling. [42]. The magnetic Wankel engine was developed and actually placed in a Mazda automobile.  The back mmf of the rotary permanent magnet motor is confined to a very small angle of the rotation.  As the rotor enters that region, a sudden cutoff of a small trickle current in a coil generates a momentary large Lenz law effect which overrides the back mmf and produces a forward mmf in that region.  The result is that one furnishes a small bit of energy to convert the engine to a rotary permanent magnet motor with no back mmf, but with a nonconservative net magnetic field.  For details, see T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,” Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node, Jan. 1996. [43]. For a history and present status of Japanese organized crime, see Adam Johnston, “Yakuza: Past and Present,” Committee for a Safe Society, Organized Crime Page: Japan (available on the Internet). Michael Hirsh and Hideko Takayama, “Big Bang or Bust?”  Newsweek, Sept. 1, 1997, p. 44-45. [44]. As a ball-park figure for illustration, a nominal electrical circuit or power system actually extracts from the vacuum and pours out into space some 10 trillion times as much energy flow as the poorly designed “single pass” circuits intercept and utilize. [45]. However, the orthodox scientists do not know it, because they follow blindly the method introduced by Lorentz a century ago.  Lorentz arbitrarily discarded all that astounding energy flow that pours from the source dipole and misses the circuit, and retained only the tiny, tiny bit of it that strikes the circuit and enters it to power it.  Nothing at all has been done since then to capture more of that huge available energy and use it.  As a result of the ubiquitous Lorentz procedure, most electrical power system scientists and engineers are no longer aware that the huge unaccounted energy flow not striking the circuit even exists. [46]. 
The active vacuum interacts profusely with every electrodynamic system, but this is not modeled at all by the scientists and engineers designing and building electrical power systems.  They unwittingly design every system to enforce Lorentz symmetrical regauging during excitation energy discharge, which in effect forces equilibrium in the vacuum-system energy exchange during that dissipation.  Hence, classical equilibrium thermodynamics rigorously applies during use of the collected energy.  Such systems are limited to COP<1.0 a priori. [47]. In Nobelist Feynman’s words: “We…wish to emphasize … the following points: (1) the electromagnetic theory predicts the existence of an electromagnetic mass, but it also falls on its face in doing so, because it does not produce a consistent theory – and the same is true with the quantum modifications; (2) there is experimental evidence for the existence of electromagnetic mass, and (3) all these masses are roughly the same as the mass of an electron.  So we come back again to the original idea of Lorentz – maybe all the mass of an electron is purely electromagnetic, maybe the whole 0.511 MeV is due to electrodynamics.  Is it or isn’t it? We haven’t got a theory, so we cannot say.” Richard P. Feynman, Robert B. Leighton, and Matthew Sands, Lectures on Physics, Vol. 2, 1964, p. 28-12.  Also: “We do not know how to make a consistent theory – including the quantum mechanics – which does not produce an infinity for the self-energy of an electron, or any point charge.  And at the same time, there is no satisfactory theory that describes a non-point charge.  It’s an unsolved problem.” Ibid., Vol. 2, 1964, p. 28-10.  In fact, “energy” itself is actually a very nebulous and inexact concept.  Again quoting: “It is important to realize that in physics today, we have no knowledge of what energy is.”  Ibid., Vol. 1, 1964, p. 4-2. [48]. E.g., a very recent AIAS paper, M.W. Evans et al., “The Most General Form of Electrodynamics,” submitted to Physica Scripta, rigorously shows just how wrong the present limited EM theory is. Quoting: “…there can be no electro-magnetic field [as such] in the vacuum.  In other words there can be no electromagnetic field propagating in a source-free region as in the Maxwell-Heaviside theory, which is written in flat space-time using ordinary derivatives instead of covariant derivatives.”  The reason is quite simple: spacetime is active and curved.  The great John Wheeler and Nobelist Feynman, e.g., realized that EM force fields cannot exist in space.  They pointed out that only the potential for such fields existed in space, should some charges be made available so that the fields could be developed on them.  See Richard P. Feynman, Robert B. Leighton and Matthew Sands, The Feynman Lectures on Physics, Addison-Wesley, New York, Vol. I, 1963, p. 2-4. [49]. Max Planck, as quoted in G. Holton, Thematic Origins of Scientific Thought, Harvard University Press, Cambridge, MA, 1973. [50]. Arthur C. Clarke, in “Space Drive: A Fantasy That Could Become Reality,” NSS Ad Astra, Nov/Dec 1994, p. 38. [51]. E.g., quoting Nobelist Lee: “…the discoveries made in 1957 established not only right-left asymmetry, but also the asymmetry between the positive and negative signs of electric charge. … “Since non-observables imply symmetry, these discoveries of asymmetry must imply observables.” [T. D. Lee, Particle Physics and Introduction to Field Theory, Harwood, New York, 1981, p. 184.] On p.
383, Lee points out that the microstructure of the scalar vacuum field (i.e., of vacuum charge) is not utilized.  Particularly see Lee’s own attempt to indicate the possibility of using vacuum engineering, in his “Chapter 25: Outlook: Possibility of Vacuum Engineering,” p. 824-828.  Unfortunately Lee was unaware of Whittaker’s profound 1903 decomposition of the scalar potential, as between the ends of a dipole, which gives a much more practical and easily evoked method for re-ordering some of the vacuum’s energy, extracting copious EM energy flows from it, and setting the stage for self-powering electrical power systems worldwide. [52]. The present author has taken the necessary first major step, by using Whittaker decomposition of the scalar potential between the poles of a dipole to reveal a simple, direct, cheap method for extracting and sustaining enormous EM energy flows from the dipole’s asymmetry in its energetic exchange with the active vacuum. [53]. The internal energy available to a generator is the shaft energy we input to it.  In large power plants this is usually by a steam turbine, and heat (from a nuclear reactor, burning hydrocarbons, etc.) is used merely to heat the water in the boiler to make steam to run the steam turbine.  Every bit of all that is just so the generator will have some internal energy made available with which it can then forcibly make the dipole.  That is all that generators (and batteries) do: Use their available internal energy to continually make the source dipole — which our engineers design the circuit to keep destroying faster than the load is powered. [54]. By “dipole” we mean the positive charges are forced to one side, and the negative charges forced to the other.  This internal “source dipole” formed by the generator or battery is electrically connected to the terminals. [55]. This has been known in particle physics for nearly 50 years.  It stems from the discovery of broken symmetry by C.S. Wu et al.  in 1957.  A dipole is known to be a broken symmetry in its violent energy exchange with the active vacuum.  Rigorously this means that some of the “disordered” EM energy received by the dipole from the vacuum, is re-ordered and re-radiated as usable, observable EM energy.  Conventional electrodynamics and power system engineering do not model the vacuum’s interaction, much less the broken symmetry of the generator or battery dipole in that continuous energy exchange. [56]. A pictorial illustration of the enormity of the energy flow through the surrounding space, and missing the external circuit entirely, is given by John D. Kraus, Electromagnetics, Fourth Edn., McGraw-Hill, New York, 1992—a standard university text.  Figure 12-60, a and b, p. 578 shows a good drawing of the huge energy flow filling all space around the conductors, with almost all of that energy flow not intercepted by the circuit at all, and thus not diverged into the circuit to power it, but just “wasted” by passing it on out into space. [57]. That is, the interception of the little “boundary layer” or “sheath” of the flow, right on the surface of the wires. [58]. Poynting never considered anything but this small little “intercepted” component of the energy flow that actually entered the circuit.  E.g., see  J.H. Poynting, “On the connexion between electric current and the electric and magnetic inductions in the surrounding field,” Proc. Roy. Soc. Lond., Vol. 38, 1985, p. 168. [59]. 
In technical terms, the closed current loop circuit forces the Lorentz symmetrical regauging condition during the discharge of the excitation energy collected by the circuit.  By definition, half the energy is thus used to oppose the system function (i.e., to destroy the source dipole) while the other half of the excitation energy is used to power the external losses and the load.  With half the collected energy used to destroy the free extraction of energy from the vacuum, and less than half used to power the load, these ubiquitous circuits destroy their source of free vacuum energy faster than they power their loads.  Hence, we ourselves have to steadily input shaft energy to the generators so that they can continue to reform the dipole.  In the vernacular, that is not the way to run the railroad! [60]. Maxwell’s seminal paper was published in 1864, as a purely material fluid flow (hydrodynamic) theory.  At the time, the electron and the atom had not been discovered, hence the reaction of two opposite charges (positive nuclei, negative Drude electrons) in the wire was not modeled but only one was modeled, etc. Maxwell omitted half the EM wave in the vacuum and half the energy, resulting in the omission of the EM cause and generatrix of Newton’s third law reaction from electrodynamics.  This omission is present in electrodynamics, where the third law reaction appears as a mystical effect without a known cause.  The cause and mechanism is the omitted reaction of the observed effect back upon the non-observed cause.  General relativity, e.g., does include this reaction mechanism from the effect back upon the cause.  However, electrodynamicists still omit half the electromagnetics, half the wave, and half the energy as is easily shown.  E.g., it is demonstrated in every EM signal reception in a simple wire antenna, when the resulting perturbations of both the positive nuclei and the Drude electrons are correctly attributed to their interactions with the incoming EM fields (waves) from the vacuum. [61]. Mario Bunge, Foundations of Physics, Springer-Verlag, New York, 1967, p. 176. [62]. T.E. Bearden, “On Extracting Electromagnetic Energy from the Vacuum, ” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press). [63]. T.E. Bearden, “Bedini’s Method For Forming Negative Resistors In Batteries,” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press). [65]. E.g., a good short summary is given by Dr. Theodore Loder, Institute for the Study of Earth, Oceans, and Space (EOS), University of New Hampshire, Durham, NH in his short paper, “‘Comparative Risk Issues’ Regarding Present and Future Environmental Trends: Why We Need to be Looking Ahead Now!”, prepared for the Senate Committee on the Environment and Public Works, June 1, 2000.  Certainly Dr. Loder and EOS can fully expound on the details of the biospheric pollution from the various contributing factors and processes. [66]. One need only regard the vehement attacks by the scientific community (and much of the government including national laboratories) upon cold fusion researchers, to understand why many inventors and scientists in the COP>1.0 open dissipative energy field are openly distrustful of the government and government scientists.  Further, the U.S. Patent Office is known to be under rather explicit instructions not to issue patents on COP>1.0 electrical processes and systems. [67]. E.g., the well-known Bohren experiment produces 18 times as much energy output as the operator must input.  
The excess energy is extracted directly from the vacuum.  There has been no program, to my knowledge, seeking to exploit this well-proven COP>1.0 mechanism that has been in the hard science literature for some time.  See Craig F. Bohren, “How can a particle absorb more than the light incident on it?”  Am. J. Phys., 51(4), Apr. 1983, p. 323-327. Under nonlinear conditions, a particle can absorb more energy than is in the light incident on it. Metallic particles at ultraviolet frequencies are one class of such particles and insulating particles at infrared frequencies are another. For independent validation of the Bohren phenomenon, see H. Paul and R. Fischer, “Comment on ‘How can a particle absorb more than the light incident on it?’,” Am. J. Phys., 51(4), Apr. 1983, p. 327. [68]. G. Johnstone Stoney, “Microscopic Vision,” Phil. Mag., Vol. 42, Oct. 1896, p. 332; “On the Generality of a New Theorem,” Phil. Mag., Vol. 43, 1897, p. 139-142; “Discussion of a New Theorem in Wave Propagation,” Phil. Mag., Vol. 43, 1897, p. 273-280; “On a Supposed Proof of a Theorem in Wave-motion,” Phil. Mag., Vol. 43, 1897, p. 368-373. [69]. E. T. Whittaker, “On the Partial Differential Equations of Mathematical Physics,” Math. Ann., Vol. 57, 1903, p. 333-355. [70]. Evans in a private communication has pointed out that Whittaker’s method depends upon the Lorentz gauge being assumed.  If the latter is not used, the Whittaker method is inadequate, because the scalar potential becomes even more richly structured.  My restudy of the problem with this in mind concluded that, for the negentropic vacuum-reordering mechanism involving only the dipole and the charge as a composite dipole, it appears that the Whittaker method can be applied without problem, at least to generate the minimum negentropic process itself.  However, this still leaves open the possibility of additional structuring. The actual negentropic reordering of the vacuum energy (and the structure of the outpouring of the EM energy 3-flow from the charge or dipole) may permissibly be much richer than given by the simple Whittaker structure alone.  In other words, the Whittaker structure used in this paper should be regarded as the simplest structuring of the negentropic process that can be produced, and hence as a lower boundary condition on the process. [71]. Time-like currents and flows do appear in the vacuum energy, if extended electrodynamic theory is utilized.  E.g., in the received view the Gupta-Bleuler method removes time-like photons and longitudinal photons. For disproof of the Gupta-Bleuler method, proof of the independent existence of such photons, and a short description of their characteristics, see Myron W. Evans et al., AIAS group paper, “On Whittaker’s F and G Fluxes, Part III: The Existence of Physical Longitudinal and Time-Like Photons,” J. New Energy, 4(3), Winter 1999, p. 68-71; “On Whittaker’s Analysis of the Electromagnetic Entity, Part IV: Longitudinal Magnetic Flux and Time-Like Potential without Vector Potential and without Electric and Magnetic Fields,” ibid., p. 72-75.  To see how such entities produce ordinary EM fields and energy in vacuo, see Myron W. Evans et al., AIAS group paper, “On Whittaker’s Representation of the Electromagnetic Entity in Vacuo, Part V: The Production of Transverse Fields and Energy by Scalar Interferometry,” ibid., p. 76-78.  See also Myron W.
Evans et al., AIAS group paper, “Representation of the Vacuum Electromagnetic Field in Terms of Longitudinal and Time-like Potentials: Canonical Quantization,” ibid., p. 82-88. [72]. For a short treatise on the complex Poynting vector, see D.S. Jones, The Theory of Electromagnetism, Pergamon Press, Oxford, 1964, p. 57-58.  In a sense our present use is similar to the complex Poynting energy flow vector, but in our usage the absolute value of the imaginary energy flow is equal to the absolute value of the real energy flow, and there is a transformation process in between.  This usage is possible because the imaginary flow is into a transducer, which takes care of transforming the received imaginary EM energy into the output real EM energy. We stress that the word “imaginary” is not at all synonymous with fictitious, but merely refers to what “dimension” or state the EM energy exists in. [73]. Unfortunately, electrical engineers use the term “power” to also mean the rate of energy flow, when rigorously the term “power” means the rate at which work is done.  We accent that we fully understand the difference, but are using the terminology common to the profession. [74]. Nobelist Prigogine experienced something very similar when he proposed his open dissipative systems, where the system operations did not lead to the conventional increasing disorder.  To say that he was subjected to the Inquisition is not an exaggeration.  Other scientists have repeatedly been subjected to intense scientific attack and suppression—including Mayer (conservation of energy), Einstein (relativity), Wegener (drifting continental plates), Ovshinsky (amorphous semiconductors), to name just a few of the hundreds who have been attacked in similar fashion.  Science does not proceed by sweet reason, but by a vicious dogfight with no holds barred.  It delights in “wolf pack” attacks upon the scientist with a new idea or discovery. [75]. And the scientific community is certainly not prepared for the notion of using time as energy, freely and anywhere.  In a sense, one can “burn time as fuel”.  Consider this: In physics, the choice of fundamental units in one’s physics model is completely arbitrary. E.g., one can make a quite legitimate physics model having only a single fundamental unit (such is already done in certain areas of physics).  E.g., suppose we make the “joule” (energy) the only fundamental unit.  It follows then that everything else — including the second and therefore time — is a function of energy.  One can utilize the second as c^2 joules of energy.  Hence, the flow of time would have the same energy density as mass.  After Einstein, the atom bomb, and the nuclear reactor, of course, we are all comfortable with the fact that mass is just spatial energy compressed by the factor c^2.  So we really should not be too uncomfortable at the notion that time itself is energy compressed by the factor c^2.  In this case, if, for every second of the passage of time, we were to convert one microsecond into ordinary EM spatial energy, we would produce some 9×10^10 joules of EM energy. Since that is done each second, this would give us the equivalent of the output of 90 1000-megawatt power plants.  If only 1.11% efficient, the conversion process would yield the equivalent of one 1000-megawatt power plant. In fact, it is in theory possible to do such a conversion, and we have previously indicated the various mechanisms involved.  There are also some rough experimental results that are at least consistent with the thesis.
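As a purely arithmetical check of the figures just quoted (a minimal sketch; it assumes only this note’s own premise that one second of time corresponds to c^2 joules, and it says nothing about whether such a conversion is physically realizable):

    # Arithmetic in note [75], under its own hypothetical premise that
    # one second of the flow of time corresponds to c^2 joules of energy.
    c = 2.998e8                          # speed of light, m/s
    joules_per_second_of_time = c ** 2   # premise: ~9.0e16 J per second of time
    fraction_converted = 1e-6            # one microsecond converted out of each second
    power_watts = joules_per_second_of_time * fraction_converted
    plants_1000_mw = power_watts / 1.0e9
    print(power_watts)                   # ~8.99e10 W, i.e. about 9 x 10^10 J each second
    print(plants_1000_mw)                # ~90 plants of 1000 MW each
    print(0.0111 * plants_1000_mw)       # ~1 plant at 1.11% conversion efficiency

The three quoted figures (9×10^10 joules per second, 90 such plants, and one plant at 1.11% conversion efficiency) are mutually consistent under that premise.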
The interested reader is referred to T.E. Bearden, “EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves,” J. New Energy, 3(2/3), 1998, p. 12-28. See also the author’s similar paper with the same title, in Explore, 8(6), 1998, p. 7-16. We believe that the real energy technology for the second half of this century is based on the use of time as fuel. The fundamental reactions and principles also enable a totally new form of high energy physics reactions, where very low spatial energy photons are the carriers (their time components carry canonical time-energy, so that the highest energy photons of all, given time-energy conversion, are low frequency photons).  These new reactions (given in the references cited) are indeed consistent with the startling nuclear transformation reactions met at low (spatial) photon energies in hundreds of successful cold fusion experiments worldwide. [76]. A classic example is given by Paul Nahin in his Oliver Heaviside: Sage in Solitude, IEEE Press, New York, 1988, p. 225.  Quoting: “J.J. Waterston’s paper on the kinetic theory of gases, in 1845, was rejected by the Royal Society of London.  One of the referees declared it to be ‘nothing but nonsense, unfit even for reading before the Society.’ … “Waterston’s dusty manuscript was finally exhumed from its archival tomb forty years later, because of the efforts of Lord Rayleigh…”  Our comment is that the same scientific attitude and resistance to innovative change prevails today.  As the French say, “Plus ça change, plus c’est la même chose!” [77]. E.g., see G. Nicolis and I. Prigogine, Exploring Complexity, Piper, Munich, 1987 (an English version is Exploring Complexity: An Introduction, Freeman, New York, 1989); Ilya Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, W.H. Freeman and Company, San Francisco, 1980. In 1977, Prigogine received the Nobel Prize in chemistry for his contributions to nonequilibrium thermodynamics, especially the theory of dissipative structures. [78]. E.g., see Moisés Naím, “Lori’s War,” Foreign Policy, Vol. 118, Spring 2000, p. 28-55.  See particularly Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by Public Citizen Foundation and available by order from Perusal of the leading environmental activist web sites now shows a significant and rising awareness that globalization is merely the surface façade of an older, imperial, feudalistic capitalism where checks and balances established by national states are being slowly and methodically bypassed. [79]. The interested reader is referred to Andrew A. Marino, Powerline Electromagnetic Fields and Human Health, at Particularly see “Chapter 5, Blue-Ribbon Committees and Powerline EMF Health Hazards,” and “Chapter 6: Power-Industry Science and Powerline EMF Health Hazards.” Biophysicist Marino is one of the leaders in the field and has been personally involved in many skirmishes with powerline-dominated studies and findings.  As an example, quoting from Chapter 6: “Neither scientists nor the public can rely on power-industry research or analysis to help decide whether powerline electromagnetic fields affect human health because power-industry research and analysis are radically misleading.”  There are many other reports in the literature, which also show effects of EM nonionizing radiation on cells, including detrimental effects. [80].
Becker studied not just the immune system — which “heals” nothing at all, not even its own damaged cells —but also the cellular regenerative system.  He and others found, e.g., that tiny trickle currents and potentials — either steady or pulsed — placed across otherwise intractable bone fractures, would result in a rather astounding set of cellular changes which led to healing of the fracture by deposit of new bone. Eerily, Becker showed that the red blood cells coming into the area and under the EM influence, would shuck their hemoglobin and grow cellular nuclei (i.e., dedifferentiate back to an earlier cellular state).  Then these cells would redifferentiate into the type of cells that made cartilage.  Then those cells would differentiate into the type of cells that make bone, and be deposited in the fracture to “grow bone” and heal the fracture.  Incredibly, this is the only true “healing” modality in all Western medical science — which is otherwise built upon the theory of intervention rather than healing.  After the intervention (which may be quite necessary!), the body’s cellular regenerative system — or what is left of it after damage by such interventions as chemotherapy, etc. — is left entirely upon its own to restore the damage (heal the damaged cells and tissues).  Becker was twice nominated for a Nobel Prize.  However, because he also testified in court against power companies, giving testimony as an expert witness that EM radiation from power lines could indeed induce harmful conditions in some exposed people, he was suppressed and eventually forced to retire. [81]. See Robert O. Becker and Andrew A. Marino, Electromagnetism and Life, State University of New York Press, Albany, 1982.   This reference gives a nice summary of EM bioeffects from the orthodox view, current as of the publication date.  For Becker’s work with the cellular regenerative system, see particularly R.O. Becker, “The neural semiconduction control system andits interaction with applied electrical current and magnetic fields,” Proc. XI Internat. Congr. Radiol., Vol. 105, 1966, p. 1753-1759, Excerpta Medica Foundation, Amsterdam. See Becker, “The direct current field: A primitive control and communication system related to growth processes,” Proc. XVI Internat. Congr. Zool., Washington, D.C., Vol. 3, 1963, p. 179-183. [82]. For an overview of the ansatz of present battery technology, see David Linden, Editor in Chief, Handbook of Batteries, Second Edition, McGraw Hill, New York, 1995; Colin A. Vincent and Bruno Scrosati, Modern Batteries: An Introduction to Electrochemical Power Sources, Second Edition, Wiley, New York, 1997. For a process to make a battery include a negative resistor and exhibit COP>1.0, see Bearden, “Bedini’s Method For Forming Negative Resistors In Batteries,” Proc. IC-2000, St. Petersburg, Russia (in press). [83]. Such laboratories are private and professional testing companies, where the U.S. government has certified their expertise and qualifications, their testing to NIST, IEEE, and U.S. government standards, their use of calibrated instruments, and the experience and ability of their professional test engineers and scientists. Such labs are routinely and widely used by aerospace firms.  A Test Certificate from such a lab is acceptable by the courts, the U.S. Patent and Trademark Office, the U.S. government (which requires it on many contracts), and by the U.S. scientific community.  A goodly number of these laboratories are available throughout the U.S. [84]. 
A few struggling publications in the “new energy” field are crucial to continued progress.  The major ones are Journal of New Energy (Dr. Hal Fox, publisher), Infinite Energy (Dr. Eugene Mallove, publisher), and Explore (Chrystyne Jackson, publisher).  Independent sustaining funding for these publications is urgently needed.  We also highly commend the Department of Energy’s Transportation group for maintaining a DOE website carrying the advanced electrodynamics papers of the Alpha Foundation’s Institute for Advanced Study (AIAS).  Funding for the AIAS is also urgently needed, to continue this absolutely essential theoretical work that is placing a solid physics foundation under the program of extracting and using EM energy from the vacuum. [85]. Some recommended publications of interest are: Joshua Lederberg, Editor, Biological Weapons: Limiting the Threat, MIT Press, Cambridge, MA, 1999, with a foreword by Defense Secretary William S. Cohen; Richard A. Falkenrath, Robert D. Newman, and Bradley A. Thayer, America’s Achilles Heel: Nuclear, Biological, and Chemical Terrorism and Covert Attack, MIT Press, 1998; Wendy Barnaby, The Plague Makers: The Secret World of Biological Warfare, Vision Paperbacks, Satin Publications Ltd., London, 1999 (a most readable and educational book for the nonspecialist), U.S. Congress, Office of Technology Assessment, Proliferation of Weapons of Mass Destruction: Assessing the Risks, Government Printing Office,Washington, D.C., 1993 (a major study on WMD and the risks to the U.S., including to the U.S. civilian population); Global Proliferation of Weapons of Mass Destruction, Part I, Senate Hearing 104-422, Hearings Before the Permanent Subcommittee on Investigations of the Committee on Governmental Affairs, U.S. Senate, Oct. 31 and Nov. 1, 1995. [86]. Unfortunately, the extant unclassified references on longitudinal EM and more advanced EM weapons seem to be the publications by the present author, e.g., T.E. Bearden, “Mind Control and EM Wave Polarization Transductions, Part I”, Explore, 9(2), 1999, p. 59; Part II, Explore, 9(3), 1999, p. 61; Part III, Explore, 9(4,5), 1999, p. 100-108;—”EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves,” Journal of New Energy, 3(2/3), 1998, p.12-28;—Energetics of Free Energy Systems and Vacuum Engine Therapies, Tara Publishing, Internet node, July 1997;—Gravitobiology: A New Biophysics, Tesla Book Co., P.O. Box 121873, Chula Vista, CA 91912, 1991;—Fer-de-Lance, Tesla Book Co., 1986;—AIDS: Biological Warfare, Tesla Book Co., 1988;—Soviet Weather Engineering Over North America, 1-hour videotape, 1985;—Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page inclosure to CTEC Letter, “Saving the Lives of mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers,” to Major General Thomas H. Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May. 4, 1998;—”Overview and Background of KGB Energetics Weapons Threat to the U.S.,” updated Jan. 3, 1999, furnished to selected Senators and Congresspersons. [87]. As an example, for decades Castro ran guerrilla and agent training camps in Southern Mexico. Many of the graduates of those camps—trained terrorists all—have been infiltrated across the U.S. border and into the U.S., to bide their time and wait for instructions.  
Some estimates are that several thousand such Castro agents alone are already on site and positioned for sabotage, poisoning of water supplies, destruction of transmission line towers, destruction of key bridges, etc.  Several other nations hostile to the U.S. are also known to have agent teams already on site within the U.S.  The new form of warfare/terrorism is to introduce the “troops” into the adversary’s nation and populace in advance, as well as weapons caches, etc.  So such preparations have definitely been accomplished within the United States, and undoubtedly some are still in progress and ongoing. [88]. ‘E.g., see Stanislov Lunev and Ira Winkler, 1998, ibid. Quoting, p. 22: “Though most Americans don’t realize it, America is already penetrated by Russian military intelligence to the extent that arms caches lie in wait for use by Russian special forces — or Spetznatz.” Another way to get a weapon into the country is to have an ‘oceanographic research’ submarine deliver the device — accompanied by GRU specialists — to a remote section of coastline. The Author Dr. Thomas Bearden (Lieutenant Colonel U.S. Army – Retired) is presently the President and Chief Executive Officer, CTEC, Inc., a Fellow Emeritus of Alpha Foundation’s Institute of Advanced Study (AIAS) and a Director of the Association of Distinguished American Scientists (ADAS).  He has a Science PhD, a MS in Nuclear Engineering, BS in Mathematics, with minor in Electronic Engineering as well as a graduate of C&GSC, U.S. Army and graduate of the U.S. Army Guided Missile Staff Officer’s Course (equivalent to MS in Aerospace Engineering). He also has graduate courses in statistics, electromagnetics and numerous missile, radar, electronic warfare, and counter-countermeasures courses. He had twenty years of active service in the U.S. Army. His field Artillery, Patriot, Hawk, Hercules, Nike Ajax, and technical research experience was followed by nineteen years of technical research in re-entry vehicles and heat shielding, computer systems, C4I, wargame analysis, simulation and analysis, EW, ARM countermeasures, and strategy and tactics.  He has spent more than 20 years personal research in foundations of electrodynamics and open EM systems far from thermodynamic equilibrium with the active environment, as well as novel effects of longitudinal EM waves on living systems and founded the beginning of a legitimate theory of permissible COP>1.0 electrical power systems. He is the author or co-author of approximately 200 papers and books and has been connected with four successful COP>1.0 laboratory prototype EM power systems. He is one of the world’s leading theorists dealing with the hard physics of over-unity energy systems and scalar weapons technology. Web site:
Table of Contents

1. Jérémie Szeftel: Alas, I missed this talk… thanks Air Canada.
   1. arXiv
   2. slides
2. Sebastian Herr: Small data theory for energy critical periodic NLS
   1. Warm-up remarks
   2. $NLS^{\pm}_5 (T^3)$
   3. New Strichartz Estimates
   4. Perturbative Analysis
   5. Trilinear Strichartz
   6. Sketch of proof
   7. Contraction estimate
   8. Remarks
   9. Questions
3. Benjamin Schlein: Effective evolution equations from many body quantum dynamics
   1. Introduction
   2. Boson Stars
   3. Dynamics of Bose-Einstein Condensates
4. Adrian Constantin: Camassa-Holm
   1. Physical Background
   2. Emergence of Camassa-Holm Equation
   3. Geometric viewpoint as a geodesic on the diffeomorphism group
   4. Integrable Structure
5. Claudio Muñoz: Dynamics of gKdV solitons under perturbations by potentials in front of the nonlinear term
6. Mihalis Dafermos: Superradiance, trapping and decay for waves on Kerr spacetimes in the general subextremal case $|a| < M$
   1. Boundedness and decay for $\square_g \psi = 0$ on Schwarzschild and Kerr
   2. Current state of the art for the quantitative study of $\square_g \psi = 0$
   3. Review of the main features of Kerr spacetimes
   4. Proof of integrated local energy decay
   5. Open Problems
7. Stephen Gustafson: Dynamics on near-harmonic Schrödinger and Landau-Lifschitz maps
   1. Regularity vs. Singularity: energy critical problems
   2. Equivariant Maps
   3. New results: global solutions for degree 2 (LL) with $a_1 > 0$
   4. Standard “modulation theory” approach
   5. A remedy for $m \leq 3$ and its cost
   6. Conclusions
8. Ioan Bejenaru: Near soliton evolution in 2d Schrödinger Maps
   1. Large Data Theory
   2. Equivariant Harmonic Maps on $S^2$
   3. Basic setup for stability/instability
   4. Modulation Theory
9. Frank Merle: Isolatedness of characteristic points for blow-up solutions of the semilinear wave equation
   1. Semilinear Wave Equation, Blowup Surface
   2. Summary of Results
10. Ben Dodson: Defocusing $L^2$-Critical NLS
    1. Mass-Critical NLS
    2. Minimal Mass Blowup Solution Strategy
    3. Galilean Invariance Observations
    4. $L^2_t$ interval decomposition induction argument
    5. Decomposition of nonlinearity
    6. Questions
    7. Postlude
11. Killip: Energy Supercritical Wave Equation in 3d
    1. Introduction
    2. Step 1: Minimal Criminal
    3. Step 2: Minimal Criminal satisfies one of three scenarios
    4. Step 3: No finite time blowup solutions
    5. Step 4: Solutions move more slowly than light speed
    6. Step 5: $L^p$ decay
    7. Step 6: A more quantitative $L^p$ estimate
    8. Step 7: Climax $E(u) < \infty$
    9. Step 8: Completion of Theorem
    10. Questions/Comments
12. Wilhelm Schlag: Global dynamics above the ground state energy
    1. Klein-Gordon and Schrödinger Equations
    2. Questions and Answers
    3. Computer Simulations
    4. Structures in Phase Space
    5. Final State Descriptions near $Q$
13. Jeremy Marzuola: Scattering and soliton stability in ${\dot{H}}^{-1/6}$ for quartic KdV
    1. The problem
    2. Previous Results
    3. Function Spaces
    4. Steps of Proof
    5. Energy spaces
    6. Nonlinear Modulation
    7. Postlude
14. Sijue Wu: Global and almost global wellposedness of the two and three dimensional full water wave equations
    1. Introduction
    2. LWP
    3. Global-in-time behavior
    4. Statements
    5. Normal Forms Discussion
15. Nickolay Tzvetkov: On random data nonlinear wave equations
    1. Framework
    2. Randomized data on $T^3$
    3. Steps in the proof
    4. On the proof of the global existence step for $s > 0$
    5. Questions
16. Pierre Germain: Global existence for coupled Klein-Gordon equations with different speeds
   1. General Problem: Understand global existence and scattering for nonlinear dispersive equations with very nice data
   2. NLW, $d=3$
   3. NLKG
   4. Statement
   5. Spacetime resonance method
   6. Application to our problem
   7. Questions
   8. Postlude
17. Oana Ivanovici: Dispersive Estimates on convex domains
   1. Introduction
   2. Applications
   3. Cusp solutions hugging the boundary
   4. Proof
18. Axel Grünrock: Cauchy Problem for higher order KdV and mKdV equations
   1. Equations
   2. Earlier Results
   3. New Results
   4. Questions
   5. Postlude
19. Selberg: Global existence for the Maxwell-Dirac system in two space dimensions
   1. Maxwell-Dirac and Dirac-Klein-Gordon
   2. Results
   3. 2d DKG
   4. What lies behind the proof?
20. Jason Metcalfe: Long time existence for nonlinear wave equations in exterior domains
   1. Problem $S$
   2. Problem $Q$
21. Scipio Cuccagna: The Hamiltonian structure of the nonlinear Schrödinger equation and the asymptotic stability of its ground states
22. Alexandru Ionescu: Uniqueness theorems in general relativity
   1. Spacetimes
   2. Key properties of Kerr spacetimes
   3. Postlude

This page contains notes by J. Colliander taken at the workshop: I apologize for any mistakes! If any of the speakers would like me to post (or link to) their slides, please send me the file. –Jim Colliander

Sebastian Herr: Small data theory for energy critical periodic NLS

(joint work with Tataru and Tzvetkov)

Energy critical NLS, focusing or defocusing, on a manifold $M$. Specific examples with the Laplace-Beltrami operator. Mostly interested in manifolds with periodic geodesics, for example $\mathbb{T}^3$ or tori crossed with $\mathbb{R}^d$. Target is LWP.

Warm-up remarks

• Warm up: $M = {\mathbb{R}^d}$. Strichartz, dual Strichartz, dispersive decay $\implies$ (Cazenave-Weissler) LWP.
• Non-Euclidean cases: asymptotically Euclidean and nontrapping metrics have been studied.
• Failure of sharp Strichartz estimates on the torus and on the sphere.
• Trapping creates geometric obstructions to dispersion.
• Trapping can create instabilities and failure of Strichartz estimates.
• Known estimates: Strichartz with a loss of derivatives.
• Available estimates have some loss. The loss obeys the scaling but it is insufficient to control the quintic nonlinearity. We end up needing an $L^4$ estimate, which is unavailable.
• Our strategy is to use multilinear, scale invariant versions of Strichartz estimates to better share the derivatives.
• Use almost orthogonality wrt spacetime to reduce estimates to smaller scales.
• Replacements/refinements of $X^{s,1/2}$? We use the critical function spaces $U^p, V^p$.
• We will need refinements of these spaces which are sensitive to finer than dyadic frequency localizations.

New Strichartz Estimates

• We have the Strichartz estimates on functions supported on cubes in Fourier space.
• For all rectangular sets of arbitrary orientation and center, we get a better bound!
• This boils down to a classical estimate (Landau 24) for counting the number of lattice points on a 6d ellipsoid.

Perturbative Analysis

• $U^p$: definition involving all partitions of the line using $U^p$-atoms.
• These are Banach spaces which embed into $L^\infty$.
• $V^p$: we need another type of space. These are functions of finite $L^p$ variation over the partitions of the line.
• $U^p \rightarrow V^p_{rc} \rightarrow L^\infty$ (embeddings).
• $\| u \|_{U^p_{\Delta} H^s} = \| e^{-it \Delta} u \|_{U^p (\mathbb{R}; H^s)}$. (Similarly wrt $V^p$, as in Ginibre’s Asterisque.)
• We then choose $p=2$ and call the resulting spaces $X^s$ and $Y^s$.
• Properties: $U^2_{\Delta} H^s \rightarrow X^s \rightarrow Y^s \rightarrow V^2_{\Delta} H^s$ (embeddings).
• We define restrictions to smaller time intervals….
• $X^s$ and $Y^{-s}$ have a nice duality relationship.

Trilinear Strichartz

• Refinement which generalizes Bourgain’s $p=6$ Strichartz estimate.

Sketch of proof

• Decompose the largest frequency $N_1$ annulus in cubes of the second largest frequency $N_2$.
• We can replace $Y^0$ by $V^2_{\Delta} L^2$.
• We deduce control on the quintic nonlinearity using the trilinear estimate. Some gain is obtained by playing with the exponent $p$ in the $U^p$ spaces, which he attributed to elementary properties of these atomic spaces.
• This gain and some other slack in the other trilinear estimate allows one to sum up over the dyadic scales. I am confused at this point: do we have some derivative slack or are things really tight? Since we are considering an $H^1$ critical problem, there can be no slack…. I discussed this with Sebastian a bit after the talk. I was confused; there is no derivative slack.
• Next, there is a new localization (the rectangle decomposition). The cubes are decomposed into almost disjoint strips of a certain width. The almost orthogonality is gained from the temporal frequency! (This reminded me of the ideas from Koch-Tzvetkov, later developed by Ionescu-Kenig.)

Contraction estimate

• It is not necessary to use the rectangles to get this estimate. For the quintic case, we can avoid the rectangles. For the cubic NLS, by duality you have a 4-linear estimate and by Cauchy-Schwarz you are reduced to bilinear estimates. For the cubic case, it is necessary to use the rectangle decomposition.
• With similar ideas, they can treat the cubic case on $R^2 \times T^2$ or $R^3 \times T$.
• This involves bilinear refinements instead of cubic refinements.

Remarks

• Small data GWP for energy critical NLS on certain manifolds where the arguments of the Euclidean setting fail.
• Large data is a very interesting problem.
• This is the first critical result for NLS on a compact manifold.

Questions

• Quintic NLS on the 3-sphere? Strichartz estimates fail but it is possible to control the second Picard iteration.
• Cubic NLS on $T^4$.
• Flat waveguides?
• $L^2$ critical case?

Benjamin Schlein: Effective evolution equations from many body quantum dynamics

Resources: Schlein’s talk at ICMP 2009, Schlein’s Zurich Lectures

Consider $N$ particles moving in 3d. These particles can be described in quantum mechanics by a wave function $\Psi_N \in L^2 (R^{3N})$. The probability density $| \Psi_N (x_1, x_2, \dots, x_N)|^2$ represents the probability of finding particle 1 at location $x_1$ and so forth. Bosons are symmetric wrt particle interchange; fermions are antisymmetric. We will restrict in this talk to bosonic symmetry: for all permutations $\pi$,
$$ \Psi_N (x_{\pi(1)}, \dots, x_{\pi(N)}) = \Psi_N (x_1, \dots, x_N). $$
The dynamics of the wave function is governed by the Schrödinger equation
$$ i \partial_t \Psi_N = H_N \Psi_N, $$
$$ H_N = \sum_{j=1}^N (-\Delta_{x_j} + V_{ext} (x_j)) + \lambda \sum_{i<j} V(x_i - x_j). $$
We have well-defined local dynamics. The problem is that we have way too many particles in typical physical systems. We want to find effective descriptions of the dynamics. In certain regimes, we can approximate this complicated but linear evolution using effective equations.

Mean Field Regime

The particles interact with many other particles. The strength of each of these many interactions is small so that the effect of all of them is of order 1: $N \gg 1, \lambda \ll 1$.
We will assume that $N \lambda \sim 1$. The dynamics are generated by the mean field Hamiltonian:
$$ H^{mf} = \sum_{j=1}^N (-\Delta_{x_j} + V_{ext} (x_j)) + \frac{\kappa}{N} \sum_{i<j} V(x_i - x_j). $$
We study the dynamics emerging from a product wave function:
$$ \Psi_N (x_1, \dots, x_N) = \prod_{j=1}^N \phi (x_j). $$
Because of the interactions, we can’t expect that the product wave function remains of product form. But, in the mean field case, we might expect that $\Psi_N (t) \sim \phi(t)^{\otimes N}$. If we assume this, we obtain a self-consistent Hartree equation. Here is the heuristic step:
$$ \frac{\kappa}{N} \sum_{j} V(x_i - x_j) \sim \frac{\kappa}{N} \sum_{j \neq i} \int V(x_i - y) |\phi(y)|^2 dy \sim \kappa (V * |\phi(t)|^2) (x_i). $$

Reduced Densities

• $\gamma_N (t) = |\Psi_N (t)\rangle \langle \Psi_N (t)|$
• Partial traces.
• When we take partial traces, we lose some information; it is integrated out. However, we are only interested in the data that can be extracted based on measurements of finitely many particles.

Theorem (under suitable assumptions on $V$): Let $\phi \in H^1 (R^3)$, $\Psi_N$ a pure product wave function, and $\Psi_N (t)$ the linear evolution of the many body system. Then for all fixed $k \in {\mathbb{N}}, t \in R$, the reduced density matrices converge to the projectors built on the $\phi$ evolution, where $\phi$ solves the Hartree equation.

• The more singular the potential, the more difficult it is to prove the theorem.
• Spohn 1980: proved this for bounded $V$.
• Erdös-Yau 2000: $V(x) = \pm \frac{1}{|x|}$.
• Rodnianski-Schlein 2008: $V(x) = \pm \frac{1}{|x|}$, gives quantitative convergence with control by $\frac{C}{N}e^{kt}$.
• The RS work was based on an approach by K. Hepp.
• The approach is based on a representation of the problem on Fock space.
• Coherent states and quantum field theory ideas.
• Knowles-Pickl 2009: improved to more singular potentials.
• Grillakis-Machedon-Margetis 2009 I, II: second order corrections to the mean field dynamics, giving norm convergence.

Boson Stars

• $N$ particle Hamiltonian
$$ H_N = \sum_{j=1}^N \sqrt{1 - \Delta_{x_j}} - G \sum_{i<j} \frac{1}{|x_i - x_j|} $$
• $N \gg 1, G \ll 1, NG = \kappa$
• $\forall N ~ \exists ~ \kappa(N)>0$ (critical kappa) such that:
• $\inf \frac{\langle \Psi , H_N \Psi \rangle }{\| \Psi\|_2^2} = 0$ if $\kappa \leq \kappa(N)$
• $\inf \frac{\langle \Psi , H_N \Psi \rangle }{\| \Psi\|_2^2} = - \infty$ if $\kappa \geq \kappa(N)$
• Lieb-Yau proved that $\kappa(N) \rightarrow \kappa^H$ as $N \rightarrow \infty$.
• Look at the corresponding effective field equation.
• For $\kappa \leq \kappa^H$, we have global well-posedness.
• For $\kappa \geq \kappa^H$, there exist finite time blowup solutions (Fröhlich-Lenzmann 2006).

Theorem (Michelangeli-Schlein 2010): Let $\phi \in H^2 (R^3)$, form the product wave function $\Psi_N$, and let $\Psi_N(t)$ evolve according to the regularized Hamiltonian (where the singularity is tamed by adding a small positive term to the denominator which vanishes as $N \rightarrow \infty$). If we have $H^{1/2}$ control on the nonlinear level by a constant $k$ over a time interval $[0,T]$ then we have convergence. Moreover, if the nonlinear problem explodes then the energy per particle in the linear problem also blows up. (The hypotheses were a bit strange to me…. I asked about it after the talk and need to look at the paper.)

Dynamics of Bose-Einstein Condensates

Drop the external potential.
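For orientation (my gloss, not from the talk): the self-consistent Hartree equation reached by the heuristic step above is
$$ i \partial_t \phi = (-\Delta + V_{ext}) \phi + \kappa (V * |\phi(t)|^2) \phi. $$
In the Gross-Pitaevskii regime the interaction potential is scaled to be so short-ranged that the convolution acts like a local term proportional to $|\phi|^2 \phi$, which is how the cubic equation mentioned next arises.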
Effective dynamics in this case is described by the Gross-Pitaevskii equation: $NLS_3^+$. The derivation of effective dynamics in this setting has only been established for the defocusing case.

Adrian Constantin: Camassa-Holm

Physical Background

2d water waves over a flat bed. He draws a curve above a flat bottom at $y = - h_0$; the free surface is given by the graph $y = \eta (x,t)$. He writes the Euler equations, mass conservation, and imposes reasonable boundary conditions. These are generally accepted to be the right model. I will work with one other assumption: $u_y - v_x = 0$, irrotational flow. There are various scales you can plug into the problem and then you can non-dimensionalize. The problem can then be written in terms of just two parameters $\epsilon$ and $\delta^2$, where
$$ \epsilon = \frac{a}{h_0}, \qquad \delta = \frac{h_0}{\lambda}, $$
with $a$ the wave amplitude and $\lambda$ the wavelength. Small amplitude means $\epsilon \ll 1$, and $\delta$ is the shallowness parameter, so shallow water wave theory means that $\delta$ is small. The shallow water, small amplitude regime is $\delta \ll 1$ and $\epsilon = O(\delta^2)$. If you study this, you get the KdV and BBM equations. In this regime, these model equations enjoy global existence. The nondimensional form of the KdV equation is
$$ \eta_t + \eta_x + \frac{3 \epsilon}{2} \eta \eta_x + \frac{\delta^2}{6} \eta_{xxx} = 0. $$
Here is the emerging BBM:
$$ \eta_t + \eta_x + \frac{3 \epsilon}{2} \eta \eta_x + \delta^2 \Big(\beta + \frac{1}{6}\Big)\eta_{xxx} - \beta \delta^2 \eta_{xxt} = 0, \qquad \beta \geq 0. $$
KdV is completely integrable and has solitons. BBM has some nice analytic features but only 5 conserved quantities. These derivations are $O(\delta^4)$.

Where does Camassa-Holm come into this business? Since all physically reasonable waves fall into this regime, we have global existence. We would like to have a simple model that captures the phenomenon of wave breaking: $\eta$ is bounded, $|\eta_x|$ becomes unbounded in finite time. (This is described as a desirable extension in the book Linear and Nonlinear Waves, by Whitham.)

Emergence of Camassa-Holm Equation

Moderate amplitude (shallow water): $\epsilon = O(\delta), \delta \ll 1.$ Johnson found a path like this to see Camassa-Holm emerge. (“Unfortunately, the original derivation of that equation was not correct.” “They assume that $\epsilon$ is small and later they assume that $1/\epsilon$ is small…”) You can derive an equation for the horizontal velocity at a particular depth. If you do this derivation at depth $\frac{1}{\sqrt{2}} h_0$, the Camassa-Holm equation emerges. Another equation, called Degasperis-Procesi, emerges when you consider this at a different depth:
$$ u_t - u_{txx} + 3k u_x + 4 u u_x = 3 u_x u_{xx} + u u_{xxx}. $$
Both of these equations are integrable! (I did not know about the D-P equation before…) Both of these equations have solutions which break down, in the fashion of wave breaking described above. For CH, we have some conservation laws which imply the solution stays in $L^\infty$. Fokas and Fuchssteiner found the CH equation in a list of 12 equations that are the only completely integrable ones. It is difficult to use the integrable systems machinery to study the wave breaking. For DP, if you start with data in a nice enough space (say $H^{3/2}$), you can then prove the solution stays in $L^\infty$.

Tzvetkov Q: Can the solution be extended after the wave breaks? A: You might be able to extend the solution like shocks. But as for the relevance of the wave breaking event in CH, it is not clear whether CH remains a good approximation of the Euler equations there.
Therefore, even if the PDE theory for CH can be extended, this does not mean you have a relevant extension modeling the water wave problem.

Geometric viewpoint as a geodesic on the diffeomorphism group

There is a famous paper of Arnold that shows that Euler may be viewed as a geodesic flow on the diffeomorphism group. CH and KdV can be similarly interpreted as geodesic flows on the Bott-Virasoro algebra. This geometry is very nice, very appealing. However, this geometric point of view does not give a useful consequence from the viewpoint of analysis. The best result for CH is that when the solution does not change sign it stays global. This is built from Noether’s theorem, which provides a different view on the CH equation.

Write ${\mathcal{D}} = \{ \phi: C^\infty ~\text{orientation preserving diffeos} \}$. This is a Lie group and the tangent space at the identity is $C^\infty (S)$. We can then move this tangent plane around using Lie algebra properties by right-translating. The geodesic equation looks like $\phi_t = u(t, \phi)$ where $u \in \mathcal{D}$.

• If I do this for $L^2$, I get $u_t + 3 u u_x = 0$. However, the Riemannian exponential map $\exp_R$ is not a local chart.
• If I do this for $H^1$ (I believe this is referencing the Riemannian structure imposed on the $C^\infty$ diffeos) we get CH.
• Consider the Bott-Virasoro algebra $Vir = C^\infty \times {\mathbb{R}}$; if you do some Bott cocycle thing which looks like a diffeo flow with a twist, you get KdV. (This is a result of Ovsienko (?) and Khesin.)
• The DP equation also has some interpretation this way but it is more complicated.

Integrable Structure

• CH Lax pair. This is an isospectral problem. For CH, we have $\psi_{xx} = \frac{1}{4} \psi - \lambda m \psi$, $m = u - u_{xx} + k$. This is a weighted spectral problem. If $u$ solves CH, then the eigenvalues of this equation are time independent.
• DP Lax pair: $\psi_{xxx} - \psi_{x} - m z^3 \psi = 0$, $z \in {\mathbb{C}}$, $m = u - u_{xx} + k$. When $m$ is strictly positive, we can perform certain Liouville substitutions which allow us to recast this as a regular Sturm-Liouville problem.

This talk did not properly survey the literature. Instead, the talk was intended to highlight a physically relevant derivation of CH and to show that this equation is mathematically and physically interesting.

Tzvetkov asks: Is there a Miura transformation? Seems to be no…. although AC appeared to me to answer a different question.
Merle asks: Can you track the blowup using the integrable machinery here? We need estimates on the eigenvalues, which hold if $m>0$, and you can give examples which show that sign-changing $m$ breaks down the needed estimates on the eigenvalues.
Ponce asks: What is the best LWP theory for CH? Answer: Kato’s theory needs $H^{3/2}$. You have existence, uniqueness and continuous dependence in $H^1$, but the continuous dependence is weaker.
Ponce asks: Is the peakon stable? A: Yes, this is a result of Molinet and El Dika.

Claudio Muñoz: Dynamics of gKdV solitons under perturbations by potentials in front of the nonlinear term

My computer ran out of battery…. arXiv: Muñoz on KdV; arXiv: Muñoz on NLS.

Mihalis Dafermos: Superradiance, trapping and decay for waves on Kerr spacetimes in the general subextremal case $|a| < M$

(joint work with Igor Rodnianski)

Kerr family $(0 \leq |a| \leq M)$ of metrics (in Boyer-Lindquist coordinates):
$$ g_{M,a} = - \frac{\Delta}{\rho^2}(dt - a \sin^2\theta \, d\phi)^2 + \frac{\rho^2}{\Delta}dr^2 + \rho^2 d\theta^2 + \frac{\sin^2 \theta}{\rho^2}(a \, dt - (r^2 + a^2) d\phi)^2. $$
Here $\rho^2 = r^2 + a^2 \cos^2 \theta$, $\Delta = r^2 - 2 M r + a^2 = (r - r_{-})(r-r_{+})$, $r_{+} \geq r_{-}$. This is a vacuum solution $(R_{\mu \nu} =0)$ and has Killing fields $\partial_t, \partial_{\phi}$.
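A quick check that is not in my notes but follows directly from the displayed formula for $\Delta$: the roots of $\Delta = r^2 - 2Mr + a^2$ are
$$ r_{\pm} = M \pm \sqrt{M^2 - a^2}, $$
so the two horizons are real and distinct exactly in the subextremal range $|a| < M$ of the title, they coincide when $|a| = M$ (the extremal case mentioned below), and $a = 0$ recovers the Schwarzschild value $r_+ = 2M$.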
The domain of outer communications is $r > r_{+}$. The case $a=0$ is Schwarzschild 1916. The Kerr case is $a \neq 0$ and was discovered in 1963. Penrose diagram for Kerr $(0 < |a| < M)$. What is a black hole? A spacetime has a black hole if the past of null infinity is not the entire spacetime. Both Kerr and Schwarzschild are expected to be unstable and the structure of the singularity should be something in between.

Penrose Diagrams (images taken from Dafermos-Rodnianski)

• [Penrose diagram of Schwarzschild spacetime]
• [Penrose diagram of Kerr spacetime]

These are natural questions from several points of view. One important application of these ideas is to address the stability properties of these solutions of the Einstein equations.

Current state of the art for the quantitative study of $\square_g \psi = 0$

1. Boundedness in a general class of $C^1$ stationary axisymmetric spacetimes [DR].
2. “Integrated local energy decay” for exactly Kerr:
   1. Slowly rotating case $|a| \ll M$: [DR], Tataru-Tohaneanu, Andersson-Blue.
   2. $|a| < M$: [DR], this talk!
3. Pointwise-in-time decay from 1. and 2. (energy based method [DR], based on the resolvent method of Tataru).

We typically think of proving decay as a two step process. We first prove that some spacetime integral of energy to the future of an arbitrary hyperboloidal space-like hypersurface is controlled by the energy on the hypersurface. This type of result has been shown in the slowly rotating case. Once we have items 1. and 2. (boundedness and integrated local energy decay) we can use the vector field method to get pointwise-in-time decay. These methods appear to be robust and might be applicable to nonlinear problems.

Review of the main features of Kerr Spacetimes

1. Red-shift (associated to the event horizon)
2. Superradiance
3. Trapping (trapped null geodesics)

Two observers move in spacetime. You think of observer A emitting constant frequency signals and you imagine these being received by observer B, so that the frequency is shifted to the red. First discussed in 1939 by Oppenheimer-Snyder. Extremal case $a = M$: the red-shift factor at the horizon vanishes. The positivity of the surface gravity is the geometric quantity underlying the red shift. [Penrose diagram of the red shift]

In Schwarzschild, the Killing v.f. $\partial_t$ is timelike in the exterior, becoming null on the horizon. Thus there is a conserved (by Noether) non-negative definite energy, by the timelike condition. The only subtlety is that the energy degenerates at the horizon. In stationary perturbations of Schwarzschild, $\partial_t$ in general becomes spacelike near the horizon. This happens already for Kerr with $0 \neq |a| \ll M$. The corresponding energy is conserved but does not have a sign. For particle motion, this leads to the so-called Penrose process. For waves, this leads to the phenomenon of superradiance (Zel’dovich). In particular, using the conservation law associated to $\partial_t$ one cannot prove a priori boundedness, even away from the horizon. The energy radiated to null infinity might be bigger than the initial energy; this is called superradiance. For Schwarzschild, the only trouble is near the horizon because we have useful energy control on the energy radiated to null infinity. For Kerr, we don’t have that because of the superradiance phenomenon, and this creates new difficulties. We need to prove boundedness and decay everywhere, not just near the horizon. On Schwarzschild, the photon sphere $r = 3M$ has the property that it contains null geodesics.
These null geodesics thus neither escape to null infinity nor fall into the horizon. In Kerr, the behaviour persists, but it is more complicated! It is not obviously located in physical space but can be thought of more easily in phase space. One can concentrate energy for arbitrarily large times near trapped null geodesics. One has to capture this to prove dispersive results. In particular, pointwise-in-time decay estimates for energy must lose derivatives (Ralston).

Proof of integrated local energy decay

We will only discuss the first energy. Higher order estimates require commutation with the redshift vector field, the Hawking v.f. and $\partial_t$. The method of proof will exploit energy currents. In the large $a$ case, the construction of these currents will need to be frequency localized for two reasons:

1. To distinguish between non-superradiant and superradiant frequencies.
2. To degenerate at the correct value of $r$.

A convenient way of doing both at the same time is frequency localizing via Carter’s celebrated separation of the wave equation. The Kerr geometry only has two Killing fields. This is not enough to separate the equation. However, there is some extra symmetry there that helps you. In view of Ricci flatness, this separability is equivalent to separability of the Hamilton-Jacobi equations and to the existence of a Killing tensor. These three objects are devices to extract this hidden structure. Big display…. can’t keep up with that. We are studying $\square_g \psi = F$ (where $F$ arises from cutoffs); we take $\widehat{\Psi}$ and rewrite it using some structure of an oblate spheroidal metric in the $\theta, \phi$ variables. The content of what Carter noticed is that when you do this, you can show that there is a hidden ODE lurking in this decomposition….. More big display… working pretty hard here, lots of indices…. A new coordinate $r^{*}$ is introduced so that things look more like the Regge-Wheeler coordinates in Schwarzschild. With this decomposition, we can identify the superradiant frequencies. The superradiant modes should be thought of as the modes which send infinite negative energy through the horizon. Completely separated energy current identities (analogues of $\nabla^\mu (T_{\mu \nu}(\psi) (y \partial_{r^{*}})^\nu)$, etc.). Lots of notation with symbols I don’t know how to make….

General Idea

From the above currents, produce integral identities with positive definite (underlined) bulk terms and (upon summation) we get the integrated decay, except in regions which can’t be handled this way, basically because those frequency ranges are associated with trapping.

Kerr for small $|a|$: The constructions for all the other frequency ranges can be easily perturbed to yield positive definite bulks. The boundary terms, however, are not a priori controlled; this is the problem of superradiance. Since $|a|$ is small, this can be remedied by adding on a small amount of the redshift identity. Basically, for the small $|a|$ case, we have very little superradiance and can control it with the red shift.

For large $|a|$, we need another idea. The key observation seems to be that superradiant frequencies are not trapped. You can accommodate the superradiance using this idea together with the red shift.

Remark 1: In the small rotation case, the relationship between superradiance and red shift was the key idea.
Remark 2: There are no trapped null geodesics which are orthogonal to $\partial_t$. This is a phenomenon identified by Alexakis-Ionescu-Klainerman in their works on uniqueness properties.
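As a concrete illustration of the trapping discussed above (my own aside, not from the talk): on Schwarzschild, a null geodesic with energy $E$ and angular momentum $L$ obeys
$$ \dot r^2 + V(r) = E^2, \qquad V(r) = \Big(1 - \frac{2M}{r}\Big)\frac{L^2}{r^2}, $$
and $V'(r) = 0$ forces $r = 3M$; orbits at this radius neither fall into the horizon nor escape to null infinity, which is exactly the photon sphere mentioned earlier.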
Some other important results

• Positive and negative cosmological constant cases.
• Other equations, like Dirac and Maxwell, instead of the wave equation (Blue, Hafner, Finster et al.).

Open Problems

1. Extremal case $a=M$ (recent results of S. Aretakis)
2. Higher dimensions (Schlue, Laul-Metcalfe)
3. Other measures of decay, Strichartz, …
4. Robust additional decay
5. Maxwell equations on Kerr (Blue) (earlier work by Blue on Schwarzschild)
6. Equations of gravitational perturbation
7. Nonlinear stability of Kerr?

Stephen Gustafson: Dynamics on near-harmonic Schrödinger and Landau-Lifschitz maps

(with Nakanishi, Tsai) The paper that precedes the new stuff here is posted.

Landau-Lifschitz (1930s): magnetizations $u(t,x) \in R^3$ with a constraint $|u(t,x)| = $ constant. The Landau-Lifschitz equation is:
$$ u_t = a_2 \, u \times \Delta u - a_1 \, u \times (u \times \Delta u), \qquad a_1 \geq 0. $$

Broader context:

• $u(\cdot, t): R^2 \rightarrow S^2$
• energy $E(u) = \frac{1}{2} \int_{R^2} |\nabla u|^2 dx$
• heat flow: $u_t = \Delta u + |\nabla u|^2 u = - E’(u)$
• Schrödinger map: $u_t = u \times \Delta u = J E’(u)$, where $J$ is a complex structure
• Landau-Lifschitz is a combination of these equations
• Also related to wave maps

Regularity vs. Singularity: energy critical problems

Energy is scale invariant in $R^2$. $E(u) \geq 4 \pi \, |\mathrm{degree}(u)|$, with equality iff $u$ is a harmonic map. So, what can you say?

Heat flow:
• $E< 4 \pi \implies$ global smooth solutions (Struwe 1985).
• $E > 4 \pi \implies$ singularities may form; follows from Chang-Ding-Ye 92 via a subsolution construction.

Wave map: …
Schrödinger map: …

Equivariant Maps

Simplest setting: near harmonic, equivariant maps.

• $m \in Z^+$ is the degree
• $(r, \theta)$ are polar coordinates
• $R = {\hat{k}}\times$ (rotation about ${\hat{k}}$)

There is a 2-parameter family of harmonic maps (at the minimal energy $E = 4 \pi m$). The energy is constrained: we work in a small energy shell above the $4 \pi m$ level. This is a restrictive class, but the known blowups are in this class. For the heat flow case, we only have blowups with $m=1$, but for wave maps we have examples with $m \geq 1$.

Theorem (Gustafson-Nakanishi-Tsai 09): For $m \geq 3$, solutions are global and converge to a (nearby) harmonic map (asymptotic stability):
$$ \| u(t) - H^{\mu} \|_{L^\infty} + a_1 E (u(t) - H^\mu) \rightarrow 0 \quad (t \rightarrow \infty). $$

• includes the pure Schrödinger map case $a_1 = 0$
• also for $m=2$ heat flow ($a_2 = 0$) in a symmetry sub-class:
• solutions are global and converge to a harmonic map family
• the parameters can drift, e.g. to give infinite-time blowup: $s(t) \rightarrow 0$. In particular asymptotic stability fails. (Here $s$ is the length scale of the harmonic map.)

He draws a picture: a $\delta$ neighborhood of the harmonic maps in the energy space, viewed as an infinite graph over $s>0$. For $m \geq 3$, solutions in the $\delta$ neighborhood move dynamically back down to the harmonic maps. For $m=2$, you can drift all over the place in the case of heat flow.

New results: global solutions for degree 2 (LL) with $a_1 > 0$

Setting as above, with dissipation. Theorem (GNT): For $m=2$, solutions are global and converge to the harmonic map family, but not to one particular harmonic map.

• The harmonic map family parameter $\mu(t)$ does not have to converge in general. But it will converge if the initial perturbation has slightly faster spatial decay:
$$ u_{x_1} - u \times u_{x_2} \in |x| L^1 \implies \mu(t) \rightarrow \mu_\infty. $$
• For $m=1$, and only for the heat-flow, finite time blowup can occur, but not for more localized perturbations. (We expect this holds for (LL) but we have no proof.)

Remark: new results of Bejenaru-Tataru for the degree 1 ($m=1$) Schrödinger map.
Harmonic maps are unstable; stable with more localization. You should think of the $m=1$ Schrödinger map case as the most delicate.

Standard “modulation theory” approach

Take your solution and split it into the harmonic map piece plus a remainder. Rewrite things for this remainder term. You look at the linear part of the equation driving the remainder dynamics. Because of invariances of the equation, the linearized operator has zero modes. What would you do to kill the kernel? Choose the parameter at time $t$ so that the perturbation is orthogonal to the kernel of the linearized operator. This leads to an ODE for the parameter dynamics. It remains to get dispersive/diffusive estimates to prove that the parameter converges as time goes to infinity. Do we have these estimates?

Dispersive/diffusive estimates

The remainder $z$ is controlled by a derived quantity $q$, so the analysis is a bit indirect and we now fight to control this related quantity. The $L^2$ norm of $q$ measures the energy gap above $4 \pi m$. The (recast) remainder $q$ satisfies a reasonable nonlinear Schrödinger-(heat) equation. This equation has a potential which depends upon $m$. For $m>1$, we have a lower bound estimate on the potential $V$. In this case, we can get “Strichartz” estimates on $q$.

• For $m \geq 4$, this standard approach works: G-Kang-Tsai 08, [Guan-G-Tsai 08].
• For $m \leq 3$, the orthogonality condition is incompatible with the desired $L^2_t$-decay estimates and the standard approach fails.
• For $m \leq 2$, the orthogonality condition makes no sense.
• For $m=1$, we don’t even have an $L^2$-eigenfunction but rather a resonance.

A remedy for $m \leq 3$ and its cost

• Change the orthogonality condition. Instead of demanding that the remainder be orthogonal to the kernel of the linearized operator, require that the remainder be orthogonal to a localized function (unrelated to the kernel). Of course, there is a penalty for this change. The parameter dynamics ODE then transforms to involve another term and we have different parameter dynamics. This extra term is analyzed in some way.
• Solution 1: “Normal form” for $m=3$. By integrating by parts a few times, we get good control on a modified quantity $[\mu (t) - (\psi^s /s \,|\, q)]$. We need control on the correction term. For $m \geq 3$, the correction basically does nothing so things work as before.
• Parameter drift for $m=2$ heat flow. In this case, the “normal form” correction need not be bounded. For the heat-flow case, it is possible to simplify up to converging errors, and the nonintegrability of the correction term can be exploited to drive the blowup, blowdown and oscillation properties of the scale parameter $s(t)$.
• Solution 2: Take $a_1 >0$ and exploit the dissipation. “We need to somehow stop pretending that the Schrödinger and heat equations are the same…” In the dissipative case, we can extract some dissipation on the correction term. (Probably, this decay is not available in the Schrödinger case.) There are some factorization tricks where the operator is recast, some Duhamel tricks… and an iteration on dyadic time intervals where the time dynamics of the parameter $\mu(t)$ are updated. What emerges is an upper bound by $\log t$ on the parameter $\mu(t)$.

Conclusions

• Near harmonic dynamics for (LL) for degree $m \geq 3$.
• For $m = 2$, more complex behavior.
• For $m=1$, do finite time singularities form? This is only known for the heat-flow.
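To make the two-parameter harmonic map family in this talk (and in the next one) concrete, here is my own addition, using the standard equivariant ansatz from the papers cited above: an $m$-equivariant map takes the form $u(r,\theta) = (\sin\psi(r)\cos m\theta, \sin\psi(r)\sin m\theta, \cos\psi(r))$, the harmonic map equation reduces to
$$ \psi_{rr} + \frac{1}{r}\psi_r = \frac{m^2}{2r^2}\sin 2\psi, $$
and the profile $\psi(r) = 2\arctan\big((r/s)^m\big)$ solves it (via the first-order equation $\psi_r = \frac{m}{r}\sin\psi$) for every scale $s > 0$; together with rotations this gives the two-parameter family of harmonic maps at the minimal energy $4\pi m$.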
Ioan Bejenaru: Near soliton evolution in 2d Schrödinger Maps

(joint work with Tataru)

Much of this will be déjà-vu since it overlaps with Gustafson’s talk. Schrödinger maps: the Heisenberg model in ferromagnetism, or the conservative part of the Landau-Lifschitz equation. Energy conservation; scale invariance gives $s_c = n/2$, which is the threshold for the well-posedness theory. The $n=2$ case is energy critical.

Main Question: Global regularity of smooth solutions? Sulem-Sulem 86 established existence of local solutions for $s> [n/2] + 2$, and this was improved to $s> [n/2] +1 $ by McGahagan. An “easier” problem is global regularity for “small” initial data. Chang-Shatah-Uhlenbeck 00, …, Bejenaru-Ionescu-Kenig-Tataru 08 established GWP of the SM for small data in the critical Sobolev space in $n=2$. Small data is now resolved. What happens for large data?

Large Data Theory

The dynamics depend upon the target manifold. For the sphere target, the problem is called “focusing” and for the hyperbolic target, the problem is called “defocusing”. This terminology makes good sense for wave maps but is not as explicitly understood in the case of SM. A key feature in these problems is played by the existence of solitons: solutions of $ u \times \Delta u = 0$, which are known as harmonic maps. There are no nontrivial finite energy harmonic maps for the hyperbolic target. There are nontrivial harmonic maps with finite energy for the $S^2$ target. A SM which fails to be regular at some time bubbles like a HM.

Main Conjecture: In the hyperbolic case, the problem is globally wellposed independent of the size of the data. In the spherical case, solutions emerging from data with energy below $4 \pi$ will be globally wellposed, while the problem with higher energy may develop singularities.

The above conjecture is known for the harmonic map flow: Eells-Sampson 64, Struwe 85, Chang-Ding-Ye 92. Singularity formation for the WM problem: recent works RS08, KST08, RR10. There is some progress on more general targets.

Equivariant Harmonic Maps on $S^2$

These are maps from the plane into the sphere. Think that the origin is mapped to the south pole and the point at infinity is mapped to the north pole. Think of the image of the positive $x$ axis as a curve connecting the south and north poles. When you move around the domain in $\theta$ one time, the curve connecting the north and south poles moves around the sphere some number of times. Once you have these maps, you can fatten them up into a two parameter family of maps.

Basic setup for stability/instability

Define the two parameter family of $m$-equivariant harmonic maps. If you have slightly more energy, then you float around the harmonic map family. But do you stay near a particular harmonic map or can you float far away? If you move far, the higher derivatives do not stay under control. We want to describe the trajectory of these maps.

Modulation Theory

Linearize near a soliton and study the zero eigenvalue; these solutions do not disperse. You want to get rid of this eigenvalue. There is room to do that because we have some choice about which soliton you linearize around. This approach has been developed by Gustafson-Kang-Tsai 06 and Gustafson-Nakanishi-Tsai 09. This has been pushed further recently but involves higher degree hypotheses. We have decided to concentrate on the $m=1$ case.

Theorem (Bejenaru-Tataru): 1. Let $m=1$ and $\gamma \ll 1$.
Then for each 1-equivariant initial data $u_0$ satisfying $ \| u_0 - Q(0,1)^1 \|_X \leq \gamma$, there exists a unique global solution $u$ so that $u - Q(0,1)^1 \in C(R,X)$ and $ \| u - Q(0,1)^1\|_{C(R,X)} \lesssim \gamma$. There exists a solution $u$ with the additional property that $ \| u(0) - Q(\alpha, \lambda)^1 \|$ ….. ack…… slide switched, I could not keep up. It is an instability result with a large upper bound. It was not clear to me if the assertion was that this big drift really occurs, but certainly this is suggested.

Ionescu-Gustafson-Bejenaru conversation: localizations of the perturbations can restore the stability for the heat flow case…

Background References (incomplete)

Frank Merle: Isolatedness of characteristic points for blow-up solutions of the semilinear wave equation

(joint work with Hatem Zaag)

I want to give a talk about a series of works I have done with H. Zaag on the semilinear wave equation.

Semilinear Wave Equation, Blowup Surface

$$ u_{tt} = \Delta u + |u|^{p-1}u $$

Here $p>1$. Let’s collapse to dimension 1. We have initial data $(u_0, u_1) \in H^1 \times L^2$. Summary of the results:

• Local existence: we have local existence up to a blowup time, on $[0,T)$.
• Existence of blowup via the ODE method. There is a more refined condition due to Levine: if a certain energy (not the same as mine) is negative, then $T< \infty$.
• The blowup phenomenon can be spatially localized. Therefore, as in the book of Alinhac, you can produce a blowup surface. The solution is well defined on all backwards cones behind the blowup surface.
• Question: We want to understand the blowup surface. We don’t know anything about it besides that it is 1-Lipschitz.
• A point on the blowup surface is called non-characteristic if the surface has slope smaller than 1 at that point, so it does not touch the boundary of the light cone. Let us denote the set of characteristic points on the curve by $S$. The other points on the curve are non-characteristic and the set of such points is called $R$. Let us denote the blowup curve by $x \rightarrow T(x)$, so it is given by a graph $(x, T(x))$.

Caffarelli-Friedman 85: For $u_0 \geq 0, u_1 \geq 0$, use monotonicity of the wave flow in 1 dimension to prove that $\partial_t u \geq (1 + \delta )|\partial_x u|$, and you can then prove that characteristic points do not exist. This result is a bit misleading. We tried to prove the nonexistence of characteristic points and could not do it. So, we turned our attention to proving the existence of characteristic points.

Summary of Results

• Existence of characteristic points: there exist initial data $(u_0, u_1)$ for which $S$ is nonempty.
• Points of $S$ are isolated; $R$ is open.
• $T(\cdot)$ is $C^1$ on $R$.
• The only way that a characteristic point can arise is like a “hat”: $T’$ from the right and from the left are well defined. (Alinhac has examples for quasilinear equations which can blow up at all points along a line segment of slope 1.) At points of $S$ we have $T’$ of slope 1 on the right and slope -1 on the left.
• At points along $R$, the solution is of one sign, and points in $S$ are points where the solution changes sign. Characteristic points are cusps along the graph of $T(\cdot)$.

A Lyapunov functional (Antonini-Merle). He shows that the solution extends outside the light cone behind noncharacteristic points. This gives you $T’$ well-defined (with the same value from left and right) at a noncharacteristic point.

The talk was hard for me to type up and explain well…. Frank emphasized that the proofs are quite intricate and not presentable in a linear fashion.
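To unpack the “existence of blowup via the ODE method” item above (my gloss; this is the standard construction): spatially constant data reduce the PDE to the ODE $u_{tt} = |u|^{p-1}u$, which has the explicit solutions
$$ u(t) = \kappa (T-t)^{-\frac{2}{p-1}}, \qquad \kappa = \Big( \frac{2(p+1)}{(p-1)^2} \Big)^{\frac{1}{p-1}}, $$
blowing up at $t = T$; truncating such data and using finite speed of propagation then produces finite-energy solutions of the full equation that blow up in finite time.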
Ben Dodson: Defocusing $L^2$-Critical NLS

Mass-Critical NLS

$$ i u_t + \Delta u = \mu |u|^{4/d}u, \qquad u(0,x)= u_0 (x), \qquad x \in R^d $$

We concentrate on the defocusing case, where $\mu =1$. This equation conserves the quantities

• $M(u(t)) = \int |u(t,x)|^2 dx$
• $E(u(t)) = \frac{1}{2} \int |\nabla u(t,x)|^2 dx + \frac{\mu d}{2(d+2)} \int|u(t,x)|^{\frac{2(d+2)}{d}} dx$

Strichartz pairs $(p,q)$: $\frac{2}{p} = d( \frac{1}{2} - \frac{1}{q} ), ~ d \geq 3, ~ p \geq 2.$

$$ A(M) = \sup \{ \| u \|_{L^{2(d+2)/d}_{t,x} (R \times R^d)}: \| u_0 \|_{L^2} = M \} $$

Minimal Mass Blowup Solution Strategy

Theorem (Tao-Visan-Zhang 08): If $u(t,x)$ is a minimal mass blowup solution on $I$, then $\exists ~x(t), \xi(t): I \rightarrow R^d, ~ N(t): I \rightarrow (0, \infty)$ such that
$$ u(t,x) = \frac{1}{(N(t))^{d/2}} e^{i x \cdot \xi(t)} Q_t \Big( \frac{x - x(t)}{N(t)}\Big), $$
where $Q_t$ changes with time but ranges only in a precompact set. For any $\eta > 0, ~ \exists C(\eta) < \infty$ such that
$$ \int_{|x- x(t)| > \frac{C(\eta)}{N(t)}} |u(t,x)|^2 dx < \eta, \qquad \int_{|\xi- \xi (t)| > C(\eta) N(t)} |{\widehat{u}} (t,\xi)|^2 d\xi < \eta. $$

Theorem (Killip-Tao-Visan): To prove GWP it suffices to exclude three scenarios:

1. $N(t) \sim t^{-1/2}, ~t \in (0, \infty)$,
2. $N(t) =1 , ~ t \in (-\infty, \infty)$,
3. $N(t) \leq 1, ~ \liminf_{t \rightarrow \pm \infty} N(t) =0, ~ t \in (-\infty, \infty).$

Then he writes, and doesn’t really explain……

1. $\int_{1}^{\infty} N(t)^3 dt < \infty$
2. $\int_{-\infty}^{\infty} N(t)^3 dt < \infty$

Collapse to $d=3$ for now.

Theorem (CKSTT 04), Interaction Morawetz Estimate:
$$ \| u \|^4_{L^4_{t,x} (J \times R^3)} \lesssim \| u \|^3_{L^\infty_t L^2_x} \| u \|_{L^\infty_t H^{1}_x} $$

He quotes some estimates from KVZ linking time-integrated (over slabs) powers of $N(t)$ with the Strichartz size on the same slabs. On LWP time intervals $J_k$ (defined by diagonal Strichartz norm of size $\epsilon$), we have $N(t_1) \sim N(t_2)$ for $t_1, t_2 \in J_k$.

Galilean Invariance Observations

Using the Duhamel formula, he claims that the Galilean parameter $\xi(t)$ does not move too rapidly. This allows him to localize things near the frequency center and in this way tame the Galilean invariance. The Planchon-Vega paper on interaction Morawetz describes why the interaction Morawetz estimate is Galilean invariant. All these expressions involve Galilean invariant right sides and left sides. He then explains that the Morawetz action leading to the interaction estimate is Galilean invariant. This allows him to claim that
$$ i \partial_t (Iu) + \Delta (Iu) = |Iu|^{4/3} Iu + [|Iu|^{4/3} (Iu) - I(|u|^{4/3}u)] $$
enjoys some nice control (if we could ignore the error term in square brackets). So, we turn our attention to the error term. For $N \leq CK$, we have
$$ \| P_{|\xi - \xi(t)| > N } u(t) \|_{L^2_t L^6_x} \lesssim \Big(\frac{K}{N}\Big)^{1/2} \rho(N), $$
where $\rho(N) \leq 1$ with $\lim_{N \rightarrow \infty} \rho(N) = 0.$

$L^2_t$ interval decomposition induction argument

Bust up $[-T, T]$ into small intervals on which we have good Strichartz control and…. not clear to me what he is doing right here…. Sort the intervals into good and bad intervals, where a bad interval is one where $N(J_k) \geq \frac{\eta_1 (d) N}{2}$. He makes a crude estimate on the bad intervals and pays for them by adding up their contributions. On the good intervals, he changes the organization of the decomposition. Either
$$ \eta_1 (d) N \geq \sum_{J_k \subset G_j} N(J_k) \geq \frac{\eta_1 (d) N}{2}, $$
or $G_j$ lies to the left of a bad interval, or $G_j$ is on the end of $[-T,T]$.
This allows him to claim that the number of $G_j$ is bounded by $C(d) \frac{K}{N}$. …. not clear to me…. but hopefully it will be after I work some more.

Decomposition of nonlinearity

He expands the nonlinearity wrt the decomposition around the moving frequency center $\xi(t)$ and the moving spatial center $x(t)$. He dismisses some parts of the nonlinearity based on the induction hypothesis and the smallness in $L^2$ on the frequency regime far from the moving Galilean center. The bad term that remains needs further study.

Questions

• Colliander: What are the main new ideas beyond the works of Killip, Tao, Visan and Zhang?
  • Galilean invariance taming trick.
  • Barely slipping under the wire.
  • Induction argument using $L^2_t$…
• Colliander: And for lower dimensions?
  • The critical spaces of Koch-Tataru $U^p, V^p$.
  • Harder work on the decomposed nonlinearity due to the absence of the endpoint Strichartz estimate in $d=2$.
  • $d=1$ is easier than $d=2$, which is a nightmare.

Postlude

I had a nice conversation with Fabrice Planchon, who reported having a longer discussion with Dodson in June. Fabrice suggested that the new elements are the Galilean invariance trick, the induction argument exploiting the $L^2_t$ control on the left and right sides of the Duhamel estimate (only available in $d \geq 3$), and the role played by the time integrals of powers of $N(t)$. Technical difficulties in 3d emerge because the nonlinearity is not multilinear, and the analysis there would be simpler if it were. In 2d, we have a nicer nonlinearity but the absence of the endpoint Strichartz estimate in 2d obstructs the $L^2_t$ induction argument. Dodson in fact uses the double endpoint! This conversation made me think that it might be a nice exercise to try to revisit the 2d argument under the false assumption that the forbidden endpoint Strichartz estimate holds, following the 3d strategy. Alternatively, there might be a streamlined (but incomplete) proof which exposes the strategy more cleanly if we assume somehow that the nonlinearity in the 3d case were multilinear.

Killip: Energy Supercritical Wave Equation in 3d

Background References

$$ u_{tt} - \Delta u + u^7 = 0, \qquad u: R \times R^3 \rightarrow R $$

$ E(u^\lambda) = \lambda^{-1/3} E(u)$, so the energy does not control the small scale behavior. This is very alarming.
$$ \mathcal{E} (t) = \| u (t) \|^2_{\dot{H}^{7/6}} + \| u_t (t) \|^2_{\dot{H}^{1/6}}. $$

Theorem (Killip-Visan 2010): If $\mathcal{E} (0 )< \infty $ then either

• $\mathcal{E}(t)$ diverges, or
• $u(t) - u^{\pm} (t) \rightarrow 0$ as $t \rightarrow \pm \infty$, where $u^{\pm}$ is a solution of the linear wave equation.

The radial case was done by Kenig-Merle. Minimal blowup solutions have good spatial decay properties. This is really the main point of their work and ours. Two essential points in the KM work:

• Radial Sobolev embedding: $u \in {\dot{H}}^{7/6} \implies |u| \lesssim r^{-1/3}$.
• $r u(r)= u_{out} (t-r) + u_{in} (t+r)$.
• If the solution is small initially, ${\mathcal{E}}(0) < \eta$, then scattering holds. Scattering is equivalent to the finiteness of some spacetime Strichartz norm, $L^{12} (R \times R^3)$.

Step 1: Minimal Criminal

Keraani first proved the existence of minimal blowup solutions, and these were used by Kenig-Merle. At each moment of time, this object has certain localization properties. It is frequency localized at a characteristic frequency scale $N(t)$ and is spatially localized near $x(t)$ at the Heisenberg dual scale $\frac{1}{N(t)}$. Here $N(t)$ defines a multiplier which captures 99% of the norm.
Riesz’ interpretation of the Arzelà-Ascoli theorem shows this object is precompact. This is why we call these objects almost periodic. Ionescu: What does minimal mean? Answer: smallest $\| \mathcal{E}(t) \|_{L^\infty_t}$. I can apply symmetries and subsequential limits to these minimal objects.

Step 2: Minimal Criminal satisfies one of three scenarios

1. $N(t) =1$: soliton-like.
2. $N(t) \geq 1$, $N(t) \rightarrow \infty$ as $t \rightarrow \infty$: cascade.
3. Finite time blowup.

Step 3: No finite time blowup solutions

How could blowup occur? The norm lives on smaller and smaller sets. By finite speed of propagation, we can deduce that there is a point where concentration occurs. Suppose we have a minimal blowup solution. We then look at the backwards light cone. Outside the light cone, $u=0$ by minimality. We know that $u$ has finite $\dot{H}^{7/6}$ norm and it lives on a small set. But this means that the energy must go to zero, and this means the solution is actually the zero solution, so it is not a finite time blowup solution. Soliton and cascade solutions have finite energy.

Step 4: Solutions move more slowly than light speed

$$ | x (t) - x(\tau)| \leq (1-\delta)|t - \tau|, \qquad |t - \tau| \geq 1. $$

We prove this using the energy-flux identity. He draws a forward light cone. There is no energy at the apex. We know that the energy inside the ball defined by the light cone at time $T$ is bounded by $T^{1/3}$. Energy can come into the cone but nothing can go out, due to the light speed bound. This tells us that
$$ \int_0^T \int_{|x| =t} |u|^8 \, dS \, dt \lesssim T^{1/3}. $$

• This argument works well if $N(t)$ is not changing too fast. For varying $N(t)$, this can be shown to violate the speed of propagation.
• There are some other variations to get this nailed down.

Step 5: $L^p$ decay

We are worried that our super smooth function does not decay fast enough. $\dot{H}^{7/6} \hookrightarrow L^9$ (embedding), but we can actually prove that the solution is in $L^6$. At any time, we can represent $u$ using a Duhamel formula:
$$ u(0) = - \int_0^\infty \frac{\sin(t|\nabla|)}{|\nabla|} u^7 (t) \, dt. $$
You can use the energy-flux identity to turn this into the $L^6$ control. How? You split $u$ into high and low frequencies. We are only afraid of the very low frequencies. $(u_l + u_h)^7$, so $u_l$ is small and some interpolations give you the control. Why? There can be no other term at null infinity, since we would be wasting stuff and this would not be minimal.

Step 6: A more quantitative $L^p$ estimate

$$ \int_{|x - x(t)| \geq R} |u|^8 dx \lesssim R^{-\gamma}. $$
Split the time interval $[0, \infty)$ into $[0, R/3]$ and its complement. We can then set up a geometric bootstrap. Everything is fine on the short time interval. If we look far into the future, we get smallness in $L^\infty$ and can then interpolate against the $L^6$ control to get the target $L^8$ estimate.

Step 7: Climax, $E(u) < \infty$

We gain regularity. Write the $H^1$ norm as an inner product: $\langle \nabla u(0), \nabla u(0) \rangle + \langle u_t, u_t \rangle$. Now play with the double Duhamel trick. We have a kernel decay like $|t - s|^{-1}$, which will not converge when integrated over $dt \, ds$; we would actually need $|t - s|^{-2 - \epsilon}.$ We introduce Whitney balls: we decompose $R^3$ into Whitney balls w.r.t. the origin. The negative powers of $R$ gained above from the quantitative decay estimate allow us to sum over the Whitney balls. This shows the energy is finite, after a lot of bookkeeping. How do we use this to wrap things up and prove the theorem?
Completion of Theorem

• No Soliton: $\frac{x}{|x|} \cdot p$ leads to the Morawetz identity which implies the estimate: $$ \int \int \frac{|u(t,x)|^8}{|x|} dx \, dt \lesssim E(u).$$ This kills the soliton.
• No Cascade: Using the Whitney balls slack, we can in fact get tightness: $$ \int \langle x - x(t) \rangle^\epsilon [|\nabla u|^2 + |u_t|^2] dx < \infty.$$

Nakanishi: Do you have the same result if the bounded critical Sobolev norm hypothesis is only true in one direction of time? Killip: If this nemesis existed, then I can time translate it to create a nemesis that I have just shown can not exist. So, I believe this relaxed hypothesis can be made with the same conclusion.

Colliander: Peter Pang (an undergraduate at U. Toronto) has recently numerically simulated this problem in the radial case and observed that the critical Sobolev norm remains bounded and is not monotone in time.

Colliander: Can you relax the bounded critical norm hypothesis to one with very slow, say logarithmic, growth and maintain the scattering conclusion? Killip: This makes my head spin. The minimal object approach, a la Kenig-Merle, is not amenable to this relaxation. It might be possible to approach this with the (more quantitative) gopher strategy of CKSTT.

Wilhelm Schlag: Global dynamics above the ground state energy (joint work with Kenji Nakanishi) NLW, NLS: Klein-Gordon and Schrödinger Equations

$$u_{tt} - \Delta u + u = u^3, \quad R^{1+3}$$
$$ i \partial_t \psi + \Delta \psi + |\psi|^2 \psi = 0, \quad R^{1+3}$$

LWP in $H^1$: $T_* (\| u(0)\|_{\cal{H}}) > 0$ where $\cal{H} = H^1 \times L^2$. $E(u) = \int \frac{1}{2}(|\nabla u|^2 + |u_t|^2 + u^2) - \frac{1}{4}|u|^4 dx.$ If $E < 0$ then you have finite time blowup.

Scattering set: $S_+ = [(u_0, u_1) \in \cal{H}: T_* = \infty, \| u \|_{ST} < \infty]$. $S_+$ is open, path connected, and $S_+$ contains a small ball $B_\delta (0)$.

Questions and Answers
1. Is $S_+$ bounded in $\cal{H}$?
2. $\partial S_+$: Is this smooth or very rough?
3. What is the dynamics of solutions on the boundary?
4. Does $\partial S_+$ separate regions of global existence versus finite time blowup?

Recall $\exists ~ Q > 0$ satisfying $-\Delta Q + Q = Q^3$.

Theorem (Nakanishi-Schlag 2010): (Radial case for now)
• $S_+$ is unbounded.
• On $\partial S_+ \cap [(u_0, u_1) \in \cal{H}: E(u_0, u_1) < E(Q, 0) + \epsilon^2]$ there is a trichotomy: if you are slightly above $Q$, you either • Scatter to $Q$. • Scatter to 0. • Blow up.

Computer Simulations (done with R. Donninger). These were beautiful and provoke lots of ideas and wonder. Structures in phase space: $S_+ \cap$ surface. Take $(Q + Af, Bg)$ or $(Af, Bg)$, where $f, g$ are fixed radial functions. Here $A, B$ are parameters and we draw a rectangle in $(A,B)$ space and we color based on (numerical) GWP vs. blowup.

$K(u) = \int |\nabla u|^2 + u^2 - u^4 \, dx$. $PS_{\pm} = [ (u_0, u_1) \in {\cal{H}}: E(u_0, u_1) < E(Q,0), ~ K(u) \geq 0 ~(\text{for } +)]$ and $K(u) < 0$ for $-$. PS denotes the Payne-Sattinger (1978) sets.

What is up with these sets? $-\Delta \phi + \phi = \phi^3, ~ J'(\phi) = 0$ where $J(\phi) = \int \frac{1}{2} (|\nabla \phi|^2 + \phi^2) - \frac{1}{4} \phi^4 dx$, $K(\phi) = \langle J'(\phi) | \phi \rangle = 0$, and $\partial_{\lambda}|_{\lambda = 0} J(e^\lambda \phi) = K(\phi)$. Find the minimal height of the potential well. You do some mountain pass work.

$\inf [J(\phi): \phi \in H^1, \phi \neq 0, K(\phi) = 0] = J(Q) = \inf [ J(\phi) - \frac{1}{4}K(\phi): \phi \in H^1, \phi \neq 0, K(\phi) \leq 0]$

Cor: $PS_{\pm}$ are invariant under the flow.
• $PS_{+} \implies$ global existence.
• $PS_{-} \implies$ finite time blowup. $K(\phi) \geq 0 \implies K(\phi) \gtrsim \min (1, \| \phi \|_{H^1}^2 )$ Cor: $Q$ is unstable. ….as usual, Wilhelm is fast….deductions are rapid fire. Ibrahim-Nasmoudi-Nakanishi proved that you not only have global existence in $PS_{+}$, but using the Bahouri-Gerard-Kenig-Merle compensated compactness machinery, you actually have scattering. Final State Descriptions near $Q$ Theorem (Nakanishi-Schlag): ${\cal{Hrad}}^\epsilon = [ (u0, u1) \in {\cal{Hrad}} : {\cal{E}} (u0, u1) < J(Q) + \epsilon^2 ].$ Then, this set is a disjoint union of 9 nonempty sets. $\| (u, u_t) – (\pm Q, 0) \|_{\cal{H}} < C\epsilon.$ • -: Scatter, Trapped by $\pm Q$, Finite time blowup • +: Scatter, Trapped by $\pm Q$, Finite time blowup (Choptuik and Bizon have explored similar pictures in studying the GR setting.) ack….too fast for me to type….grazing solutions…penetrating solutions…..exit mechanism….and now he is speeding up…..mind like a ferrari….beautiful phase space portraits (joint work with H. Koch) The goal is to outline the ideas in this work. The problem $$ \partial_t \psi + \partial_x (\partial_x^2 \psi + \psi^4) = 0 $$ with initial data $\psi_0$. Quartic KdV is the first integer power gKdV that is not completely integrable. Also, we use multilinear estimates. small data case: $\| \psi0 \|_{{\dot{H}}^{-1/6}} \ll 1$. $\psi0 = Q_{c} (x-x_0) + v_0, \| v0 \|_{{\dot{H}}^{-1/6}}.$ • Scattering and GWP for small data (Yes) • Scattering and Asymptotic stability (Yes) • Existence of inverse wave operators (Almost) Previous Results • Pego-Weinstein 1994, Asymptotic stability with exponential weights. • Martel-Merle 2001-…, Asymptotic stability in energy space $H^1$ in a moving reference frame. • Virial Identities • Monotonicity properties • Côte 2006, Constructs multiple soliton solutions for gKdV. • Grünrock 2005, Multilinear estimates. • Tao 2006, Asymptotic stability in $H^1 \cap {\dot{H}}^{-1/6}$. Function Spaces I don’t want to construct spaces in as much detail as done in the paper here. The convergence in the wave operators takes place in a Besov refinement of ${\dot{H}}^{-1/6}$. $(U^p, V^p)$ These spaces are nicely presented in a paper by Hadac-Herr-Koch 2009. Tataru, Koch-Tataru. Steps of Proof • Improved linear estimates, there are many linear equations meriting detailed study. • Airy $(\partial_t + \partial_x^3) \psi = 0.$ • The $u$ problem: $(\partial_t u + \partial_x ({\cal{L}} u))=0.$ • The $v$ problem: $(\partial_t v + ({\cal{L}} \partial_x v))=0.$ • Refined Kato smoothing estimates for Airy • ${\cal{L}} = (-\partial_x^2) + c – p Q_c^{p-1})$ Refined (weighted) elliptic estimates for $\cal{L}$ • Virial identities (Martel-Merle) for the $v$ problem $\implies$ energy spaces for the linear evolution. First Result: • $P_{Q’}^{\perp} \psi = \psi – \frac{\langle \psi, Q’ \rangle}{\langle Q’, Q’ \rangle} Q’$ • ${\tilde{P}}_{Q’}^{\perp} \psi = \psi – \frac{\langle \psi, Q \rangle}{\langle Q, {\tilde{Q}} \rangle} {\tilde{Q}}$ where ${\tilde{Q}} = x \cdot Q’ + \frac{2}{3} Q.$ • $\cal{L} (\partial_x Q) = 0$ • $\partial_x (\cal{L} Q’) = 0$ • $\partial(\cal{L} \tilde{Q}) = Q’$ • Variable coefficient operators (small modulations) • $U, V$ spaces/Littlewood-Paley. • Multilinear Estimtes • Rely heavily upon the $L^6$ estimate: $\| u \|{L^6{t,x}} \leq \| |D|^{-1/6} u \|_{L^2}.$ • Bilinear Estimate…long expression hard to read…. 
• Example: $$\| \partial (v1 v2 v3 v4) \| ({{\dot{Y}}^{-1/6}{\infty, T}}) \leq c \prod{j=1}^4 \| vj \| ({{\dot{X}}^{-1/6}_{\infty, T}}).$$ • Full nonlinear problem requires delicate modulation. If you do so, you can’t close the multilinear estimates. Instead, we only require orthogonality asymptotically, rather than at all times. • More multilinear estimates involving $Q, {\tilde{Q}}, Q’$. • GWP for small data/scattering in scaling spaces • Inverse wave operators. Energy spaces Virial identity for the $v$-problem: $\eta (x) = -\frac{5}{3} \frac{Q’}{Q}$. $$ – \frac{d}{dt} \int \eta(x) v^2 dx + c \| [sech]^2 (\frac{3}{2} x) v \|_{H^1}^2 \leq 0.$$ So, we have some monotone decrease in this weighted space. $$\partial_t \langle v, Q’ \rangle = \langle {\cal{L}} (\partial_x v), Q’ \rangle $$. Kato Smoothing: • $\gamma_0 (x) = 1 + \int_{-\infty}^x (1 + |y|^2)^{-(1+\epsilon)/2} dy.$ • $\gamma_\mu = \gamma_0 (\mu^{-1} (x – \mu^{-2} t))$ $ \frac{d}{dt} \int \gamma_\mu u^2 dx + \int (\gamma_\mu)’ (u_x^2 + \frac{1}{3 \mu^2} u^2) dx \leq 0.$ $ \partial_t \langle {\cal{L}}^{-1}v, v\rangle = 0.$ $$ E(v) = \int \gamma (x) (v_x^2 + v^2) dx + \lambda_E\int \eta(x) v^2 dx + \Lambda_E \langle {\cal{L}}^{-1} v , v\rangle We define then our “natural” Energy spaces. • $X^s = L^\infty H^s \cap L^2 H^{s+1}_{\sqrt{\gamma’}}$ • $Y^1 = L^1 H^1 + L^2 \sqrt{\gamma’} L^2$ We then build spacetime function spaces using the $U^2, V^2$ spaces (defined in S. Herr’s talk) based on these structures and the cubic dispersion relation….and not the linearized equation for the $v$ equation…..chalk coming too fast for me to write down…..ack. Nonlinear Modulation $\psi (x,t) = Q_{c(t)} (x – y(t)) + w(x,t)$ $ \partial_t w + \partial_x (\partial_x^2 w + 4 Q^3 w) = \frac{\dot{c}}{c} {\tilde{Q}}(x-y) + ({\dot{y}} – c) (Q_c)’ (x-y) – \partial_x ( O(w^2)). $ Usually, we choose w $\perp Q, Q’$ through choice of $c, y$. $ \frac{{\dot{c}}}{c} \langle (Q_c) , (\tilde{Q}c ) \rangle = \langle w, (Qc ) \rangle.$ $ (\dot{y} – c^2) \langle (Q_c)’, (Q_c)’ \rangle = – \kappa < w, (Q_c)’>$ We then calculate: \frac{d}{dt} \langle w, Q \rangle + \langle w, Q \rangle = O (w^2), \frac{d}{dt} \langle w, Q’ \rangle + \kappa \langle w, Q’ \rangle + O (w^2) = 0. With this structure and the formalism of Tao, and some careful work, we can put it all together. I had a nice follow-up conversation with Raphaël Côte. I wondered whether there were similar small data and remainder-atop-soliton scattering results for low power KdV equations. He pointed out that “clean” scattering does not hold in the small data case for the low power gKdV equations. Instead, there are modified scattering statements for data satisfying certain weighted conditions proved by Hayashi and Naumkin It is perhaps reasonable to expect corresponding statements about the error term in the asymptotic stability results around (multi)solitons. However, this is open for study. We are looking at the middle of the ocean. Let’s imagine infinite depth and no boundary. We have gravity pointing odwn and the density of the air is 0 and the density of the water is 1. We assume the water is inviscid, incompressible, irroational, surface tension is zero. The interface is called $\Sigma (t)$. The motion of the fluid is described by the Euler equation ${v_t} – v \cdot \nabla v = (0, -1) – \nabla P$ in the interior $\Omega(t)$. We also have $div v = 0, curl v =0$, ….ack slide changed. G.I. Taylor (1949) linearized about the flat interface and found that air above water is stable but water above air is unstable. 
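(My aside, not from the talk, recalling the classical linearized computation behind Taylor's dichotomy. Writing $\zeta$ for a small perturbation of the flat interface (my notation) and linearizing the system above around the flat, stationary state, each Fourier mode obeys, to leading order,

$$ \partial_t^2 \hat{\zeta}(t,k) = - g |k| \, \hat{\zeta}(t,k) $$

when the heavy fluid (water) lies below: the modes oscillate with the deep-water gravity wave frequency $\sqrt{g|k|}$ (here $g = 1$ in the normalization above). With the fluids interchanged the sign flips and the modes grow like $e^{\sqrt{g|k|}\, t}$, which is the Rayleigh-Taylor instability. The Taylor sign condition $-\frac{\partial P}{\partial n} \geq c_0 > 0$ recorded just below is the condition that keeps the nonlinear problem in the stable regime.)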
LWP for arbitrary data [S. Wu 1997 (2d), 1999 (3d)]: Local existence in Sobolev spaces under the right Taylor stability condition.

Earlier Results:
• Beale, Hou, Lowengrub 1992 formulated the Taylor sign condition: $-\frac{\partial P}{\partial n} \geq c_0 > 0$.
• Nalimov 1974, infinite depth.
• Yoshihara 1982.

The work has been extended in many directions: Iguchi 2001, Ogawa and Tani 2002, Ambrose and Masmoudi 2005, Lannes 2005, Christodoulou and Lindblad 2003, Lindblad 2005, Coutand and Shkoller 2005, Zhang and Zhang, Shatah and Zhang.

Global-in-time behavior

What is the global in time behavior of the solution of the water wave equation? We will focus on small and smooth data. This is reasonable since it is known that 90% of the waves on the ocean are smaller than 2m. I'd like to know the reference for this 90% claim. Maybe this is done using satellite data? Perhaps this remark motivates a probabilistic Cauchy theory which explains the infrequency of rogue waves? ….slides are changing fast….I can't keep up so I will listen and make remarks when I can.

The quadratic interaction is too strong, so the key idea is to use a change of variable which recasts the problem with a cubic nonlinearity. A natural setting for studying the 3D water wave is the Clifford algebra and Clifford analysis. The difficulties in 3D are that there is no Riemann mapping, the Clifford algebra is noncommutative, and products of analytic functions in 3D are not analytic. We find that in the 3D problem there is also a special structure allowing us to recast the problem so that the quadratic problems disappear and the nonlinearity is cubic and higher order in nature. It is not purely cubic; there are some quadratic terms, but we can handle those as though they are cubic.

Theorem (2D): Assume the initial wave is of small height and the initial velocity is also $\epsilon$ small. Assume finitely many derivatives of $f$ and $g$ are in $L^2$. Then there is a unique solution on a time interval $[0, e^{c/\epsilon}]$. During this time, the solution remains smooth and small.

Theorem (3D): We assume less here. Suppose the initial condition is given as a graph. For data with small steepness (no smallness condition on the height) and possibly with infinite energy, but also with small velocity on the interface, the solution is uniquely defined and global-in-time, and remains smooth and small.

It seems like we have a better result in 3D. But, in my opinion, these two results are equivalent, they are of equal strength: equally good/equally bad. We can view the 2D case inside the 3D problem and in that view we have an infinite energy 3D case. Maybe we can prove the 2D result under the small steepness condition.

Famous picture of a rogue wave with a ship in the foreground. Rogue waves are massive waves (around 30m). They often appear in perfectly clear weather, without warning. Their exact causes are still unknown. Possible causes? Diffractive focusing (effect from the coastline)? Focusing of currents? Nonlinear effects? We are avoiding wind and boundaries, so we want to understand whether nonlinear effects can be explained as the source of rogue waves.

I am confused. The 3D result says that initial waves given as a graph over the bottom with small steepness remain small and smooth forever. So, this result does not explain or speak to the rogue wave phenomenon. Of course, it suggests that large initial steepness is required for a rogue wave to form within this model of the ocean. Again, this situation seems ripe to me for a probabilistic study of the Cauchy problem?
“Once you get the algebra part right, the analysis part just goes through without complication.” We only need to know the fluid motion on the fluid interface. We therefore try to reduce the Euler equation to an equation on the fluid interface. This removes the difficulty of the free boundary. Normal Forms Discussion The technical discussion seems to revolve around making a bilinear change of dependent variable with the goal of killing off the cubic terms. It doesn’t work….but when working in the right coordinate system with the right quantities, the nonlinearity of the 2D water wave equation is cubic and higher orders. Nickolay Tzvetkov: On random data nonlinear wave equations Background References (joint work with Nicolas Burq) Let $(M,g)$ be a Riemannian manifold of dimension $d=3$ with $\partial M = \phi$. We consider the cubic wave equation (\partial_t^2 – \Delta_g) u + u^3 = 0 with initial data $(u0, u1) \in H^s \times H^{s-1}$. $H^{1/2}(M)$ is the critical space for this problem. He sometimes denotes the problem with (*). Theorem (deterministic theory): • The problem (*) is locally well-posed in $H^s \times H^{s-1}, ~ s \geq 1/2$ and globally for $s \geq 1$. • The problem (*) is ill-posed in $H^s \times H^{s-1}, ~ s \in (0, 1/2).$ 1. For example, $\exists ~ (u_n (t))$ sequence of smooth solutions of (*) such that the initial data goes to zero in $H^s \times H^{s-1}$. But, $ \| (u_n(t), \partial_t u_n (t)) \|{L^\inftyT_ ; H^s \times H^{s-1}} = + \infty, ~\forall T>0.$ (inspired by work of Christ-Colliander-Tao) 2. Moreover, $\exists$ a single data $(u0, u1) \in H^s \times H^{s-1}$ such that $\forall ~T>0$, (*) has no solution in $L^\infty ([0,T]; H^s \times H^{s-1})$ satisfying the finite propagation speed. (instantaneous blowup inspired by work of Lebeau) On $R^3$, there are refined global results for $s \in [3/4, 1]$ are due to Kenig-Ponce-Vega, Gallagher-Planchon, Bahouri-Chemin, Roy, …. Probably this can be transported to the torus (using finite propagation speed) but this is not written. OPEN QUESTION Question: Can one still prove some form of well-posedness for $s< \frac{1}{2}$? Idea: Yes, by randomizing the data. • We have a general method to do this locally in time Burq-Tzvetkov 2008. • A very particular method for globally in time [Burq-Tzvetkov 2008](( “Random data Cauchy theory for supercritical wave equations II : A global existence result”)), exploiting invariant measures a la Bourgain. Goal for today: General method for globally in time. We can skip this invariant measure business. But, if we are only PDE people, there is a method which allows us to globalize without relying upon the invariant measure aspects. Randomized data on $T^3$ Starting from $(u0, u1) \in H^s \times H^{s-1}$ we form their Fourier series $$ u0 = \sum_{n \in Z^3} c^0_n e^{i n \cdot x}$$ (same for u1) and we define u_0^\omega = \sum_n g_n^0 (\omega) c_n^0 e^{i n \cdot x} with natural hypotheses on the random variables to ensure the data stays real valued. He also decomposes the $g(\omega)$ in real and imaginary parts is a system of i.i.d. random variables with a joint distribution $\mu$ satisfying $\exists ~c>0, ~ \forall ~ \gamma >0, \int_{-\infty}^{\infty} e^{\gamma x} d\mu (x) \leq e^{c \gamma^2}$. • Gaussians: $d \mu (x) = e^{-x^2/2} \frac{dx}{2\pi}$ • Bernoulli: $d \mu (x) = \frac{1}{2}( \delta_{-1} + \delta_1)$ The gaussians generate a dense set in $H^s$. Bernoulli does not but leaves the data on the same sphere in $H^s$. Theorem: Let $M = T^3, (u0, u1) \in H^s \times H^{s-1}, ~ s \in [0,1]$. 
Then (*), with data $(u_0^\omega, u_1^\omega)$ is globally well-posed almost surely in $\omega$. Consider the probability measure $\rho$ on $H^s \times H^{s-1}$ defined by the map: $ \omega \rightarrow $(u0^\omega, u1^\omega)$. Every function gives a different measure, so I have many measures. Theorem (again): There exists a set $\Sigma$ such that $\rho (\Sigma) =1$ and such that $\forall ~ (v_0, v_1) \in \Sigma$ there is a unique global solution of (*) with data $(v0, v1)$ such that (u, u_t) \in [Free ~Evolution ~of~ (v_0, v_1)] + C(R; H^1 \times L^2). In addition, the solution satisfies the finite propagation speed and, moreover, if we denote by $\Phi(t)$ the constructed flow we have the following properties: 1. $\Phi (t) (\Sigma) = \Sigma$ 2. $\forall (v_0, v_1) \in \Sigma$, $\| \Phi(t)(v_0, v-1)\|_{H^s \times H^{s-1}} \lesssim \langle t \rangle^{1-s/s +}, s>0.$ (Remark: The implicit constant here is a random variable.) 3. Measure same thing in $L^2 \times H^{-1}$ and we get the bound $e^{c t^2}$. Steps in the proof 1. Global existence step. (inspired by Paley and Zygmund) 2. Construction of the set $\Sigma$. (inspired by the invariant measure consideration by Bourgain) 3. Control on the flow for $s>0$. (inspired by the high/low frequency decompositon a la Gallagher-Planchon and by recent work by Colliander-Oh) 4. Control on the flow for $s=0$. Here the analysis degenerates. (inspired by the work of Yudovich on the Euler equations) “We can say that we have developed a probabilistic version of the Yudovich argument.” Large deviation estimates. Consider $\square_g u_{lin}^\omega$ with the randomized data $(u0^\omega, u1^\omega)$. For $s>0, ~\delta > 0, ~\exists c>0, ~ \forall \lambda \geq 0$ we have the large deviation estimate p ( \omega: \| \langle t \rangle^{-\delta} u_{lin}^\omega \|_{L^\infty (R \times T^3)} > lambda ) \leq \frac{1}{c}e^{-c \lambda^2}. Of course, this is much better than what we can get from Strichartz. We look for solutions as $u = u_{lin}^\omega + v$ and we study $\square_g v + (v + u_{lin}^\omega)^3 = 0$ with zero initial data. We have the energy $E(v) = \frac{1}{2} \int |\nabla v|^2 + |v_t|^2 + \frac{1}{4}\int v^4 dx. We then calculate $\frac{d}{dt} E(v) = \int \partial_t v (v^3 – (v + u_{lin}^\omega)^3).$ We are lucky because the $v^3$ terms cancel and by Gronwall we have global existence for $\omega$’s of big probability. Then, we make some intersections and do some measure theory to finish. This argument gives exponential control. We revisit the analysis using the high/low frequency truncation ideas to improve to polynomial control. Remark: We can prove similar results for ANY manifold by using a randomization due to Lebeau. Schlein: How is the set $\Sigma$ invariant? Tzvetkov: The set \Sigma is of the form random orbit of the data plus smooth functions. Since the smooth functions have zero measure, we can throw them into \Sigma. Ionescu: How do you see in the analysis that you are studying the defocusing question? Tzvetkov: In the Gronwall business, we used the sign. Background References: We will assume the Cauchy data are small, smooth and localized. We will further restrict the problem to semilinear wave and Klein-Gordon equations in dimension 3. NLW, $d=3$ • $\square u = |u|^{p-1}u$. • Above the Strauss exponent $ p > 1 + \sqrt{2}$. • At the Strauss exponenent, finite time blowup was shown by [John-Schaeffer] • $\square u = |u_t^2 – |\nabla u|^2$. • Null form structure observed by Christodoulu and Klianerman gives global existence. 
• $ \square u = |u_t|^2.$ • finite time blowup [John] • $\partial_t^2 u^i – c_i \Delta u^i = \sum Q^i_{jk} (Du^j, Du^k)$ • Global existnce if $Q^i_{jk}$ is a null form. [Yokoyama, Ohta, Katayama, Sogge, Metcalfe, ….] • $\partial_t^2 u – Delta u + u = |u|^{p-1}u. • For $p>2$ (the Strauss exponenet), you have global existence [Strauss]. • $\partial_t^2 u – Delta u + u = Q(u,u)$ or $Q(Du, Du)$. • global existence [Klainerman], [Shatah] • What about different propagation speeds? $\partial_t^2 u^i – c_i \Delta u^i + u^i = \sum Q^i_{jk} (u^j, u^k)$ • This case has some difficulties and my new result addresses this issue. All the results I quoted have been provd using the vector field method. How does this work? You find a bunch of vector field $(\Gamma_i)$ which commute with the linear part of the equation. Then you estimate $\Gamma^\alpha u.$ The method does not apply to KG with different speeds. You don’t have sufficiently many commuting vector fields to treat the multiple speed KG case. There were some other methods used for these problems. In particular, Shatah used a normal forms method. Christodoulu used a change of variables method but most of the theory has been built on the vector field method. NLKG with different speeds is a toy model for Euler-Maxwell, provided you restrict to high frequencies and ignore certain things. \partial_t^2 u – \Delta u + u = Q(u,v), ~ \partial_t^2 v – c^2 \Delta v + u = P(u,v) with some initial data for the two equations. (No derivatives in the quadratic nonlinearities.) Assume that the data has some $L^2 $ weighted (power 1 ) control and is small enough and we also have $H^N$ smallness with a big enough N. Then there eists a global solution which furthermore scatters in $H^N \times H^{N-1}$ which means the nonlinear evolution converges to a linear solution as time goes to infinity. The vector field method does not apply. Instead, we use a spacetime resonances method which we have applied to the water wave problem and to the NLS equation. This is a new instance where we can apply this method. The method was developed in collaboration with Shatah and Masmoudi. Spacetime resonance method For the sake of exposition, consider $i \partial_t u + P(D) u =u^2$ emerging from data $u0$. Let $f(t) = e^{-it P(D)} u(t)$ and consider this new unknown function instead of $u$. Write the Duhamel formula for $\hat{f}$. What you find is that \hat{f} (t, \xi) = \hat{u_0} (\xi) + \int_0^t \int e^{i s [P(\xi + \eta) - P(\xi) - P(\eta)]} \hat{f} (\eta, s) \hat{f} (\xi – \eta, s) d\eta ds. We have a problem if the phase is stationary either in s. What can save us is the oscillations. This is what we call time resonances. Or, if the phase is stationary in $\eta$ and this is what we call space resonances. Of course, the worst situation is when we have stationarity in both senses and this is what we call spacetime resonances. • If the phase factor (redenoted as) $\phi \neq 0$ an integration by parts in $s$ and push the nonlinearity to cubic. This is just the normal forms method seen on the Fourier side. • If $\partial_\eta \phi \neq 0$ you can integrate by parts in $\eta$ and you gain an $s$ in the denominator which is “always pleasant when you are trying to prove global eistence.” This is the vector field method seen in Fourier space. He draws two graphs where $\phi$ and where $\partial_\eta \phi$ vanish on the $\xi, \eta$ Cartesian product. We use pseudo-product operators Coifman-Meyer to decompose in the $(\xi, \eta)$ space. 
$$\mathcal{F} ( B_{m(\eta, \xi)} (f,g)) (\xi) = \int m(\eta, \xi) \hat{f} (\eta) \hat{g}(\xi - \eta) \, d \eta.$$

Physical meaning
• Time resonances are "standard resonances" in the dynamical systems sense.
• Space resonances are when waves of different frequency move with the same group velocity (….not really explained).

Application to our problem

You get a lot of different phase functions:
• $\phi (\xi, \eta) = \langle \xi \rangle_l \pm \langle \eta \rangle_m \pm \langle \xi - \eta \rangle_n$ where $\langle x \rangle_\alpha = \sqrt{1 + \alpha^2 x^2}$ and $l,m,n$ are chosen among the two possibilities: 1 and $c$.
• Look at the place where both $\phi$ and $\partial_\eta \phi$ vanish.
• Sometimes this set is empty.
• Sometimes this set has the form $[ |\xi| = R, \eta = \lambda \xi]$ for real numbers $R, \lambda$.
• Actually, such a set is generic for interactions between waves with a dispersion relation $p(|\xi|)$ which depends only on the frequency size. Thus, the method can be applied to other settings.

He redraws the graph of the zero level sets for $\phi$ and $\partial_\eta \phi$. He then excises around the point where these sets intersect using a cutoff, using a pseudo-product operator with a symbol $m$ which is increasingly singular along the set of simultaneous vanishing. This is a bit annoying because there are no general estimates for such pseudo-products. We would need to estimate the boundedness of $B_m: L^p \times L^p \rightarrow L^r$ where $m$ is singular along $[ |\xi| = R, \xi = \lambda \eta]$ for real parameters $R, \lambda$. The Coifman-Meyer calculus requires nicer properties on $m$. In contrast, there is work by Lacey-Thiele on the bilinear Hilbert transform which does have a singularity in the bilinear multiplier but does not apply to our case. We use that we are at the Strauss exponent so that rough estimates are enough to succeed.

In the theorem, we need to assume that resonances are separated. Look at the spacetime resonance set $\cal{R} = [\phi = 0] \cap [\partial_\eta \phi = 0]$ and project onto $\xi$, which I call "outcome frequencies". If you project onto $\eta, \xi - \eta$ you get what I call "source frequencies". We need to assume that $[\text{outcome}] \cap [\text{source}] = \emptyset$. This is generically true for different speeds $c$. In particular, we have this property for all but a discrete set of speeds $c$.

There is a last point which is a bit problematic: spacetime resonances at $\infty$, where $\phi, \partial_\eta \phi \rightarrow 0$. To overcome this difficulty, we rely upon the high regularity $H^N$ hypothesis using Strichartz estimates. We then separate the analysis into low and high frequencies.

Koch: Gain from modulation versus gain from bilinear estimate? Dualize the argument and you can recast it as a condition on the nonvanishing of $\partial_\xi \phi$.

After the talk, I learned from Pierre that he had written an expository article on the spacetime resonances method.

Oana Ivanovici: Dispersive Estimates on convex domains (joint work with Fabrice Planchon)

Consider a domain $\Omega$ of dimension $d \geq 2$. We consider the wave equation $\partial_t^2 u - \Delta u = 0$ with initial data, with $u$ vanishing on $\partial \Omega$. Consider, for a point of reference versus later statements, the situation where $\Omega = R^d$. Take $u_0 = \delta_a$ and $u_1 = 0$. Then the solution is given by the Green's function

$$u_{a, R^d} (t,x) = \int \cos (t |\xi|) e^{i \xi \cdot (x-a)} \, d \xi.$$
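(An aside of mine, not from the talk, on where the decay rate in the dispersive estimate below comes from. Writing the Green's function in polar coordinates in $\xi$ and using the standard stationary phase bound for the Fourier transform of surface measure on the unit sphere,

$$ \Big| \int_{S^{d-1}} e^{i \lambda \, \omega \cdot \theta} \, d\sigma(\theta) \Big| \lesssim (1 + |\lambda|)^{-\frac{d-1}{2}}, $$

each frequency block $|\xi| \sim h^{-1}$ contributes a factor $(|t|/h)^{-\frac{d-1}{2}}$ beyond the trivial $h^{-d}$ bound; this is exactly the $\min(1, (h/|t|)^{\frac{d-1}{2}})$ gain in the free-space estimate recorded next.)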
Dispersive Estimates:

$$\| \psi (h D_t) u_{a, R^d} \|_{L^\infty} \leq C(d) h^{-d} \min \Big(1, \big(\tfrac{h}{|t|}\big)^{\frac{d-1}{2}}\Big).$$

We are interested in the case where $\partial \Omega \neq \emptyset$. We must confront reflected waves, glancing rays and waves which travel along the boundary. Let $\Omega$ be a strictly convex domain. In particular, we will consider $\Omega$ to be the Friedlander domain $\Omega = [(x,y) \in R^d: x>0, y \in R^{d-1}]$ with the associated Laplacian $\Delta = \partial_x^2 + (1+x) \Delta_y$. This is very close to the Laplacian on the disk. Then she draws the half space and describes the bicharacteristics as a bunch of circles bouncing along the floor.

Theorem: Take $a>0$ small so that $(a,0) \in \Omega$ (in the interior but close to $\partial \Omega$). $\exists ~T>0$ such that $\exists ~ C>0$ such that $\forall ~ h \in (0,1]$ we have

$$\| \psi (h D_t) u (t, x, y) \|_{L^\infty (\Omega)} \leq C(d) h^{-d} \min \Big(1, \big(\tfrac{h}{|t|}\big)^{\frac{d-2}{2} + \frac{1}{4}}\Big).$$

The way to study this is to consider the set of points you can reach from the point $a$ upon traveling for time $T$. The method of proof involves a decomposition of the data in terms of wave packets which hit the boundary a certain number of times. The worst packets are localized in small cones that are almost parallel to the boundary. …rapid discussion of some frequency localizations…lots of glancing rays pictures….subsequent reflections are denoted by $u_j$. Each reflection involves a loss of 1/6 derivative and there can be many reflections accumulating until a total loss of 1/4 derivative. After that there will be no more regularity losses. $\implies$ Spectral projector and Strichartz estimates.

Smith and Sogge studied similar problems using a reflection across the boundary idea. For dimensions $d \geq 3$, the reflection method does not have a chance to get optimal regularity losses. First, you don't see the dispersion tangential to the boundary. Also, their study only captures the loss from one reflection but does not resolve the accumulated losses. The loss of 1/4 derivative happens at a special time after many reflections. Works by Blair-Smith-Sogge are improved in this work. She draws some Strichartz diagrams and shows that her new dispersive estimate implies a wider range of valid Strichartz exponents. We will soon see that the only possible losses are 1/6 or 1/4.

Cusp solutions hugging the boundary

This result was announced at a conference in Evian by G. Lebeau. Lebeau explained the geometrical features of the argument but the analytical details were not written down. Fabrice and I are writing those down…. To demonstrate the loss, she writes the boundary and draws data that looks like a cusp:

$$u_0 (x,y) = \int e^{\frac{i \eta}{h} (\frac{\xi^3}{3} + (x-a) \xi + y)} \psi (\xi) \phi (\eta) \, d\xi \, d\eta.$$

The wave starts localized within $a$ of the boundary. After some time $t \sim 2 \sqrt{a}$ the cusp is upside down wrt the boundary, and then at time $t = 4 \sqrt{a}$ the cusp reappears; the singularities only appear at these specific locations near the boundary. The situation is studied with $a \thicksim h^{1/2}$. For $a$ smaller than this power, we would not be able to repeat the construction for many reflections. It will degenerate. The caustic in this case is the line sliding along the boundary passing through the cusps. Along the caustic, the intensity of light is much brighter. At points along the caustic, oscillatory integrals don't enjoy good bounds.
$$u_h (z) = \frac{1}{h^{1/2}}\int e^{\frac{i}{h} \phi (z, \xi)} \sigma (z, \xi, h) \, d\xi, \quad \xi \in R.$$

Everyone knows that the number and degeneracy of the critical points of the phase function control the asymptotics of this guy as $h \rightarrow 0$. Degenerate critical points: let $k$, the order of the caustic of $u_h$, be defined by $\inf_{k'} [k': \| u_h \| \sim O (h^{-k'})]$.

Example 1 (fold): Let $\phi_F (z, \xi) = \frac{\xi^3}{3} + z_1 \xi + z_2$. Here $z_1 = -\xi^2, ~ z_2 = - \frac{2}{3} \xi^3$. So we have a fold. This type of phase function corresponds to $k = 1/6$. She draws a sideways parabola and projects it down onto a cusp.

Cusp type integral: $\phi_C (z,\xi) = \frac{\xi^4}{4} + z_1 \frac{\xi^2}{2} + z_2 \xi + z_3$. (This has order 1/4.) (Pearcey-type integral)
• $\partial \phi: ~ z_2 + 2 z_1 \xi + \xi^3 = 0$
• $\partial^2 \phi: ~ 2 z_1 + 3 \xi^2 = 0$
• $\partial_\eta (\eta \phi_C): ~ z_3 + \xi z_1 + z_2 \frac{\xi^2}{2} + \frac{\xi^4}{4} = 0$

Swallowtail: $\phi_S (z, \xi) = \frac{\xi^5}{5} + z_1 \frac{\xi^3}{3} + z_2 \frac{\xi^2}{2} + z_3 \xi + z_4$. We have a degenerate critical point of order 4…..ack….I am running out of battery and this is really nice stuff…

Axel Grünrock: Cauchy Problem for higher order KdV and mKdV equations

I am interested in the question of optimal local well-posedness.

Background References

KdV hierarchy

Lax 1968 introduced the hierarchy of higher order KdV equations

$$\partial_t u + \partial_x G_j (u) = 0,$$

which we will refer to as (hoKdV-j), the higher order KdV equation. Here

$$ \langle G_j (u), v \rangle = \frac{d}{d\epsilon} H_j (u+\epsilon v)\Big|_{\epsilon = 0}, \qquad H_j (u) = \int P_j (u, \partial_x u, \dots, \partial_x^j u) \, dx.$$

These are the Hamiltonians of KdV.
• $P_{-1} (u) = u$
• $P_0 (u) = - \frac{1}{2} u^2$
• $P_1 (u) = - \frac{1}{2} u_x^2 - u^3$

The iteration procedure then defines the hierarchy:
• $G_1 (u) = u_{xx} - 3 u^2 \implies u_t + \partial_x^3 u = 6 u u_x$
• $u_t + \partial_x^5 u + 5 \partial_x ( \partial_x^2 u^2 - (\partial_x u)^2 - 3u^3) = 0$
• $u_t + \partial_x^7 u - 7 \partial_x (\partial_x^4 u^2 - 2 \partial_x^2 (\partial_x u)^2 + (\partial_x^2 u)^2 - 10 u \partial_x (u \partial_x u) + 5 u^4) = 0$
• ….

We can thus define some general structure of the higher order KdV equations based on rank properties, where $rank_{KdV} = \text{degree} + \frac{1}{2}(\text{derivatives in } x) = j+2$. We find that $|\rho| = 2 (j-k) + 3$. For all the equations in the hierarchy, we have the same scaling critical regularity $s_c = - \frac{3}{2}$.

There is a second shared property for all the equations in the hierarchy. The Hamiltonians in the KdV hierarchy are all in involution with respect to the Poisson bracket:

$$ \{ H_k, H_l \} := \langle G_k (u), \partial_x G_l (u) \rangle, ~\forall k, l \geq -1.$$

We can therefore calculate that

$$\frac{d}{dt} H_k (u) = \langle G_k (u), \partial_t u \rangle = - \langle G_k (u), \partial_x G_l (u) \rangle = 0.$$

mKdV hierarchy

A similar tower or hierarchy of equations may be built around the mKdV equation using the Miura map $v \rightarrow v_x + v^2$: the sequence ${\tilde{H}}_j (v) = H_{j-1} (v_x + v^2)$ spawns ${\tilde{G}}_j (v)$ by writing

$$\partial_t v + \partial_x {\tilde{G}}_j (v) = 0,$$

which we denote by (homKdV-j). What can we say about the structure of the nonlinear terms in the mKdV hierarchy of equations? The rank condition for the KdV hierarchy is transferred via the Miura map into a rank condition for the mKdV hierarchy.
• Nonlinear terms in the mKdV hierarchy are all odd in $v$, so no quadratic terms.
• $|l| = 2 (j-k) + 1$
• We thus find that the mKdV hierarchy enjoys a joint scaling invariance corresponding to $s_c = - \frac{1}{2}$.

Earlier Results
• 1979 Saut: Existence of persistent solutions of hoKdV-j and homKdV-j in $H^j$ using the energy method, which works equally well in the periodic or nonperiodic setting.
• 1993 Ponce: hoKdV-2, LWP in $H^s (R)$ provided that $s > \frac{7}{2}$ and, combining the LWP result with conservation laws, he obtained GWP for $s \geq 4$.
• 2008 Kwon: LWP for hoKdV-2 for $s > \frac{5}{2}$ and GWP for $s \geq 3$ using a refined energy method developed by Koch-Tzvetkov for treating Benjamin-Ono.
• 1993/4 Kenig-Ponce-Vega: $\exists ~ s_0 = s_0 (j)$ and $m = m(j)$ such that $\forall ~ s \geq s_0$, hoKdV-j is LWP in $H^s (R) \cap L^2 (|x|^m dx)$.
• Corresponding results for homKdV-j. It was remarked there that the weights are not necessary for treating the cubic and higher power cases.
• 1995 Linares: homKdV-2 is GWP in $H^s (R)$ provided $s \geq 2$.
• 2008 Kwon: LWP improved down to $s \geq -3/4$ and thus GWP in $H^1$.
• 2008 Pilod: Without the weights in the data spaces, one has ill-posedness in the hoKdV-j hierarchy, $\forall ~ j \geq 2$. In particular, he showed that the flow map can not be $C^2$, $\forall ~ s \in R$. The argument involves an interaction between high and very low frequencies. Higher order Sobolev regularity is not beneficial at all.

Killip: Is there a contradiction here between the positive result of Kwon and the result of Pilod? Grünrock: Kwon uses energy methods so obtains continuous dependence, not $C^2$ dependence of the flow map.

New Results

Data spaces: $\| f \|_{{\hat{H}}_s^r} = \| \langle \xi \rangle^s \hat{f} \|_{L^{r'}_\xi}, ~ \frac{1}{r} + \frac{1}{r'} = 1$. Here $1 < r \leq 2$. We have $H^{s,r} \subset {\hat{H}}_s^r$.

Spacetime spaces: $\| u \|_{X_{s,b}^{r,p}} = \| \langle \xi \rangle^s \langle \tau - \phi (\xi) \rangle^b \hat{u} \|_{L^{r'}_\xi (L^{p'}_\tau)}$. Here we have $\phi (\xi) \sim \xi^{2j + 1}$.

What are the crucial estimates we need that will lead to local well-posedness?

Ingredients (tools)

1. Smoothing estimates
• Linear: $\| D_x^{\frac{2j-1}{3r}} u \|_{L^r_{tx}} \lesssim \| u \|_{X_{0,b}^{r}}$ if $b > \frac{1}{r}, ~ \frac{4}{3} < r \leq 2$ (fails for $r \leq \frac{4}{3}$).
• Trilinear estimates with the same gain order (up to $\epsilon$).
• Bilinear refinement: For $b > \frac{1}{p}, ~ 1 < r \leq r_{1,2} \leq p \leq 2, ~ \frac{1}{r} + \frac{1}{p} = \frac{1}{r_1} + \frac{1}{r_2}$,
$$ \| M_{j,p} (u,v) \|_{{\hat{L}}^r_x {\hat{L}}^p_t} \lesssim \| u \|_{X_{0,b}^{r_1, p}} \| v \|_{X_{0,b}^{r_2, p}}.$$
We have an increasing gain of regularity with these estimates, of order $D_x^{\frac{2j}{p}}$, in the parameter $\frac{1}{r}$ or $\frac{1}{p'}$, respectively.

2. Resonance relation $(k=2)$:
$$ \sum_{i=0}^2 \langle \tau_i - \xi_i^{2j+1} \rangle \gtrsim |\xi_0 \xi_1 \xi_2| \times ( \xi_1^{2(j-1)} + \xi_2^{2(j-1)}).$$
We have a gain $D_x^{\frac{2j+1}{p'} -}$, since we can spend $\langle \tau_0 - \xi_0^{2j+1} \rangle^{b - 1}$ with $1 - b = \frac{1}{p'} - \epsilon$. This gain is decreasing in $\frac{1}{r}$ or $\frac{1}{p'}$, respectively.

homKdV-j: He expresses the LWP results in the $(\frac{1}{r}, s)$ plane as lines leaving the vertical $s$ axis and all passing through the point $(1,0)$. The results on the $H^s$-scale for $j \geq 3$ are new. We obtain GWP in $H^s$ for $s \geq [\frac{j+1}{2}]$ (integer part). Thus, the use of ${\hat{H}}_s^r$ spaces leads to new insights.
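(An aside of mine, not from the talk, checking the two critical exponents quoted above. If $u$ solves an equation of the KdV hierarchy, so does $u_\lambda(t,x) = \lambda^2 u(\lambda^{2j+1} t, \lambda x)$ (one can check on the examples listed above that every term then scales by the same power of $\lambda$), and in one space dimension

$$ \| u_\lambda(0) \|_{\dot{H}^s} = \lambda^{2 + s - \frac{1}{2}} \| u(0) \|_{\dot{H}^s},$$

which is scale invariant exactly at $s = -\frac{3}{2}$, independently of $j$. For the mKdV hierarchy the invariant scaling is $v_\lambda(t,x) = \lambda v(\lambda^{2j+1} t, \lambda x)$ (check it on $v_t + \partial_x^3 v = \pm 6 v^2 v_x$), and then $\| v_\lambda(0) \|_{\dot{H}^s} = \lambda^{\frac{1}{2} + s} \| v(0) \|_{\dot{H}^s}$, giving the common critical regularity $s_c = -\frac{1}{2}$.)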
Moreover, the results converge toward a nice statement which identifies a common joint space $\hat{L}^1$ which contains finite measures and which contains $L^1$. Unfortunately, the result at that endpoint is not yet established. For KdV, he draws a similar picture. The lines do not appear to converge. we are far away from finding a joint space. Tataru: C^2 vs. mereley continuous dependence properties? Staffilani: Periodic case? Grünrock: No, I don’t have results there. Colliander: $NLS_3$ in $\hat{L}^1$? For me, fantastically interesting conversations with Koch, Grünrock, Tataru and Vega. • OPEN: Is there a space of functions wherein each equation in the mKdV hierarchy is GWP? • OPEN: The space ${\hat{L}}^1$ appears to be a natural candidate given the visual description Axel gave of his results. • Corresponding questions about cubic NLS in one space dimension? L. Vega points out that ${\hat{L}}^1$ can not do the job because of nonuniqueness results for NLS evolution emerging from the Dirac mass. • NLS has galilean invariance; mKdV does not so perhaps there is some hope for mKdV in ${\hat{L}}^1$? • I will ask Boris Khesin about whether the integrable hierarchy of equations containing cubic NLS is exposed nicely somewhere. It might be interesting to try and carry out an analogous study of the NLS hierarcy. (joint work with Piero d’Ancona) The Maxwell-Dirac system (MD): $$ (-i \alpha^\mu \partial_\mu + M \beta) \psi = A_\mu \alpha^\mu \psi$$ $$ \square A_\mu = -\alpha \langle \alpha_\mu \psi , \psi \rangle$$ $B = \nabla \times A, ~ E = \nabla A_0 – \partial_t A.$ We are interested in evolution starting from data $\psi_0, E_0, B_0$ satisfying the constraints $\nabla \cdot E_0 = |\psi_0|^2, ~ \nabla \cdot B_0 = 0.$ We are using the Lorenz gauge condition: $\partial_\mu A^\mu = 0$. 2d: $\alpha^0 = I, ~ \alpha^1 = \sigma^1, ~ \alpha^2 = \sigma^2, ~\beta = \sigma^3$ where the $\sigma$’s are the Pauli matrices and the $\alpha$’s are called the Dirac matrices. He decomposes the electric field into divergence free and curl free parts. We can then write $E_0 = E_0^{df} + \Delta^{-1} \nabla (|\psi_0|^2)$. We are restricting the motion to take place in the $x^1, x^2$ plane so the magnetic field must be perpindicular to that plane. All fields are independent of $x^3$. $A = (A_1, A_2, 0)$ and $B = (0, 0, \partial_1 A_2 – \partial_2 A_1)$. Given the initial consitions on the $E, B$ fields and the Lorenz gauge condition, we can specify the initial data for the potential $A$. Maxwell-Dirac and Dirac-Klein-Gordon DKG: $(-i \alpha^\mu \partial_\mu + M \beta) \psi = \phi \beta \psi$….ack too fast. Best reference for this is the paper of Glassey-Strauss 1979. • Energy – no sign • Charge: $\int |\psi (t,x)|^2 dx = const.$ • Scale invariant regularity: $\psi_0 \in {\dot{H}}^{d-3/2}, ~ E_0, ~ B_0 \in {\dot{H}}^{d-2/2}$. • MD is critical is charge critical in 3d • charge subcritical in 2d and 1d. Of course, we would like to prove global regularity. A natural strategy is to prove low regularity LWP and then exploit conservation laws. However, this is not so clear yet….. • 1d MD GWP: 1973 Chadam • 3d MD global regularity for small data: 1993 Georgiev • 3d MD stationary solutions: Esteban, Georgiev, Séré 1996 EGS • 2d DKG GWP: Grunrock and Pecher • 2d MD GWP: [d’Ancona and Selberg 2010](( “Global well-posedness of the Maxwell-Dirac system in two space dimensions”))) Are there stationary solutions for 2d MD? Are there other obstructions to decay/scattering? Are there size thresholds for the 3d MD setting. 
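(An aside of mine on the charge-criticality statement above. Ignoring the mass term $M \beta \psi$, the MD system is invariant under $\psi \mapsto \lambda^{3/2} \psi(\lambda t, \lambda x)$, $A \mapsto \lambda A(\lambda t, \lambda x)$: both sides of the Dirac equation then scale by $\lambda^{5/2}$ and both sides of the wave equation for $A$ by $\lambda^{3}$. Since

$$ \| \psi_\lambda(0) \|_{\dot{H}^s(R^d)} = \lambda^{\frac{3}{2} + s - \frac{d}{2}} \| \psi_0 \|_{\dot{H}^s(R^d)},$$

the scale-invariant regularity for the spinor is $s = \frac{d-3}{2}$, which coincides with the charge regularity $L^2$ exactly in $d = 3$; in $d = 2$ and $d = 1$ the charge is subcritical, as stated.)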
I should study the [EGS] works….

Local theory in 3d:
• Gross 1966
• Bournaveas 1996
• Masmoudi and Nakanishi 2004
• d'Ancona, Foschi, Selberg: Complete null structure of DKG (2007) and MD (2010) and almost optimal LWP.

2d DKG

Charge class data and $(\phi, \phi_t) \in H^{1/2} \times H^{-1/2} (R^2)$.
• LWP is known for such data.
• To get the global result, we need to control $D(t)$, which is his notation for the $H^{1/2} \times H^{-1/2}$ size of the evolving solution $(u(t), \partial_t u(t))$.

Theorem (Grünrock-Pecher 2010): 2d DKG is LWP up to time $T>0$ s.t. $T^{1/2} [1 + D(0)] \leq \epsilon$. Moreover, $\sup_{|t| \leq T} D(t) \leq D(0) + C T^{1/2}$ with $C$ dependent on the charge constant.

The globalizing procedure follows a general argument introduced by Colliander, Holmer and Tzirakis 2008. How does it go?
• $T^{1/2} [1 + D(0)] = \epsilon/2$
• $T^{1/2} \sim \frac{1}{D(0)}$
• You iterate $n$ steps and accumulate errors until $n C T^{1/2} \sim D(0)$. This develops the solution onto a time interval of size $nT \sim 1$, so you have advanced the solution to a local interval whose length only depends upon the charge. Therefore, you can iterate this process to make it go global.

We want to apply this procedure to do the same for MD.

Theorem (d'Ancona and Selberg 2010): 2d MD is LWP up to time $T>0$ s.t. $T^{1/2} [1 + D_T (0)] \leq \epsilon$ where $\epsilon$ depends upon the charge constant. Moreover, $$\sup_{|t| \leq T} D_T (t) \leq D_T (0) + C T^{1/2} \log \big(\tfrac{1}{T}\big).$$

Corollary: 2d MD is GWP.

The iteration procedure is more involved than the CHTz scheme due to a logarithmic loss. There is an intermediate iteration which reduces matters to a harmonic series! This was exposed nicely so I watched it without typing…..

What lies behind the proof?
• LWP, subcriticality
• Growth estimate for the EM field

Key Points
• Null structure of the nonlinear terms
• Refined bilinear estimates needed to exploit the structure
• Subcriticality is crucial

He went on to describe the null structure and ideas in extracting the required quantitative slack in the local theory to run the globalization scheme.

(many years of collaboration with Sogge and Nakamura) At the quadratic level, all those works deal only with derivative terms and not terms involving the solution itself. Let $K$ be a compact obstacle with $C^\infty$ boundary. We want to solve, in dimensions 3 and 4,

Problem $S$: $$\square u = |u|^p$$ in the exterior of $K$ with vanishing Neumann condition and small initial data. Let's call this problem $S$. Let's assume here that $K$ is nontrapping.

Problem $Q$: $$ \square u = Q(u, u', u'') $$ with vanishing Dirichlet boundary conditions and with $K$ starshaped.

There are issues that make it difficult to work with the Klainerman vector fields, especially the boosts and the scaling vector fields. In this work, we will only work with $Z = [\partial_i, \Omega_{jk} = x_j \partial_k - x_k \partial_j]$.

Localized Energy Estimate:
$$ [\log(2+T)]^{-1/2} \| \langle x \rangle^{-1/2} u' \|_{L^2_{tx}} \lesssim \| u'(0)\|_{L^2} + \int_0^T \| \square u(s, \cdot)\|_{L^2} \, ds.$$

A weighted Sobolev inequality:
$$ R^{(n-1)/2} \| h \|_{L^\infty (\frac{R}{2} < |x| < R)} \lesssim \| Z^{\leq \frac{n+2}{2}} h \|_{L^2 (\frac{R}{4} < |x| < 2R)}.$$

$$ [\log(2+T)]^{-1/2} \| \langle x \rangle^{-1/2} Z^{\leq 10} u' \|_{L^2_{tx}} \lesssim \epsilon + \int_0^T \| Z^{\leq 10} (\partial_t u)^2 \|_{L^2} \, ds \lesssim \epsilon + \| \langle x \rangle^{-1/2} Z^{\leq 10} u' \|^2_{L^2_{tx}}.$$
Problem $S$: $p > p_c$ where $p_c > 0$ solves $(n-1) p_c^2 - (n+1) p_c - 2 = 0$:
• $n=3 \implies p_c = 1 + \sqrt{2}$
• $n=4 \implies p_c = 2$.

Theorem (Hidano-Metcalfe-Smith-Sogge-Zhou): $n=3,4$; $p_c < p < \frac{n+3}{n-1}$, $\gamma = \frac{n}{2} - \frac{2}{p-1}$. Then
$$\sum_{|\alpha| \leq 2} \big( \| Z^\alpha u(0, \cdot)\|_{\dot{H}^\gamma} + \| Z^\alpha \partial_t u(0, \cdot)\|_{\dot{H}^{\gamma -1}} \big) < \epsilon$$
implies global existence.

Problem $Q$:
• No boundary case:
• $n=3$: $\frac{c}{\epsilon^2}$ is the life-span (Lindblad and Hörmander)
• $n=4$: $\exp(C/\epsilon)$ is the life-span (Lindblad and Hörmander)
• $(\partial^2_u Q)(0,0,0) = 0$ (this kills $u^2$ terms, but we are considering here the starshaped boundary):
• $n=3$: (in progress with a student, John Helms)
• $n=4$: $\infty$ is the lifetime (Metcalfe-Sogge)

Abstract: In this paper we prove that ground states of the NLS which satisfy the sufficient conditions for orbital stability of M. Weinstein are also asymptotically stable, for seemingly generic equations. Here we assume that the NLS has a smooth short range nonlinearity. We assume also the presence of a very short range and smooth linear potential, to avoid translation invariance. The basic idea is to perform a Birkhoff normal form argument on the Hamiltonian, as in a paper by Bambusi and Cuccagna on the stability of the 0 solution for NLKG. But in our case, the natural coordinates arising from the linearization are not canonical. So we need also to apply the Darboux Theorem. With some care though, in order not to destroy some nice features of the initial Hamiltonian.

(This talk relates to the talk of Schlag.) It seems to me this talk is also closely related to the talks of Marzuola and Muñoz.

We study the nonlinear Schrödinger equation $$ i u_t = -\Delta u + V(x) u + \beta (|u|^2) u$$ in $R^3$. The results of this work do not apply to $i u_t = -\Delta u - |u|^{p-1} u$ with $p < 1 + \frac{4}{n}$. We assume existence of a family of ground states. When they are ground states they look like you expect, but he also had a graph involving nodes and I didn't understand…

Notions of stability:
1. linear stability (i.e. Weinstein's sufficient hypotheses for orbital stability)
• Only for ground states?
2. orbital stability
3. asymptotic stability
• $\lim_{t \rightarrow +\infty} \| u(t,x) - e^{i \theta(t)} \phi_{\omega_+} (x) - e^{it\Delta} h_+ (x)\|_{H^1_x} = 0$
4. CONJECTURE: 1. $\iff$ 2. $\iff$ 3.
5. Theorem: 1. $\implies$ 3. generically.

Specifically, we prove a nonlinear Fermi golden rule (terminology introduced by Soffer and Weinstein; Buslaev and Perelman used different terminology):
a. Some key coefficients are $\geq 0$;
b. Generically they are $> 0$.

One wants to prove that the remainder scatters. We have discrete and continuous modes. One wants to find a way to describe a mechanism of transfer from the discrete modes into the continuous modes. We want some way of writing the coordinates of the dynamics to reveal a damping effect in the discrete modes due to the transfer of the energy from the discrete modes into the continuous modes. The description of this transfer mechanism is the goal of the Fermi golden rule. Asymptotic stability is analogous to showing that $u(t)$ solving an NLS-type equation is not only of the same size in $H^1$ for all time but also showing that the solution scatters. This is the analysis we want to do on the remainder. Eigenvalues obstruct asymptotic stability. That explains the preoccupation of Schlag with his proof of the nonexistence of eigenvalues in the gap.
Near ground states, we write the solution in a canonical way as a sum of a modulated ground state plus a remainder term. The NLS can be recast as a dynamical system for the phase and scaling parameter coupled to the (presumably dispersive) behavior. He then changes variables so that the system is expressed as a matrix equation in which the "Hamiltonian structure is obscured". This is the standard way in the literature that the system is expressed. But somehow this way of writing it is wrong. (???) He makes some assumptions about the absence of embedded eigenvalues. He suggests this hypothesis is not necessary but is not certain….some discussion with Tataru. He writes on the board a horizontal line and draws points at 0, and a few eigenvalues parametrized by $\omega$. He then draws wavy stuff over the right half, starting at some point to the right of the eigenvalues, representing the continuous spectrum. …slides are coming fast and they are too dense for me to type in real time….

Alexandru Ionescu: Uniqueness theorems in general relativity

General relativity… Spacetimes $(M^4, g)$ are solutions of the Einstein vacuum equations

$$R_{\mu \nu} = g^{\alpha \beta} R_{\alpha \mu \beta \nu} = 0.$$

The metric is in 4 dimensions, it has 10 components. The Riemann tensor has 20 components. These are 10 equations for the 20 components.

Minkowski space: $(R^3 \times R, -dt^2 + dx^2 + dy^2 + dz^2)$. Besides being Ricci flat, this solution in fact also has zero Riemann tensor, and this condition completely characterizes the Minkowski space.

Schwarzschild spaces: $ds^2 = -(1 - \frac{2m}{r}) dt^2 + (1 - \frac{2m}{r})^{-1} dr^2 + r^2 (d\theta^2 + (\sin \theta)^2 d\phi^2)$ where $(r, t, \theta, \phi) \in (2m, \infty) \times R \times (0, \pi) \times S^1$. It took several decades to realize that $r = 2m$ is merely a coordinate singularity. This was realized with the Kruskal coordinates, in which the metric may be expressed $ds^2 = F^2 (-dt^2 + (dx')^2) + r^2 (d\theta^2 + (\sin \theta)^2 d\phi^2)$. The Kruskal picture is the region between the lobes of a hyperboloid of two sheets. The region below the lobes and inside the $|y| = |x|$ cone regions containing the lobes is called the black hole. The domain of outer communication is outside the cone.

Kerr spaces: $m$ is the mass of the black hole and $J$ is the angular momentum of the black hole. We assume $m > 0, ~ a = \frac{J}{m} \in [0, m)$ and let $r_+ = m + (m^2 - a^2)^{1/2}$. In Boyer-Lindquist coordinates $(r, t, \theta, \phi) \in (r_+, \infty) \times R \times (0, \pi) \times S^1$,

$$ds^2 = -\frac{\rho^2 \Delta}{\Sigma^2} dt^2 + \frac{\Sigma^2 (\sin \theta)^2}{\rho^2} \Big( d\phi - \frac{2amr}{\Sigma^2} dt \Big)^2 + \frac{\rho^2}{\Delta} (dr)^2 + \rho^2 (d\theta)^2$$

• $\Delta = r^2 + a^2 - 2mr$
• $\rho^2 = ….$ slide changed….
• $\Sigma^2 = …$

For Minkowski, 20 of 20 components of the Riemann tensor vanish. For Schwarzschild, 19 of the 20 components of the Riemann tensor vanish in the right coordinates. For Kerr, 18 of the 20 components vanish in the right coordinates.

Key properties of Kerr spacetimes:
• Solutions of the Einstein vacuum equations $R_{\alpha \mu} = 0$;
• Killing vector field $T = \partial_t$ timelike at "infinity";
• Killing vector field $Z = \partial_\phi$ with closed orbits;
• Geometric properties: asymptotic flatness, smooth bifurcate sphere, global hyperbolicity;
• Rigidity: Kerr spaces are real-analytic.

"No hair" theorems: such properties characterize the Kerr spaces (Carter, Robinson, Hawking-Ellis, Mazur, Bunting, Weinstein, Chrusciel-Costa).
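(For my own reference, since the slide changed before I caught them: the standard Boyer-Lindquist quantities, which I am fairly sure were the ones on the slide, are

$$\rho^2 = r^2 + a^2 \cos^2\theta, \qquad \Sigma^2 = (r^2 + a^2)^2 - a^2 \Delta \sin^2\theta,$$

with $\Delta = r^2 + a^2 - 2mr$ as above. Setting $a = 0$ gives $\rho^2 = r^2$, $\Sigma^2 = r^4$, $\Delta = r^2 - 2mr$, and the Kerr metric reduces to the Schwarzschild form written earlier.)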
"We are trying to understand final states."

Main Conjecture: If $(M^4, g, T)$ is a regular stationary vacuum, then the domain of outer communication of $M^4$ is isometric to the domain of outer communication of some Kerr spacetime of mass $m$ and angular momentum $ma$, $a \in [0, m)$.

What is "regular" in the conjecture? It took a long time to characterize what that means. There is a lot of supporting evidence.
• Carter 1971: axially symmetric black holes have only 2 degrees of freedom.
• Mathematically, an imprecise statement. It said there are "no bifurcations".
• Robinson 1975: the uniqueness conjecture holds in the case of axially symmetric black holes.
• A global argument involving the whole space.
• Hawking-Ellis 1973: the conjecture holds in the case of real-analytic spacetimes. Hawking's strategy is to define an additional Killing vector-field in the spacetime and reduce to the Carter-Robinson theorem. The assumption of real analyticity is not what you really want.

Theorem 1 (Ionescu-Klainerman): The conjecture holds provided that the scalar identity is assumed to be satisfied on the bifurcation sphere.

Theorem 2 (Alexakis-Ionescu-Klainerman): The conjecture holds provided that the spacetime is assumed to be "close" to a Kerr spacetime.

Theorem 3 (Alexakis-Ionescu-Klainerman): Assume $\cal{N}, \underline{\cal{N}}$ are smooth, null, non-expanding hypersurfaces in an Einstein vacuum $(O, g)$ which intersect transversally in a 2-sphere $Z$. Then there is an open neighborhood $O'$ of $Z$ and a nontrivial Killing vector-field $K$ in $O'$ which is tangent to the null generators of ${\cal{N}} \cup \underline{\cal{N}}$.

This is a local version of Hawking's Rigidity Theorem, without assuming analyticity of the spacetime.
• Construct the Hawking vector-field $K$ in the domain of dependence of ${\cal{N}} \cup \underline{\cal{N}}$ (Friedrich-Racz-Wald).
• Extend the vector-field to a full neighborhood of $Z$ by solving a transport equation $[L, K] = cL$.

Key steps in our strategy:
• We define some tensors: $\pi_{\alpha \beta}, W_{\alpha \beta \mu \nu}$.
• Prove a system of wave/transport equations of the form
• $\square_g W = {\cal{M}} (W, DW, \pi, D\pi)$
• $D_L \pi = {\cal{M}} (W, DW, \pi, D\pi)$
• Use Carleman estimates and a unique continuation argument to conclude that $W, \pi$ vanish in a neighborhood of $Z$.

Model Theorem (I-Klainerman): Assume $\phi \in C^2 (M)$ and $A, B^l \in C^0 (M)$ for $l = 0, \dots, d$. Assume that $\square \phi = A \phi + \sum_l B^l \cdot \partial \phi$…..ack slide change.

Unique Continuation: assume $\phi$ is smooth in $(O,g)$ and solves a wave equation $D^\alpha D_\alpha \phi = A \phi + B^\alpha D_\alpha \phi$. Assume $\phi$ vanishes in the set $[h < 0]$, where $h \in C^\infty (O), ~ \nabla h \neq 0$. Does $\phi$ vanish in a neighborhood of $[h \leq 0]$?

Suppose we have $T(u) = 0$ in $B$. Suppose $u_1, u_2$ are solutions in $B$ and $u_1 \sim u_2$ inside a small set $A \subset B$. Basically, there are three possibilities:
1. Lack of uniqueness: $u_1 = u_2$ inside $A$ but $u_1$ is far from $u_2$ in the big set $B$.
2. Well-posedness: If $u_1$ is close to $u_2$ in $A$ then $u_1$ is close to $u_2$ in $B$.
3. Unique continuation:
• If $u_1 = u_2$ in $A$ then $u_1 = u_2$ in $B$.
• If $u_1$ is close to $u_2$ in $A$ we are unable to conclude that $u_1$ is close to $u_2$ in $B$.

Hörmander's pseudo-convexity condition: Unique continuation holds if $X^\alpha X^\beta D_\alpha D_\beta h < 0$…ack slide change. The method is based on Carleman Estimates.
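(A rough aside of mine, not from the talk, on how Carleman estimates give unique continuation; I am suppressing the precise pseudo-convexity hypotheses. One proves a weighted inequality of the schematic form

$$ \lambda^3 \int e^{2\lambda \varphi} |u|^2 + \lambda \int e^{2\lambda \varphi} |D u|^2 \lesssim \int e^{2\lambda \varphi} |\square_g u|^2, \qquad u \in C^\infty_0, ~ \lambda \geq \lambda_0,$$

for a suitable weight $\varphi$ constructed from $h$. Applying this to a cutoff of $\phi$, the lower order terms $A \phi + B^\alpha D_\alpha \phi$ on the right are absorbed into the left side once $\lambda$ is large, while the cutoff errors are supported where $\varphi$ is smaller; sending $\lambda \rightarrow \infty$ then forces $\phi$ to vanish where $\varphi$ is largest, which gives the local vanishing across the level set.)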
Model theorem in Kerr spaces (I-Klainerman): Assume $W, A, B, C$ are smooth tensors in the Kerr space $K^4$, and

$$\square_{g_0} W = A \cdot W + B \cdot DW, \qquad {\cal{L}}_T W = C \cdot W.$$

If $W = 0$ on the horizon then $W = 0$ everywhere outside.

T-conditional pseudoconvexity property

We would really like a tensor $\cal{S}$ (an analog of the Riemann tensor $\cal{R}$) which has the following properties:
• It describes the Kerr spaces locally.
• It satisfies a suitable geometric equation of the form
$$ \square_g {\cal{S}} = A \cdot {\cal{S}} + B \cdot D{\cal{S}}, \qquad {\cal{L}}_T {\cal{S}} = C \cdot {\cal{S}}.$$

We then want to uniquely continue the vanishing. Mars-Simon found such a tensor. This is a tensor for Kerr which is analogous to the Riemann tensor for Minkowski. The Riemann tensor is a local quantity which characterizes the Minkowski space in the sense that when it vanishes, we know that we are in Minkowski space. Similarly, the Mars-Simon tensor characterizes Kerr. To go from local vanishing to conclude global vanishing, we need an analytic continuation.

More precise statement of Theorem 1: The domain of outer communication $E$ of a regular stationary vacuum $(M, g, T)$ is locally isometric to the domain of outer communication of a Kerr spacetime, provided that the identity
$$ - 4 m^2 {\cal{F}}^2 = (1 - \sigma)^4 $$
holds on the bifurcation sphere $S_0$.

More precise statement of Theorem 2: The domain of outer communication $E$ of a regular stationary vacuum $(M, g, T)$ is isometric to the domain of outer communication of a Kerr spacetime, provided that the smallness condition
$$ | (1 - \sigma) {\cal{S}} (T, e_\alpha, e_\beta, e_\gamma)| \leq {\overline{\epsilon}} $$
holds along a Cauchy hypersurface in $E$ for some sufficiently small ${\overline{\epsilon}}$.

I discussed with Alex whether one could (or should….) formulate a statement similar to Theorem 2 about Minkowski space using the Riemann tensor, like: Suppose that the Riemann tensor is small on some (small? geometric conditions?) set $A$ inside a spacetime manifold $(M, g)$. Can one conclude that the Riemann tensor must therefore vanish on $A$ or perhaps on a bigger set $B$? One can view the [AIK] Theorem 2 as a Liouville-type theorem: a smallness condition on the Mars-Simon tensor on a subset of $(M, g)$ with certain conditions implies that the Mars-Simon tensor vanishes. Is there a corresponding Liouville-type theorem where smallness of the Riemann tensor on an appropriate subset implies that the Riemann tensor actually vanishes?
Linear algebra From Wikipedia, the free encyclopedia Jump to: navigation, search Not to be confused with Elementary algebra. The set of points with coordinates that satisfy a linear equation form a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a single point is an important focus of study in Linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns. Such equations are naturally represented using the formalism of matrices and vectors.[1][2] Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations. Functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Techniques from linear algebra are also used in analytic geometry, engineering, physics, natural sciences, computer science, computer animation, and the social sciences (particularly in economics). Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear ones. The study of linear algebra first emerged from the study of determinants, which were used to solve systems of linear equations. Determinants were used by Leibniz in 1693, and subsequently, Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.[3] The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his “Theory of Extension” which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for "womb". While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[3] In 1882, Hüseyin Tevfik Pasha wrote the book titled "Linear Algebra".[4][5] The first modern and more precise definition of a vector space was introduced by Peano in 1888;[3] by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra first took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject of linear algebra beyond pure mathematics. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modelling and simulations.[3] The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Educational history[edit] Linear algebra first appeared in graduate textbooks in the 1940s and in undergraduate textbooks in the 1950s.[6] Following work by the School Mathematics Study Group, U.S. 
high schools asked 12th grade students to do "matrix algebra, formerly reserved for college" in the 1960s.[7] In France during the 1960s, educators attempted to teach linear algebra through affine dimensional vector spaces in the first year of secondary school. This was met with a backlash in the 1980s that removed linear algebra from the curriculum.[8] In 1993, the U.S.-based Linear Algebra Curriculum Study Group recommended that undergraduate linear algebra courses be given an application-based "matrix orientation" as opposed to a theoretical orientation.[9] Scope of study[edit] Vector spaces[edit] The main structures of linear algebra are vector spaces. A vector space over a field F is a set V together with two binary operations. Elements of V are called vectors and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation takes any scalar a and any vector v and outputs a new vector av. In view of the first example, where the multiplication is done by rescaling the vector v by a scalar a, the multiplication is called scalar multiplication of v by a. The operations of addition and multiplication in a vector space satisfy the following axioms.[10] In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F. Axiom Signification Commutativity of addition u + v = v + u Identity element of addition There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all vV. Elements of a general vector space V may be objects of any nature, for example, functions, polynomials, vectors, or matrices. Linear algebra is concerned with properties common to all vector spaces. Linear transformations[edit] Similarly as in the theory of other algebraic structures, linear algebra studies mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear transformation (also called linear map, linear mapping or linear operator) is a map T:V\to W that is compatible with addition and scalar multiplication: T(u+v)=T(u)+T(v), \quad T(av)=aT(v) for any vectors u,vV and a scalar aF. Additionally for any vectors u, vV and scalars a, bF: \quad T(au+bv)=T(au)+T(bv)=aT(u)+bT(v) When a bijective linear mapping exists between two vector spaces (that is, every vector from the second space is associated with exactly one in the first), we say that the two spaces are isomorphic. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view. One essential question in linear algebra is whether a mapping is an isomorphism or not, and this question can be answered by checking if the determinant is nonzero. If a mapping is not an isomorphism, linear algebra is interested in finding its range (or image) and the set of elements that get mapped to zero, called the kernel of the mapping. Linear transformations have geometric significance. For example, 2 × 2 real matrices denote standard planar mappings that preserve the origin. Subspaces, span, and basis[edit] Again in analogue with theories of other algebraic objects, linear algebra is interested in subsets of vector spaces that are vector spaces themselves; these subsets are called linear subspaces. For instance, the range and kernel of a linear mapping are both subspaces, and are thus often called the range space and the nullspace; these are important examples of subspaces. 
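To make the determinant test and the kernel concrete, here is a small numerical sketch; the use of NumPy and the particular matrix are my own choices for illustration, not something the text prescribes.

```python
import numpy as np

# A 3x3 matrix whose third row is the sum of the first two,
# so the corresponding linear map cannot be an isomorphism.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

print(np.linalg.det(A))          # ~0 (up to rounding), so A is not invertible
print(np.linalg.matrix_rank(A))  # 2, i.e. the range is only a plane in R^3

# The kernel can be read off from the singular value decomposition:
# right-singular vectors belonging to (numerically) zero singular values.
U, s, Vt = np.linalg.svd(A)
kernel = Vt[s < 1e-10 * s.max()]
print(kernel)           # one unit vector, proportional to (1, -2, 1)
print(A @ kernel[0])    # ~(0, 0, 0): this vector is mapped to zero
```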
Another important way of forming a subspace is to take a linear combination of a set of vectors v1, v2, …, vk: a_1 v_1 + a_2 v_2 + \cdots + a_k v_k, where a1, a2, …, ak are scalars. The set of all linear combinations of vectors v1, v2, …, vk is called their span, which forms a subspace. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, …, vk then these vectors are linearly independent. Given a set of vectors that span a space, if any vector w is a linear combination of other vectors (and so the set is not linearly independent), then the span would remain the same if we remove w from the set. Thus, a set of linearly dependent vectors is redundant in the sense that there will be a linearly independent subset will span the same subspace. Therefore, we are mostly interested in a linearly independent set of vectors that spans a vector space V, which we call a basis of V. Any set of vectors that spans V contains a basis, and any linearly independent set of vectors in V can be extended to a basis.[11] It turns out that if we accept the axiom of choice, every vector space has a basis;[12] nevertheless, this basis may be unnatural, and indeed, may not even be constructable. For instance, there exists a basis for the real numbers considered as a vector space over the rationals, but no explicit basis has been constructed. Any two bases of a vector space V have the same cardinality, which is called the dimension of V. The dimension of a vector space is well-defined by the dimension theorem for vector spaces. If a basis of V has finite number of elements, V is called a finite-dimensional vector space. If V is finite-dimensional and U is a subspace of V, then dim U ≤ dim V. If U1 and U2 are subspaces of V, then \dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2).[13] One often restricts consideration to finite-dimensional vector spaces. A fundamental theorem of linear algebra states that all vector spaces of the same dimension are isomorphic,[14] giving an easy way of characterizing isomorphism. Vectors as n-tuples: matrix theory[edit] Main article: Matrix (mathematics) A particular basis {v1, v2, …, vn} of V allows one to construct a coordinate system in V: the vector with coordinates (a1, a2, …, an) is the linear combination a_1 v_1 + a_2 v_2 + \cdots + a_n v_n. \, The condition that v1, v2, …, vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, …, vn assures that these coordinates are unique (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space Fn. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in Fn. Furthermore, if V and W are an n-dimensional and m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T: VW may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Two matrices that encode the same linear transformation in different bases are called similar. Matrix theory replaces the study of linear transformations, which were defined axiomatically, by the study of matrices, which are concrete objects. 
This major technique distinguishes linear algebra from theories of other algebraic structures, which usually cannot be parameterized so concretely. There is an important distinction between the coordinate n-space Rn and a general finite-dimensional vector space V. While Rn has a standard basis {e1, e2, …, en}, a vector space V typically does not come equipped with such a basis and many different bases exist (although they all consist of the same number of elements equal to the dimension of V). One major application of the matrix theory is calculation of determinants, a central concept in linear algebra. While determinants could be defined in a basis-free manner, they are usually introduced via a specific representation of the mapping; the value of the determinant does not depend on the specific basis. It turns out that a mapping has an inverse if and only if the determinant has an inverse (every non-zero real or complex number has an inverse[15]). If the determinant is zero, then the nullspace is nontrivial. Determinants have other applications, including a systematic way of seeing if a set of vectors is linearly independent (we write the vectors as the columns of a matrix, and if the determinant of that matrix is zero, the vectors are linearly dependent). Determinants could also be used to solve systems of linear equations (see Cramer's rule), but in real applications, Gaussian elimination is a faster method. Eigenvalues and eigenvectors[edit] In general, the action of a linear transformation may be quite complex. Attention to low-dimensional examples gives an indication of the variety of their types. One strategy for a general n-dimensional transformation T is to find "characteristic lines" that are invariant sets under T. If v is a non-zero vector such that Tv is a scalar multiple of v, then the line through 0 and v is an invariant set under T and v is called a characteristic vector or eigenvector. The scalar λ such that Tv = λv is called a characteristic value or eigenvalue of T. To find an eigenvector or an eigenvalue, we note that Tv-\lambda v=(T-\lambda \, \text{I})v=0, where I is the identity matrix. For there to be nontrivial solutions to that equation, det(T − λ I) = 0. The determinant is a polynomial, and so the eigenvalues are not guaranteed to exist if the field is R. Thus, we often work with an algebraically closed field such as the complex numbers when dealing with eigenvectors and eigenvalues so that an eigenvalue will always exist. It would be particularly nice if given a transformation T taking a vector space V into itself we can find a basis for V consisting of eigenvectors. If such a basis exists, we can easily compute the action of the transformation on any vector: if v1, v2, …, vn are linearly independent eigenvectors of a mapping of n-dimensional spaces T with (not necessarily distinct) eigenvalues λ1, λ2, …, λn, and if v = a1v1 + ... + an vn, then, T(v)=T(a_1 v_1)+\cdots+T(a_n v_n)=a_1 T(v_1)+\cdots+a_n T(v_n)=a_1 \lambda_1 v_1 + \cdots +a_n \lambda_n v_n. Such a transformation is called a diagonalizable matrix since in the eigenbasis, the transformation is represented by a diagonal matrix. Because operations like matrix multiplication, matrix inversion, and determinant calculation are simple on diagonal matrices, computations involving matrices are much simpler if we can bring the matrix to a diagonal form. Not all matrices are diagonalizable (even over an algebraically closed field). 
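A short sketch of this eigendecomposition in practice (again NumPy, with an arbitrarily chosen symmetric matrix; real symmetric matrices are always diagonalizable, so the example is guaranteed to work):

```python
import numpy as np

# A symmetric matrix; real symmetric matrices are always diagonalizable.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(T)   # columns of `eigenvectors` are the v_i
print(eigenvalues)                             # 3 and 1 (order may vary)

# Check T v = lambda v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(T @ v, lam * v))         # True, True

# In the eigenbasis the transformation is represented by a diagonal matrix:
P = eigenvectors
print(np.round(np.linalg.inv(P) @ T @ P, 10))  # diag(3, 1) up to ordering
```

A non-diagonalizable matrix such as [[1, 1], [0, 1]] would instead return a repeated eigenvalue with numerically (near-)dependent eigenvectors.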
Inner-product spaces

An inner product is a map

\langle \cdot, \cdot \rangle : V \times V \rightarrow \mathbf{F}

that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:[16][17]

• Conjugate symmetry: \langle u,v\rangle = \overline{\langle v,u\rangle}. Note that in R, it is symmetric.
• Linearity in the first argument: \langle au,v\rangle= a \langle u,v\rangle and \langle u+w,v\rangle = \langle u,v\rangle + \langle w,v\rangle.
• Positive-definiteness: \langle v,v\rangle \geq 0, with equality only for v = 0.

We can define the length of a vector v in V by \|v\|^2=\langle v,v\rangle, and we can prove the Cauchy–Schwarz inequality: |\langle u,v\rangle| \leq \|u\| \cdot \|v\|. In particular, the quantity \frac{|\langle u,v\rangle|}{\|u\| \cdot \|v\|} \leq 1, so it can be interpreted as the cosine of the angle between u and v. Two vectors are orthogonal if \langle u, v\rangle =0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis could be found by the Gram–Schmidt procedure. Orthonormal bases are particularly nice to deal with, since if v = a1 v1 + ... + an vn, then a_i = \langle v,v_i \rangle. For a linear operator T on an inner-product space, the adjoint operator T^* is characterized by \langle T u, v \rangle = \langle u, T^* v\rangle.

Some main useful theorems

• A matrix is invertible, or non-singular, if and only if the linear map represented by the matrix is an isomorphism.
• Any vector space over a field F of dimension n is isomorphic to Fn as a vector space over F.
• Corollary: Any two vector spaces over F of the same finite dimension are isomorphic to each other.
• A linear map is an isomorphism if and only if the determinant is nonzero.

Because of the ubiquity of vector spaces, linear algebra is used in many fields of mathematics, natural sciences, computer science, and social science. Below are just some examples of applications of linear algebra.

Solution of linear systems

Linear algebra provides the formal setting for the linear combination of equations used in the Gaussian method. Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

2x + y - z = 8 (L_1)
-3x - y + 2z = -11 (L_2)
-2x + y + 2z = -3 (L_3)

The Gaussian-elimination algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for. In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L_2 + \tfrac{3}{2}L_1 \rightarrow L_2
L_3 + L_1 \rightarrow L_3

The result is:

2x + y - z = 8
(1/2)y + (1/2)z = 1
2y + z = 5

Now y is eliminated from L3 by adding -4L2 to L3:

L_3 - 4L_2 \rightarrow L_3

The result is:

2x + y - z = 8
(1/2)y + (1/2)z = 1
-z = 1

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = -1 (L_3)

Then, z can be substituted into L2, which can then be solved to obtain

y = 3 (L_2)

Next, z and y can be substituted into L1, which can be solved to obtain

x = 2 (L_1)

The system is solved.
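The same system can also be handed to a linear-algebra routine directly; this NumPy sketch (library choice mine) reproduces the solution x = 2, y = 3, z = -1 obtained above by elimination and checks the uniqueness criterion det A ≠ 0.

```python
import numpy as np

# Coefficients and right-hand side of the system solved above:
#    2x +  y -  z =   8
#   -3x -  y + 2z = -11
#   -2x +  y + 2z =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)       # LU factorization, i.e. Gaussian elimination
print(x)                        # [ 2.  3. -1.]
print(np.allclose(A @ x, b))    # True: the residual vanishes
print(np.linalg.det(A))         # -1.0, nonzero, so the solution is unique
```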
We can, in general, write any system of linear equations as a matrix equation: The solution of this system is characterized as follows: first, we find a particular solution x0 of this equation using Gaussian elimination. Then, we compute the solutions of Ax = 0; that is, we find the null space N of A. The solution set of this equation is given by x_0+N=\{x_0+n: n\in N \}. If the number of variables equal the number of equations, then we can characterize when the system has a unique solution: since N is trivial if and only if det A ≠ 0, the equation has a unique solution if and only if det A ≠ 0.[18] Least-squares best fit line[edit] The least squares method is used to determine the best fit line for a set of data.[19] This line will minimize the sum of the squares of the residuals. Fourier series expansion[edit] Fourier series are a representation of a function f: [−π, π] → R as a trigonometric series: This series expansion is extremely useful in solving partial differential equations. In this article, we will not be concerned with convergence issues; it is nice to note that all Lipschitz-continuous functions have a converging Fourier series expansion, and nice enough discontinuous functions have a Fourier series that converges to the function value at most points. The space of all functions that can be represented by a Fourier series form a vector space (technically speaking, we call functions that have the same Fourier series expansion the "same" function, since two different discontinuous functions might have the same Fourier series). Moreover, this space is also an inner product space with the inner product \langle f,g \rangle= \frac{1}{\pi} \int_{-\pi}^\pi f(x) g(x) \, dx. The functions gn(x) = sin(nx) for n > 0 and hn(x) = cos(nx) for n ≥ 0 are an orthonormal basis for the space of Fourier-expandable functions. We can thus use the tools of linear algebra to find the expansion of any function in this space in terms of these basis functions. For instance, to find the coefficient ak, we take the inner product with hk: \langle f,h_k \rangle=\frac{a_0}{2}\langle h_0,h_k \rangle + \sum_{n=1}^\infty \, [a_n \langle h_n,h_k\rangle + b_n \langle\ g_n,h_k \rangle], and by orthonormality, \langle f,h_k\rangle=a_k; that is, a_k = \frac{1}{\pi} \int_{-\pi}^\pi f(x) \cos(kx) \, dx. Quantum mechanics[edit] Quantum mechanics is highly inspired by notions in linear algebra. In quantum mechanics, the physical state of a particle is represented by a vector, and observables (such as momentum, energy, and angular momentum) are represented by linear operators on the underlying vector space. More concretely, the wave function of a particle describes its physical state and lies in the vector space L2 (the functions φ: R3C such that \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^{\infty} |\phi|^2 dxdydz is finite), and it evolves according to the Schrödinger equation. Energy is represented as the operator H=-\frac{\hbar^2}{2m} \nabla^2 + V(x,y,z), where V is the potential energy. H is also known as the Hamiltonian operator. The eigenvalues of H represents the possible energies that can be observed. Given a particle in some state φ, we can expand φ into a linear combination of eigenstates of H. The component of H in each eigenstate determines the probability of measuring the corresponding eigenvalue, and the measurement forces the particle to assume that eigenstate (wave function collapse). 
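Both the Fourier expansion and the quantum-mechanical expansion into energy eigenstates are instances of the same linear-algebra operation: taking inner products with an orthonormal basis. A small numerical sketch of the Fourier case, for the arbitrarily chosen function f(x) = x (NumPy and SciPy assumed as a matter of convenience):

```python
import numpy as np
from scipy.integrate import quad

# Inner product used above: <f, g> = (1/pi) * integral of f*g over [-pi, pi].
def inner(f, g):
    value, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return value / np.pi

f = lambda x: x   # odd function, so only the sine coefficients b_n are nonzero

for n in range(1, 5):
    b_n = inner(f, lambda x, n=n: np.sin(n * x))  # component along g_n(x) = sin(nx)
    exact = 2.0 * (-1) ** (n + 1) / n             # known closed form for f(x) = x
    print(n, round(b_n, 6), round(exact, 6))      # the two columns agree
```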
Geometric introduction[edit] Many of the principles and techniques of linear algebra can be seen in the geometry of lines in a real two dimensional plane E. When formulated using vectors and matrices the geometry of points and lines in the plane can be extended to the geometry of points and hyperplanes in high-dimensional spaces. Point coordinates in the plane E are ordered pairs of real numbers, (x,y), and a line is defined as the set of points (x,y) that satisfy the linear equation λ: ax+by + c =0 (where the matrix [a, b, c] is nonzero).[20] Then, \lambda: \begin{bmatrix} a & b & c\end{bmatrix} \begin{Bmatrix} x\\ y \\1\end{Bmatrix} = 0, where x=(x, y, 1) is the 3x1 set of homogeneous coordinates associated with the point (x, y).[21] Homogeneous coordinates identify the plane E with the z=1 plane in three dimensional space. The x-y coordinates in E are obtained from homogeneous coordinates y=(y1, y2, y3) by dividing by the third component (if it is nonzero) to obtain y=(y1/y3, y2/y3, 1 ). The linear equation, λ, has the important property, that if x1 and x2 are homogeneous coordinates of points on the line, then the point αx1 + βx2 is also on the line, for any real α and β. Now consider two lines λ1: a1x+b1y + c1 =0 and λ2: a2x+b2y + c2 =0. The intersection of these two lines is defined by x=(x, y, 1) that satisfy the matrix equation, \lambda_{1,2}: \begin{bmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2 \end{bmatrix} \begin{Bmatrix} x\\ y \\1\end{Bmatrix} = \begin{Bmatrix}0\\0 \end{Bmatrix}, or using homogeneous coordinates, The point of intersection of these two lines is the unique non-zero solution of these equations. In homogeneous coordinates, the solutions are multiples of the following solution:[21] x_1 = \begin{vmatrix} b_1 & c_1\\ b_2 & c_2\end{vmatrix}, x_2 = -\begin{vmatrix} a_1 & c_1\\ a_2 & c_2\end{vmatrix}, x_3 = \begin{vmatrix} a_1 & b_1\\ a_2 & b_2\end{vmatrix} if the rows of B are linearly independent (i.e., λ1 and λ2 represent distinct lines). Divide through by x3 to get Cramer's rule for the solution of a set of two linear equations in two unknowns.[22] Notice that this yields a point in the z=1 plane only when the 2x2 submatrix associated with x3 has a non-zero determinant. It is interesting to consider the case of three lines, λ1, λ2 and λ3, which yield the matrix equation, \lambda_{1,2,3}: \begin{bmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{bmatrix} \begin{Bmatrix} x\\ y \\1\end{Bmatrix} = \begin{Bmatrix}0\\0 \\0\end{Bmatrix}. which in homogeneous form yields, Clearly, this equation has the solution x=(0,0,0), which is not a point on the z=1 plane E. For a solution to exist in the plane E, the coefficient matrix C must have rank 2, which means its determinant must be zero. Another way to say this is that the columns of the matrix must be linearly dependent. Introduction to linear transformations[edit] Another way to approach linear algebra is to consider linear functions on the two dimensional real plane E=R2. Here R denotes the set of real numbers. Let x=(x, y) be an arbitrary vector in E and consider the linear function λ: ER, given by \lambda: \begin{bmatrix}a & b\end{bmatrix}\begin{Bmatrix} x\\y\end{Bmatrix} = c, This transformation has the important property that if Ay=d, then A(\alpha\mathbf{x}+\beta \mathbf{y}) = \alpha A \mathbf{x} + \beta A\mathbf{y} = \alpha c + \beta d. This shows that the sum of vectors in E map to the sum of their images in R. 
This is the defining characteristic of a linear map, or linear transformation.[20] For this case, where the image space is a real number the map is called a linear functional.[22] Consider the linear functional a little more carefully. Let i=(1,0) and j =(0,1) be the natural basis vectors on E, so that x=xi+yj. It is now possible to see that A\mathbf{x} = A(x\mathbf{i}+y\mathbf{j})=x A\mathbf{i} + y A\mathbf{j} = \begin{bmatrix}A\mathbf{i} & A\mathbf{j}\end{bmatrix}\begin{Bmatrix} x\\y\end{Bmatrix} = \begin{bmatrix}a & b\end{bmatrix}\begin{Bmatrix} x\\y\end{Bmatrix} = c. Thus, the columns of the matrix A are the image of the basis vectors of E in R. This is true for any pair of vectors used to define coordinates in E. Suppose we select a non-orthogonal non-unit vector basis v and w to define coordinates of vectors in E. This means a vector x has coordinates (α,β), such that xvw. Then, we have the linear functional \lambda: A\mathbf{x} = \begin{bmatrix} A\mathbf{v} & A\mathbf{w} \end{bmatrix}\begin{Bmatrix} \alpha \\ \beta \end{Bmatrix} = \begin{bmatrix} d & e \end{bmatrix}\begin{Bmatrix} \alpha \\ \beta \end{Bmatrix} =c, where Av=d and Aw=e are the images of the basis vectors v and w. This is written in matrix form as \begin{bmatrix}a & b\end{bmatrix} \begin{bmatrix} v_1 & w_1 \\ v_2 & w_2 \end{bmatrix} =\begin{bmatrix} d & e \end{bmatrix}. Coordinates relative to a basis[edit] This leads to the question of how to determine the coordinates of a vector x relative to a general basis v and w in E. Assume that we know the coordinates of the vectors, x, v and w in the natural basis i=(1,0) and j =(0,1). Our goal is two find the real numbers α, β, so that xvw, that is \begin{Bmatrix} x \\ y \end{Bmatrix} = \begin{bmatrix} v_1 & w_1 \\ v_2 & w_2 \end{bmatrix} \begin{Bmatrix} \alpha \\ \beta\end{Bmatrix}. To solve this equation for α, β, we compute the linear coordinate functionals σ and τ for the basis v, w, which are given by,[21] \sigma = \begin{bmatrix}\sigma_1 &\sigma_2\end{bmatrix}=\frac{1}{v_1 w_2- v_2w_1}\begin{bmatrix} w_2 & - w_1\end{bmatrix}, \tau = \begin{bmatrix}\tau_1 &\tau_2\end{bmatrix}=\frac{1}{v_1 w_2- v_2w_1}\begin{bmatrix} -v_2 & v_1\end{bmatrix}, The functionals σ and τ compute the components of x along the basis vectors v and w, respectively, that is, \sigma \mathbf{x}=\alpha, \tau\mathbf{x}=\beta, which can be written in matrix form as \begin{bmatrix} \sigma_1 & \sigma_2 \\ \tau_1 &\tau_2 \end{bmatrix} \begin{Bmatrix} x \\ y \end{Bmatrix} =\begin{Bmatrix} \alpha \\ \beta\end{Bmatrix}. These coordinate functionals have the properties, \sigma\mathbf{v}=1, \sigma\mathbf{w}=0, \tau\mathbf{w}=1, \tau\mathbf{v}=0. These equations can be assembled into the single matrix equation, Thus, the matrix formed by the coordinate linear functionals is the inverse of the matrix formed by the basis vectors.[20][22] Inverse image[edit] The set of points in the plane E that map to the same image in R under the linear functional λ define a line in E. This line is the image of the inverse map, λ−1: RE. This inverse image is the set of the points x=(x, y) that solve the equation, A\mathbf{x}=\begin{bmatrix}a & b\end{bmatrix}\begin{Bmatrix} x\\y\end{Bmatrix} = c. Notice that a linear functional operates on known values for x=(x, y) to compute a value c in R, while the inverse image seeks the values for x=(x, y) that yield a specific value c. 
In order to solve the equation, we first recognize that only one of the two unknowns (x,y) can be determined, so we select y to be determined, and rearrange the equation by = c - ax. Solve for y and obtain the inverse image as the set of points, \mathbf{x}(t) = \begin{Bmatrix} 0\\ c/b\end{Bmatrix} + t\begin{Bmatrix} 1\\ -a/b\end{Bmatrix}=\mathbf{p} + t\mathbf{h} . For convenience the free parameter x has been relabeled t. The vector p defines the intersection of the line with the y-axis, known as the y-intercept. The vector h satisfies the homogeneous equation, A\mathbf{h}= \begin{bmatrix}a & b\end{bmatrix} \begin{Bmatrix} 1\\ -a/b\end{Bmatrix}= 0. Notice that if h is a solution to this homogeneous equation, then t h is also a solution. The set of points of a linear functional that map to zero define the kernel of the linear functional. The line can be considered to be the set of points h in the kernel translated by the vector p.[20][22] Generalizations and related topics[edit] Since linear algebra is a successful theory, its methods have been developed and generalized in other parts of mathematics. In module theory, one replaces the field of scalars by a ring. The concepts of linear independence, span, basis, and dimension (which is called rank in module theory) still make sense. Nevertheless, many theorems from linear algebra become false in module theory. For instance, not all modules have a basis (those that do are called free modules), the rank of a free module is not necessarily unique, not every linearly independent subset of a module can be extended to form a basis, and not every subset of a module that spans the space contains a basis. Representation theory studies the actions of algebraic objects on vector spaces by representing these objects as matrices. It is interested in all the ways that this is possible, and it does so by finding subspaces invariant under all transformations of the algebra. The concept of eigenvalues and eigenvectors is especially important. Algebraic geometry considers the solutions of systems of polynomial equations. There are several related topics in the field of Computer Programming that utilizes much of the techniques and theorems Linear Algebra encompasses and refers to. See also[edit] 4. ^ 5. ^ 6. ^ Tucker, Alan (1993). "The Growing Importance of Linear Algebra in Undergraduate Mathematics". College Mathematics Journal 24 (1): 3–9. doi:10.2307/2686426.  7. ^ Goodlad, John I.; von stoephasius, Reneta; Klein, M. Frances (1966). "The changing school curriculum". U.S. Department of Health, Education, and Welfare: Office of Education. Retrieved 9 July 2014.  8. ^ Dorier, Jean-Luc; Robert, Aline; Robinet, Jacqueline; Rogalsiu, Marc (2000). Dorier, Jean-Luc, ed. The Obstacle of Formalism in Linear Algebra. Springer. pp. 85–124. ISBN 978-0-7923-6539-6. Retrieved 9 July 2014.  9. ^ Carlson, David; Johnson, Charles R.; Lay, David C.; Porter, A. Duane (1993). "The Linear Algebra Curriculum Study Group Recommendations for the First Course in Linear Algebra". The College Mathematics Journal 24 (1): 41–46. doi:10.2307/2686430.  10. ^ Roman 2005, ch. 1, p. 27 11. ^ Axler (2004), pp. 28–29 12. ^ The existence of a basis is straightforward for countably generated vector spaces, and for well-ordered vector spaces, but in full generality it is logically equivalent to the axiom of choice. 13. ^ Axler (2204), p. 33 14. ^ Axler (2004), p. 55 15. ^ If we restrict to integers, then only 1 and -1 have an inverse. 
Consequently, the inverse of an integer matrix is an integer matrix if and only if the determinant is 1 or -1. 18. ^ Gunawardena, Jeremy. "Matrix algebra for beginners, Part I". Harvard Medical School. Retrieved 2 May 2012.  19. ^ Miller, Steven. "The Method of Least Squares". Brown University. Retrieved 1 May 2013.  20. ^ a b c d Strang, Gilbert (July 19, 2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 978-0-03-010567-8  21. ^ a b c J. G. Semple and G. T. Kneebone, Algebraic Projective Geometry, Clarendon Press, London, 1952. 22. ^ a b c d E. D. Nering, Linear Algebra and Matrix Theory, John-Wiley, New York, NY, 1963 Further reading[edit] • Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra" ([1]), American Mathematical Monthly 86 (1979), pp. 809–817. • Grassmann, Hermann, Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, O. Wigand, Leipzig, 1844. Introductory textbooks • Bretscher, Otto (June 28, 2004), Linear Algebra with Applications (3rd ed.), Prentice Hall, ISBN 978-0-13-145334-0  • Farin, Gerald; Hansford, Dianne (December 15, 2004), Practical Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-1-56881-234-2  • Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (November 11, 2002), Linear Algebra (4th ed.), Prentice Hall, ISBN 978-0-13-008451-4  • Hefferon, Jim (2008), Linear Algebra  • Kolman, Bernard; Hill, David R. (May 3, 2007), Elementary Linear Algebra with Applications (9th ed.), Prentice Hall, ISBN 978-0-13-229654-0  Advanced textbooks Study guides and outlines External links[edit] Online books[edit]
You know how there are no antiparticles for the Schrödinger equation. I've been pushing around the equation and have found a solution that seems to indicate there are - I've probably missed something obvious, so please read on and tell me the error of my ways...

Schrödinger's equation from the Princeton Guide to Advanced Physics, p. 200: write $\hbar = 1$, then for a free particle

$$i \psi \frac{\partial T}{\partial t} = \frac{1}{2m}\frac{\partial ^2\psi }{\partial x^2}T$$ $$i \frac{1}{T} \frac{\partial T}{\partial t} = \frac{i^2}{2m}\frac{1}{\psi }\frac{\partial ^2\psi }{\partial x^2}$$

This is true iff both sides equal $\alpha$; it can be shown there is a general solution

(1) $$\psi (x,t) := \psi (x) e^{-i E t}$$

But if I break time into two sets, past -t and future +t, and allow energy to have only negative values for -t and positive values for +t, then the above general solution can be written as

(2) $$\psi (x,t) := \psi (x) e^{-i (-E) (-t)}$$

and it can be seen that (2) is the same as (1). [figure: energy vs. time diagram]

And now if I describe the time as monotonically decreasing for t < 0, it appears as if matter (read antimatter) is moving backwards in time. It's as if matter and antimatter are created at time zero (read the rest frame), which matches an interpretation of the Dirac equation. This violates Hamilton's principle that energy can never be negative; however, I think I can get round that by suggesting we never see the negative states, only the consequences of antimatter scattering light, which moves forward in time to our frame of reference. In other words, the information from the four-vector of the antiparticle is rotated to our frame of reference.

Now I've never seen this before, so I'm guessing I've missed something obvious - many apologies in advance; I'm not trying to prove something, just confused.

Shouldn't the second line (where you have rearranged) have a $-i$ at the front, since you have multiplied both sides by $i^2$? On the LHS you get $i^3 = -i$. – PPG Nov 26 '13 at 0:22

4 Answers

The functions $-iEt$ and $-i(-E)(-t)$ are exactly the same so they obviously correspond to the same sign of energy if they appear in the exponent defining $|\psi\rangle$.

It seems that you think that you may freely replace $t$ by $-t$ and change nothing else. However, this operation isn't a symmetry of the laws of physics, as you have actually demonstrated for Schrödinger's equation (because you also need to change the sign of $E$ or the sign in front of $H$ to make it work). The correct time reversal symmetry acts on the wave function in the simplest Schrödinger's equation model as $$ T: \psi(x,t)\mapsto \psi^T(x,t)= \psi^*(x,-t) $$ Note that there is the extra complex conjugation here – this map is "antilinear" rather than linear, we say. This complex conjugation maps $\exp(ipx)$ to $\exp(-ipx)$, which means that it reverts the sign of the momenta (and velocities), as needed for the particle(s) to evolve backwards in time relative to the original state. This complex conjugation also restores the positivity of the energy if the original equation had a positive definite Hamiltonian.

Note that the sign of the energy and the sign of the direction of time are correlated – much like the position is correlated with the momentum via $[x,p]=i\hbar$. They're "complementary" although the interpretation has to be a bit different for $E,t$.

Hmm.
You state "It seems that you think that you may freely replace t by −t and change nothing else", actually I stated "if I break time into two sets, past -t and future +t AND allow energy to have only negative values for -t, and positive values for +t," so both Energy and Time are inverted. Nope, no cigar. –  metzgeer Oct 17 '12 at 10:52 Fine, but I have explained why the non-relativistic kinetic energy is always positive, whether or not you act on the situation with time reversal: the correct time reversal includes the complex conjugation. When I said that you think you may just replace $t$ by $-t$, I meant that you think - and you just confirmed it - that you don't do anything else with the wave function than $t\to -t$ and you may extract things like the sign of the energy. But this ain't the case. Have you tried to read my answer or are you interested in it at all? –  Luboš Motl Oct 18 '12 at 4:54 I thought you were drunk –  metzgeer Oct 18 '12 at 11:06 add comment Feynman studied the relation between negative energy, antimatter, and particles moving backward in time. Let me quote him [1]: "The fundamental idea is that the 'negative energy' states represent the states of electrons moving backward in time [...] reversing the direction of proper time s amounts to the same as reversing the sign of the charge so that the electron moving backward in time would look like a positron moving forward in time." He uses the classical equation of motion for a simple proof, but then uses the representation of positrons as electrons moving backward in time in his Dirac equation approach to QED. Notice that the propagation kernel associated to the Dirac equation takes non-zero values for negative times. But taking the non-relativistic limit, the propagation kernel associated to the Schrödinger equation is exactly zero for negative times (see 15-3) and there is not room for antiparticles within the Schrödinger regime. In fact he confirms this before (15-12): "On the nonrelativistic case, the paths along which the particle reversed its motion in time are excluded". The disappearance of the negative energy levels in the nonrelativistic limit can be easily shown in the technique of the large and small components of the Dirac wavefunctions. [1] Section "Interpretation of negative energy states" In Richard P. Feynman. Quantum Electrodynamics; Advanced Book Classics; Perseus Books Group; 1998. share|improve this answer add comment Energy would exhibit both positive as well as negative energy if it were a living entity. So first one must answer is time alive? To solve any equasion shouldn't you know the values of all propertys within it?Idetify the propertys first. Only then could you solve it. share|improve this answer Your answer doesn't make much sense. What does it mean for time to be alive? Why does energy have negative and positive values if it is alive? –  Chris Mueller Feb 13 at 4:26 add comment Try working back through the maths if you assume that Time itself is a negative form of matter and energy. We are very good at measuring time, but so far have never managed to explain what exactly it is. Time was created in the Big Bang to balance the creation of matter and energy. It displays a negative gravitational force. share|improve this answer add comment protected by Qmechanic Feb 13 at 6:29 Would you like to answer one of these unanswered questions instead?
You are currently browsing the tag archive for the ‘nonlinear wave equation’ tag. I’ve just uploaded to the arXiv my paper “The high exponent limit $p \to \infty$ for the one-dimensional nonlinear wave equation“, submitted to Analysis & PDE.  This paper concerns an under-explored limit for the Cauchy problem \displaystyle -\phi_{tt} + \phi_{xx} = |\phi|^{p-1} \phi; \quad \phi(0,x) = \phi_0(x); \quad \phi_t(0,x) = \phi_1(x) (1) to the one-dimensional defocusing nonlinear wave equation, where \phi: {\Bbb R} \times {\Bbb R} \to {\Bbb R} is the unknown scalar field, p > 1 is an exponent, and \phi_0, \phi_1: {\Bbb R} \to {\Bbb R} are the initial position and velocity respectively, and the t and x subscripts denote differentiation in time and space.  To avoid some (extremely minor) technical difficulties let us assume that p is an odd integer, so that the nonlinearity is smooth; then standard energy methods, relying in particular on the conserved energy \displaystyle E(\phi)(t) = \int_{\Bbb R} \frac{1}{2} |\phi_t(t,x)|^2 + \frac{1}{2} |\phi_x(t,x)|^2 + \frac{1}{p+1} |\phi(t,x)|^{p+1}\ dx, (2) on finite speed of propagation, and on the one-dimensional Sobolev embedding H^1({\Bbb R}) \subset L^\infty({\Bbb R}), show that from any smooth initial data \phi_0, \phi_1, there is a unique global smooth solution \phi to the Cauchy problem (1). It is then natural to ask how the solution \phi behaves under various asymptotic limits.  Popular limits for these sorts of PDE include the asymptotic time limit t \to \pm \infty, the non-relativistic limit c \to \infty (where we insert suitable powers of c into various terms in (1)), the small dispersion limit (where we place a small factor in front of the dispersive term +\phi_{xx}), the high-frequency limit (where we send the frequency of the initial data \phi_0, \phi_1 to infinity), and so forth. Tristan Roy recently posed to me a different type of limit, which to the best of my knowledge has not been explored much in the literature (although some of the literature on limits of the Ginzburg-Landau equation has a somewhat similar flavour): the high exponent limit p \to \infty (holding the initial data \phi_0, \phi_1 fixed).  From (1) it is intuitively plausible that as p increases, the nonlinearity gets “stronger” when |\phi| > 1 and “weaker” when |\phi| < 1; the “limiting equation” \displaystyle -\phi_{tt} + \phi_{xx} = |\phi|^{\infty} \phi; \quad \phi(0,x) = \phi_0(x); \quad \phi_t(0,x) = \phi_1(x) (3) would then be expected to be linear when |\phi| < 1 and infinitely repulsive when |\phi| > 1 (i.e. in the limit, the solution should be confined to range in the interval [-1,1], much as is the case with linear wave and Schrödinger equations with an infinite barrier potential; though with the key difference that the nonlinear barrier in (3) is confining the range of \phi rather than the domain.). Of course, the equation (3) does not make rigorous sense as written; we need to formalise what an “infinite nonlinear barrier” is, and how the wave \phi will react to that barrier (e.g. will it reflect off of it, or be absorbed?).  So the questions are to find the correct description of the limiting equation, and to rigorously demonstrate that solutions to (1) converge in some sense to that equation. 
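As a quick illustrative aside, one can probe this limit numerically; the rough finite-difference sketch below (parameters chosen arbitrarily, and only a heuristic check rather than anything rigorous) integrates (1) for several values of p and tracks the maximum of |φ|, which one expects to settle near 1 as p grows.

```python
import numpy as np

def max_amplitude(p, L=20.0, nx=2000, T=4.0):
    """Leapfrog integration of -phi_tt + phi_xx = |phi|^(p-1) phi with
    phi(0,x) = 0, phi_t(0,x) = 3 exp(-x^2); returns the largest |phi| seen."""
    x = np.linspace(-L/2, L/2, nx)
    dx = x[1] - x[0]
    dt = 0.2 * dx                          # comfortably inside the CFL limit
    phi_prev = np.zeros(nx)                # phi at t = 0
    phi = dt * 3.0 * np.exp(-x**2)         # phi at t = dt (first-order start)
    largest = 0.0
    for _ in range(int(T / dt)):
        lap = np.zeros(nx)
        lap[1:-1] = (phi[2:] - 2*phi[1:-1] + phi[:-2]) / dx**2
        force = -np.abs(phi)**(p - 1) * phi          # defocusing nonlinearity
        phi, phi_prev = 2*phi - phi_prev + dt**2 * (lap + force), phi
        largest = max(largest, np.abs(phi).max())
    return largest

for p in (3, 9, 29):
    print(p, round(max_amplitude(p), 3))   # the maximum should creep down toward ~1
```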
It is natural to require that \phi_0 stays away from the barrier, in the sense that |\phi_0(x)| < 1 for all x; in particular this implies that the energy (2) stays (locally) bounded as p \to \infty; it also ensures that (1) converges in a satisfactory sense to the free wave equation for sufficiently short times.  For technical reasons we also have to make a mild assumption that either of the null energy densities \phi_1 \pm \partial_x \phi_0 vanish on a set with at most finitely many connected components.  The main result is then that as p \to \infty, the solution \phi = \phi^{(p)} to (1) converges locally uniformly to a Lipschitz, piecewise smooth limit \phi = \phi^{(\infty)}, which is restricted to take values in [-1,1], with -\phi_{tt}+\phi_{xx} (interpreted in a weak sense) being a negative measure supported on \{ \phi=+1\} plus a positive measure supported on \{\phi = -1\}.  Furthermore, we have the reflection conditions \displaystyle (\partial_t \pm \partial_x) |\phi_t \mp \phi_x| = 0. It turns out that the above conditions uniquely determine \phi, and one can even solve for \phi explicitly for any given data; such solutions start off smooth but pick up an increasing number of (Lipschitz continuous) singularities over time as they reflect back and forth across the nonlinear barriers \{\phi=+1\} and \{\phi=-1\}.  (An explicit example of such a reflection is given in the paper.) [The above conditions vaguely resemble entropy conditions, as appear for instance in kinetic formulations of conservation laws, though I do not know of a precise connection in this regard.] In the remainder of this post I would like to describe the strategy of proof and one of the key a priori bounds needed.  I also want to point out the connection to Liouville’s equation, which was discussed in the previous post. Read the rest of this entry » As is well known, the linear one-dimensional wave equation Read the rest of this entry » RSS Google+ feed Get every new post delivered to your Inbox. Join 3,571 other followers
Unit 3: Atoms and Light—Exploring Atomic and Electronic Structure

Alpha particle
A product of nuclear decay that is two protons and two neutrons, which form a particle with a structure identical to that of a helium nucleus with a charge of +2.

Blackbody radiation
A type of electromagnetic radiation that is emitted by a black body (a nonreflective and opaque object at uniform and constant temperature), such as the light emitted by a glowing hot stove coil.

Cathode ray tube
An evacuated tube with two electrodes inside it. High voltage electricity is applied to the negative electrode, creating a stream of electrons that travel to the positive electrode.

Decay
Occurs when the nucleus of an unstable atom disintegrates, emitting radiation (such as alpha particles, beta particles, or positrons), causing the atom to lose energy and become a different isotope.

Electromagnetic radiation
Radiation that is emitted and moves in a wave-like shape. It is synonymous with the word "light."

Electron
Negatively charged subatomic particle.

Inertia
The measure of an object's reluctance to accelerate under an applied force.

Infrared radiation
A form of electromagnetic radiation that falls between the visible light and microwave areas of the electromagnetic spectrum. Infrared light is further divided into "far," "mid," and "near" regions. Far infrared light is thermal, and we experience it as heat. Near infrared waves are used in fiber optic telecommunications.

Inversely proportional
Two variables are inversely proportional to each other if, as the value of one variable increases, the value of the other variable decreases at the same rate.

Neutron
A subatomic particle with no net electric charge that combines with protons to form the nucleus of the atom. The number of neutrons in an atom determines the isotope of the element.

Nucleus
The core of the atom, which consists of protons and neutrons. The diameter of the nucleus is extremely small relative to the diameter of the entire atom, which includes its electron cloud. The number of protons in the nucleus determines which element the atom is.

Photoelectric effect
The name given to what happens when light shines on the surface of an element, and then electrons are emitted from it, usually in the form of electricity.

Photon
An elementary particle (a particle lacking substituent parts) and the quantum (smallest unit) of electromagnetic radiation (light).

Proton
A positively charged subatomic particle that combines with neutrons to form the nucleus of the atom. The number of protons in the nucleus uniquely determines which specific element that atom is.

Quantum model of the atom
A model of the atom that describes the electrons in the atom as having only very specific values of energy and locations in space.

Schrödinger equation
A differential equation that, when solved for an atom, gives many possible solutions, corresponding to different possible wave functions for that atom. This equation is important in quantum mechanics because it demonstrates that an atomic orbital can be described as a probability distribution map of the position of an electron (rather than a rigidly defined orbital, in which the location of an electron can be known).

Spectroscopy
The study of light being absorbed or emitted by matter.

Speed of light
The speed of light in vacuum is 299,792,458 meters per second, and is the maximum speed any energy or matter can travel.
Subatomic particles
The particles into which an atom can be split.

Ultraviolet radiation
A form of electromagnetic radiation that falls between X-rays and visible light. Ultraviolet radiation from the sun is filtered through the Earth's atmosphere, mitigating its harmful effects on human health, yet the small fraction that penetrates the atmosphere can cause skin damage and cancer.

Wave particle duality
In quantum mechanics, when fast-moving particles of matter or photons of energy blur the lines between the wave-like nature of light and the particle-like nature of an object.

Wavefunction
In the solutions to the Schrödinger equation, electrons can be associated with mathematical functions, called "wavefunctions," that relate to their energy and probable locations in space.
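As a quick illustration of how several of these entries fit together, the energy of a single photon of wavelength λ is E = hc/λ; the short calculation below uses standard constant values, with the wavelength chosen arbitrarily as an example.

```python
# Energy of a single photon: E = h * c / wavelength.
h = 6.626e-34          # Planck constant, in J*s
c = 2.998e8            # speed of light, in m/s
wavelength = 500e-9    # 500 nm, green visible light (an arbitrary example)

E_joules = h * c / wavelength
E_eV = E_joules / 1.602e-19    # 1 eV = 1.602e-19 J

print(E_joules)   # about 4.0e-19 J
print(E_eV)       # about 2.5 eV, comparable to the work function of some metals
```

This per-photon bookkeeping is what the photoelectric effect entry refers to: whether electrons are ejected from a surface depends on the energy of each photon, not on the intensity of the light.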
Critique of a Metaphysics of Process demarcating things as they are conventionally & ultimately moving along swimmingly Part I : General Metaphysics I pay homage to Je Tsongkhapa inspired by the profound philosophies of Protector Nâgârjuna, Emmanuel Kant & Alfred North Whitehead with thanks to Willem of Ockham to the living trees Nâgârjuna : Mûlamadhyamakakârikâ, 1. Kant, I. : Critique of Practical Reason, conclusion, on Kant's tombstone. "'Creativity' is the universal of universals characterizing ultimate matter of fact. It is the ultimate principle by which the many, which are the universe disjunctively, become the one actual occasion, which is the universe conjunctively. It lies in the nature of things that the many enter into complex unity." Whitehead, A.N. : Process and Reality, § 31. Natura abhorret a vacuo. This work drew direct inspiration from Nâgârjuna's Fundamental Verses on the Middle Way, the Mûlamadhyamakakârikâ (2th century CE), Emmanuel Kant's Kritik der reinen Vernunft (1781), the Critique of Pure Reason, and Alfred North Whitehead's Process and Reality (1927/28). Knowing this, the reader is exempt, except for the odd quotation, from the burden of the usual battery of academic references. For the prolegomena to this metaphysics of process, consult Criticosynthesis. "Be empty, that is all." Part I : General Metaphysics. Chapter 1 Introducing Metaphysics & Ontology. 1.1 Metaphysics & Science. Object-Dependent, Imaginal & Perspectivistic Styles. § 1 The Issue of Style. § 2 Deriving Style from Objects. § 3 Imaginal Style. § 4 Creative Unfoldment. § 5 The Style of Process Metaphysics. B. Opposition, Reduction & Discordant Truce. § 1 The Axiomatic Base. § 2 Monism, Dualism or Pluralism. § 3 Critical Epistemology. § 4 Conflictual Model. § 5 Reductionist Model. § 6 Metaphysics & Criticism. § 7 Discordant Truce. § 8 The Objectivity of Sensate Objects. § 9 The Subjectivity of Mental Objects. § 10 Direct & Indirect Experience. C. Towards a Critical Metaphysics. § 1 Transcendence & Interdependence in Ancient Egyptian Sapience. § 2 Greek Metaphysics : Transcendence & Independence. § 3 Metaphysics in Monotheism & Modern Philosophy. § 4 The Fundamental Question : Being or Knowing ? § 5 Precritical Metaphysics : Being before Knowing. § 6 Critical Metaphysics : Knowing before Being. D. Valid Science & Critical Metaphysics. § 1 Transcendental Logic of Cognition. § 2 The Correct Logic of Scientific Discovery. § 3 The Validity of Scientific Knowledge. § 4 Casus-Law : the Maxims of Knowledge Production. § 5 Metaphysical Background Information. E. Thinking Metaphysical Advancement. § 1 The Mistake of Absolute Relativism. § 2 Logical Advance. § 3 Semantic Advance. 1.2 Immanent Metaphysics. A. The Limit-Concepts of Reason. § 1 Finite Series and the Infinite. § 2 Modern Limit-concepts : Soul, World, God. § 3 The Copernican Revolution. § 4 The Linguistic Turn. § 5 Epistemological Limit-concepts : the Real & the Ideal. § 6 Metaphysical Limit-concepts : Conserver, Designer & Clear Light*. B. Diversity & Convergence in the World. § 1 Horizontal : Variety, Display & the World-Ground. § 2 Vertical : Unity, Intelligent Focus & Clear Light*. C. The Alliance between Science & Immanent Metaphysics. § 1 The Alliance of Form. § 2 The Alliance of Contents. § 3 Empirical Significance & Heuristic Relevance. D. Limitations of a Possible Speculative Discourse. § 1 Logical Limitations. § 2 Semantic Limitations. § 3 Cognitive Limitations. 1.3 Transcendent Metaphysics. A. Jumping Beyond Limit-Concepts. 
§ 1 Epistemological Transgressions. § 2 Ontological Transgressions. § 3 Transgressive Metaphysics. § 4 Deconstruction & the Margin. B. Conceptuality & Non-Conceptuality. § 1 Conceptual Thought. § 2 Ante-rational Regressions. § 3 Meta-rational Transgressions. § 4 Direct Experience & Cognitive Nonduality. § 5 The Epistemological Status of Nonduality. C. Irrationality versus Poetic Sublimity. § 1 Features of Irrationality. § 2 Transcendence & Art. 1.4 Ontology. A. Defining Ontology without the Nature of Being. § 1 Place of Ontology in Metaphysics. § 2 Objects of Ontology : What is There ? § 3 Monist, Dualist & Pluralist Ontologies. § 4 Failures of Materialist & Spiritualist Ontologies. § 5 Voidness, Emptiness & Interdependence. B. Perennial Ontology ? § 1 The Ancient Egyptian Nun & the Pre-Socratic Ground. § 2 The Logic of Being & the Fact of Becoming. § 3 Greek & Indian Concept-Realism. § 4 The Tao. § 5 The Dharma Difference. C. Against Foundation & Substance. § 1 The Definition of Substance. § 2 The Münchausen Trilemma. § 3 Avoiding Dogmatism & Scepticism. D. Conventional Appearance. § 1 What is Truly There ? § 2 Concepts, Determinations & Conditions. § 3 Valid but Mistaken Appearance. § 4 Appearance, Illusion & the Universal Illusion. Ultimate Suchness/Thatness. § 1 The Katapathic View on the Ultimate. § 2 The Apophatic View on the Ultimate. § 3 The Non-Affirmative Negation. § 4 Fabricating the Ultimate : Ending Reified Concepts. § 5 The Direct Experience of the Unfabricated Ultimate. F. The Ontological Scheme. § 1 Event & Actual Occasion. § 2 Efficient & Final Determinations of an Actual Occasion. § 3 The Three Operators. § 4 Aggregates of Actual Occasions. § 5 Individualized Societies. § 6 Panpsychism versus Panexperientialism. § 7 The God* of Process Ontology. Chapter 2 Mental Pliancy & its Enemies. 2.1 Definition of Mind. § 1 Awareness, Attention & Cognizing. § 2 Attending Objects of the Mind. § 3 Cognizing Clarity. § 4 The Luminous Clear Ground of the Mind. 2.2 The Continuum of the Mindstream. § 1 A Non-Spatial Continuum : Temporal and Atemporal. § 2 Symmetry & Symmetry-Break. § 3 From Happiness to Peace. 2.3 The Non-Physical Domain of the Mind. § 1 Physical & Non-Physical Domains. § 2 Upward Causation. § 3 Downward Causation. 2.4 Ego, Self & Selflessness. § 1 Defining the Self. § 2 The Two Foci. § 3 Prehending the Selfless Mindstream. 2.5 Closed & Open Minds. § 1 The Logic of Self-Cherishing Affliction. § 2 Ontologizing the Self. § 3 The Closed Entropic Mind. § 4 The Mind of Enlightenment. § 5 The Open Negentropic Mind or Pliant Mind. Chapter 3 Metaphysics as Conventional Truth. 3.1 Conventional Truth as Valid but Mistaken. § 1 The Validation of Knowledge. § 2 The Relevance of Authority. § 3 The Significance of Experimentation. § 4 The Worth of Conventional Truth. § 5 How Conventional Truth Fails. § 6 Substantial Instantiation in Conventional Truth. 3.2 The Argument of Illusion. § 1 The Argument from the Senses. § 2 The Argument from the Rational Mind. § 3 The Argument from Speculative Reason. § 4 The Argument from Chapter 4 Speculative Thought. 4.1 Speculating on the Subject. § 1 The Identity System. § 2 Desubstantializing Identity. § 3 From Ego-Circularity to Bi-modality. § 4 Selflessness : Clearing the Ontic Self. § 5 The Immortal Nature of the Clear Light* Mind. 4.2 Speculating on the Object. § 1 The Object of Creative thought. § 2 Process : Clearing the Ontic World-System. Chapter 5 Preparing the Mind for Ultimate Truth. 5.1 Defining Ultimate Truth ? 
§ 1 Primordial Ground : the Undifferentiated. § 2 Unbounded Wholeness : the Absolute. § 3 Things As They Are : the Non-Deceptive. § 4 The Duality of the Simultaneous. 5.2 Conceptual Fallacies and Nondual Un-saying. § 1 Against the Ontology of the One Truth. § 2 Against the Ontology of Awakening. § 3 The Case of the Unity of the World-Ground. § 4 The Positive Power of Silence. 5.3 Generating Right View. § 1 Identifying the Culprit. § 2 Eliminating Concepts with Concepts. § 3 Contrived Realization of Full-Emptiness. § 4 Uncontrived Uncovering of the Clear Light Nature of Mind*. Chapter 6 The Logic of Ultimate Analysis. 6.1 Conventional & Ultimate Analysis. § 1 Conventional Analysis. § 2 Ultimate Analysis. § 3 The Dangers of Ultimate Analysis. 6.2 The Formal Presuppositions of Ultimate Analysis. § 1 The Rules of Formal Logic. § 2 Identity. § 3 Duality & Negation. § 4 Excluded Third. 6.3 The Primitives. § 1 The Logical Operators. § 2 The Quantifiers. § 3 Objects § 4 Differentiating Object § 5 The Apprehending Self. 6.4 The Six Instantiations. § 1 Instantiation. § 2 Logical Instantiation. § 3 Functional Instantiation. § 4 Conventional Instantiation. § 5 Substantial Instantiation. § 6 Ultimate (or Absolute) Instantiation. § 7 Mere Existential Instantiation. 6.5 The Logic of the Selflessness of Persons. § 1 Establishing Ontic Identity. § 2 Ontic Identity is not Identical with Mind or Body. § 3 Ontic Identity is not Different from Mind or Body. § 4 No Ontic Identity is Found. 6.6 The Logic of the Selflessness of Phenomena. § 1 Establishing Ontic Sensate Objects. § 2 Ontic Sensate Objects are not Identical with their Parts. § 3 Ontic Sensate Objects are not Different from their Parts. § 4 No Ontic Sensate Objects are Found. 6.7 Conclusions. § 1 Main Problems of Substantiality. § 2 Non-Substantiality. § 3 Dependent Arising & Process. § 4 One Object with Two Epistemic Isolates. § 5 Simultaneity : No Two Worlds & No Two States. 6.8 Full-Emptiness. § 1 Fullness of Earth : Process Nature of Objects & Subjects. § 2 Emptiness of Heaven : Absence of Inherent Existence. § 3 Pansacralism. Chapter 7 Preparative Ontology. 7.1 The Question of Questions : Why Something ? § 1 Nothingness : Relative & Absolute. § 2 Nothingness : Passive & Active. § 3 Nihilism of the Void. § 4 Active Nothingness : Potentiality & Virtuality. 7.2 Operating Something. A. Matter : Particles, Fields & Forces or Hardware. § 1 The Quantum Plasma of the World-Ground. § 2 The Beginning of the Conventional Spacetime Continuum. § 3 Elementary Particles, Fields & Forces. B. Information : Encoded Data or Software. § 4 Information : Informing & Informed. § 5 Informed Information. § 6 The Matter - Information Bond. § 7 Life as Complexification. C. Consciousness : Meaning & Intent or Userware. § 8 Meaning & Intent. § 9 Evolutionary Panexperientialism & Degrees of Consciousness. § 10 The Spiritual Features of Consciousness. 7.3 Towards a Metaphysics of Specifics. Part II : Metaphysics of Specifics. Chapter 8 Metaphysical Cosmology. Chapter 9 Metaphysical Cybernetics. Chapter 10 Metaphysical Biology. Chapter 11 Metaphysical Anthropology. Chapter 12 Metaphysical Mysticism. Chapter 13 Metaphysical Theology. Thematic Glossary Alphabetic Glossary Ontology, the study of what is shared in common by all existing things (individual phenomena or aggregates of phenomena), is the capstone of the love of wisdom. Ontology is also the final speculative goal of metaphysical inquiry, both immanent (within the world) and transcendent (beyond the world). 
Despite all possible variety between things (including conscious persons endowed with a human mind), ontology tries to lay bare the ultimate nature of all phenomena. In vain, no doubt. But in the process of this conceptual understanding, coarse, subtle & very subtle arguments are put in place. As history unfolds, "this" metaphysics of existence or process will inevitably be replaced by "that" better one. In the dialogue between these versions, complex new scientifically inspiring concepts may emerge. This inexhaustible complexification is one of the hallmarks of the history of valid ontologies.

To further the speculative branch of philosophy or "metaphysics", the normative disciplines of logic, epistemology, ethics & aesthetics have to influence the mind first (cf. Criticosynthesis, 2008). One has to know the principles of correct reasoning (transcendental logic), the norms of valid knowledge (theory of knowledge), the maxims of knowledge-production (practice of knowledge), the judgments pertaining to the good (the just, fair & right), providing maxims for what must be done (ethics), and the judgments pertaining to what we hope others may imitate, namely the sublime beauty of excellent & exemplary states of matter (aesthetics). These normative disciplines foster precise goals. Logic targets correctness, epistemology validity, ethics goodness and aesthetics unity & harmony. If left out, any metaphysical enterprise will be insufficiently capacitated. Then, to conceptualize the ultimate nature of phenomena, speculative depth & extent will be lacking.

When Andronikos of Rhodos (first century BCE) classified the works of Aristotle, he placed the books on First Philosophy next to fourteen treatises on Nature ("ta physika"). These were called "ta meta ta physika" or "the (books) coming after the (books on) nature" and so "metaphysics" was born. The names given to Aristotle's First Philosophy vary from "theology", "wisdom" (Aristotle), "transphysics" (Albertus Magnus), "hyperphysics" (Simplicius) to "paraphysics" ... Playing on the ambiguity in "meta", it was also taken to connote what is beyond sensible nature. For Aristotle, metaphysics was (a) the science of first principles and causes, (b) the science of being as being and (c) theology. Did Andronikos leave us a hint ? Should metaphysics, before starting to speculate, always first study physics, i.e. "science" ? Without the backbone of valid empirico-formal knowledge, can the totalizing conceptualization sought be anything other than incomplete and/or flawed ? Or worse : irrational nonsense ?

§ 1 Correctness and Validity.

Logic and epistemology teach how formal & empirico-formal knowledge and its advancement are possible. They focus on conventional truth, the functional reality of sensate & mental objects shared with other knowers. Logic rules the architecture of conceptual reasoning. Classical logic identifies truth-values, fallacies, consistency, coherence & completeness. It does so using the principles of identity, non-contradiction and excluded third. It invites us not to multiply entities needlessly (parsimony), and mostly builds on symmetry. Non-classical logics develop systems of inference based on alternative principles, needed to understand special objects like action, possibility or quantum phenomena. They teach us to work with paradox, absence of coherence or contradiction (para-consistency). Applying formal logic to the question of the ultimate nature of phenomena, or ultimate analysis (cf.
Ultimate Analysis, 2009), either results in the conceptualization of the absence of substantial reality of oneself (the identitylessness of persons), or in realizing the lack of such in phenomena (the selflessness of phenomena). Reifying the generic idea of emptiness ("shûnyatâ", cf. Emptiness Panacea, 2008) leads to nihilism, affirming self and non-self are unsubstantial and so nothing at all, not even functional. Nihilism may however disguise itself as essentialism, for nothingness itself, as an underlying void thing (hypokeimenon), is at times -paradoxically- turned into the nonexistent "stuff" out of which phenomena emerge. Rejecting ultimate analysis for no good reason leads to eternalism, affirming substantial existence of self and/or non-self. Here the many contradictions of substantialism are waved away. Clearly a mind analyzing reality by way of logic alone is not equipped to realize the wisdom unveiling ultimate truth. Nihilism and eternalism are weak positions. A mind thinking along those lines is not pliant, but either self-cherishing or self-annihilating. Both tendencies point to incorrect ontological presuppositions. Self-grasping has not come to an end. If any metaphysical insight is to be gained, both mentalities must be abandoned.

Defining valid knowledge, epistemology demarcates the rules of true knowledge in terms of valid empirico-formal statements of fact. Indeed, science is validated by experimentation & argumentation, metaphysics by the latter only (cf. Criticosynthesis, 2007, chapter 2). Rejecting substantialism, metaphysical speculation on process takes full advantage of the logic of ultimate analysis. Metaphysics of process is not a mummification of ideas, the denial of diversity and impermanence (of life itself) for the sake of a fictional stability, a "Jenseits" of imagination or a Platonic world. Nor is it the reification of the objective & subjective conditions of all possible thought. Metaphysics of process accepts the results of logic & science : absolutely isolated objects cannot be found. Metaphysics is not a speculation on substance but on process. The latter encompasses both absence & presence : the arising, the abiding and the ceasing. It does so because only interdependent, impermanent phenomena arise, abide & cease. These define a stream of functionally interrelated happenings (efficient) & moments of creative advance (finative). Ergo, metaphysics is not equated with idealism or Platonism. Nor with realism or Aristotelianism.

§ 2 The Pliancy of Mind.

Insofar as our speculative pursuit does not consider the link between, on the one hand, the existential conditions defining the egological state of the mind of Homo normalis and, on the other hand, the capacity to cognize the ultimate nature of things, ontology is nothing more than a subtle ornament of dry metaphysical intellectualism. Moreover, like someone describing how to swim without ever having touched water, these intellectual activities miss their target. The conclusions reached may be accepted or rejected without ending the existential dissatisfaction, both emotional & intellectual, present in those in whom these ideas and their speculative study happen. This considerably handicaps philosophy's ability to serve practical goals ! How to outline a philosophy of the practice of philosophy ?
Even if the necessity of the arguments cannot be obscured or confused, their influence on sensation, thought, feelings, action and consciousness is insufficient to actually liberate the mind from mental obscurations & afflictive emotions by unconcealing ultimate (absolute) truth, i.e. by the direct, non-conceptual & nondual experience of the ultimate nature of phenomena. Without considering the maieutic dimension assisting the liberation of human beings, without engaged thinking, speculative philosophy does not really take off. Then barren academia is what is left. The Socratic intent opposes this exclusive hold of philologistics on the pursuit of wisdom. Wisdom encompasses theory & practice. Philosophy is both abstract & concrete. Both form a unity. Itself part of society, the practice of philosophy is an integral part of the philosophical life, involving theory & practice. To self-realize the spirit of wisdom, the philosophical life calls for spirituality, or the art & science of addressing consciousness, thought, affect, volition and sensation.

The necessity of such a "practice of philosophy" derives from wisdom's aim to reduce alienation & disorientation, promoting :
1. (inter)subjectivity : self-awareness, consciousness of being a subject, a someone rather than a something, the First Person perspective, ability to interact constructively with others, implying openness, flexibility, respect, tolerance etc. ;
2. cognitive autonomy : capacity to think rationally, to self-reflect, to be able to formulate ideas independent of traditions, to integrate instinct & intuition in a rational way, dialogal capacity, using arguments to posit opinions ;
3. balance : awareness of the importance of happiness, justice and fairness in thought, feelings and actions, communicational action, building peace, mutual understanding & acting against extremes like fundamentalism, nihilism, virulent scepticism, closed dogmatism, exaggerated relativism, blind materialism, naive spiritualism, etc. ;
4. intellectual & spiritual concentration, sharpness & depth : creative capacity, originality, inventiveness, novelty, and the spiritual exercises aiming at wholeness, leading to increased mental concentration, intellectual acuteness and extent of interests and compass.

The abortion of the practice of sapience by the academy is recent. Let it be rejected. In the light of criticism (cf. Criticosynthesis, 2007), academic philosophy is both theoretical & practical :
• theoria : the philosophy of the theory of philosophy :
(1) normative (judicial) : logic, epistemology, ethics & aesthetics ;
(2) descriptive (speculative) : metaphysics incorporating an ontology of process, cosmos, life & the human ;
(3) philologistics : history of philosophy, hermeneutics, linguistics, philosophy of language, neurophilosophy, etc. ;
• the praxis of wisdom : the philosophy of the practice of philosophy : namely the tools to apply philosophy in society, in terms of psychology, sociology, politics, economy, advising, counselling, self-realization, etc.

The "theoretical" activity of the philosopher (reading, writing, teaching) needs to be complemented by the "practical" activity of the same philosopher (listening, advising, mediating, meditating). Without sufficient input from real-life & real-time philosophical crisis-management, the mighty stream of wisdom becomes a serpentine of triviality and/or a valid pestilence of details (pointless subtlety). This is in-crowd philosophy, elitist and mostly useless.
Working together, contemplation (theory) and action (practice) allow wisdom to deepen by the touch of a wide spectrum of different types of interactions. Risks are taken. Opposition & creativity (novelty) must be given their "random" place in the institutional architecture. One must teach philosophers how to integrate themselves into the economic cycle. Kept outside the latter, state-funded philosophy rises. This situation does not benefit philosophy, quite on the contrary. Moreover, it also limits the possibility of entering wisdom, the mind witnessing the ultimate nature of all possible phenomena. In doing so, the absence of a practice of philosophy hinders the development of philosophical thought, both in terms of its depth & extent. Indeed, when human beings in general, and philosophers in particular, only care for their own petty little kingdoms of trust and act accordingly, their minds lack the necessary pliancy to grasp, assimilate & integrate the truth concerning the nature of phenomena. The ability to be flexed without breaking comes from being able to adapt to different conditions. This capacity goes hand in hand with a calm mind cherishing others more than oneself. By eliminating sapiential activities, the stuck, strained mind -accommodating itself first- loses the capacity to swim even if it wishes to do so. And so when these minds do enter the water, their views immediately drown. Only through love & compassion, the wish & activity of causing all possible others to be happy, does the mind slowly open up. Only with this pliant & calm mind may one try to take in the wisdom realizing the ultimate nature of things. Conventional truth, in particular functional interdependence, the bedrock of method & compassion, must be grasped before the wisdom witnessing phenomena as they are may be discovered. One cannot philosophize with a mind stuck in the mud of self-cherishing & self-grasping. Doing so leads to nothing, except to a waste of precious time & good effort. It furthers no merit, reward or solution.

Ethics is thus a necessary prerequisite for the ultimate success of metaphysics in general and ontology in particular. It is an integral ingredient making the mind capable of embarking with conventional truths, bringing them to the other shore of ultimate truth. Without compassion, wisdom cannot be found. Without wisdom, compassion is inefficient, i.e. does not liberate from suffering. Reason without ethics is crippled, like seeing with one eye. Such reasonings are like poison in a pot, prompting the smart to put nothing in it ... Of course, without compassion, ultimate truth can be approached with the same ultimate analysis, but the resultant view on ultimate nature, lacking the functionality of conventional reality, will be nihilist. Then, ultimate nature becomes a "noumenon", a limit-concept, not a nondual discovery of the natural light of the mind. Emptiness is reduced to a void viewed as an absolute nothingness, a mere formal condition. To miss this important methodological role of ethics in ontology, so stressed in the East, particularly in the Buddhadharma, is to neglect the actual practice of philosophy to the advantage of a crippled theoretical definition of "wisdom" as "a theory on the totality of being". This mere academism is sterile, even in its subtlety.
It does not lead to liberation, while ultimate truth sets us free from the obscurations caused by the "Three Poisons" of ignorance (not knowing ultimate nature), desire (grasping & clinging to sensate and/or mental objects) & hatred (rejecting & disliking this or that sensate and/or mental object).

§ 3 Unity & Harmony of Mind.

The mind is able to bring the manifold under unity. It does so by integrating separate units and by realizing a creative unison, an upgrading synthesis. This "Gestalt" is more than the mere sum of its components. Complex aggregates ensue. And these are not disordered or amorphous. On the contrary, architectures and meaningful patterns are everywhere apparent in Nature. Even electrons are ruled by Pauli's exclusion principle, by which no two electrons in an atom can occupy the same quantum state at the same time, accounting for the shell structure of atoms and the observed patterns of their light emission. The organization or code of these architectures is called "information". Just as noise is not a signal, well-formed information has little redundancy. A compression of structure is aimed at ; an elegance, a symmetry, a play of interdependence and interrelationality, highlighting the togetherness of all phenomena of Nature. These conditions are not part of logic per se, but pertain to aesthetics, the judgment of beauty (cf. Criticosynthesis, 2007, chapter 5).

The metaphysical mind needs more than correctness, validity & pliancy. A totalizing, all-encompassing intent must be addressed. "Tí tò ón ?" or "What is being ?" already refers to this over-arching zeal of metaphysics. While for Aristotle, this "being" was "substance", process metaphysics posits actual occasions to be the final building-blocks of that which is, i.e. the set of all possible phenomena. The totality of possibilities is thus aimed at. These are necessarily organized, for, to be arguable, metaphysics needs to be well-formed. Here forms of harmonization enter the picture, for information is an architecture, i.e. a structure, form or mathematical representation of process. Harmony is a relatively continuous balance between phenomena, whereas forms of harmony are archetypal ways of balancing out. Balance can be weird, awkward, odd, strange, bizarre, absurd, grotesque, bombastic, exaggerated etc. This evokes the pair symmetry and symmetry-break. Absence of balance is not a form of harmony, but a disharmonization. In a mind able to speculate well, unity & harmony interlock. This final element capacitates the mind sufficiently to entertain metaphysics. Accepting correct reasoning and valid scientific knowledge, training mental pliancy and fostering what brings unity & harmony, the mind is open, deep, sharp, acute & clear enough to be at peace and speculate.
without an object nothing is thought - without a subject nobody thinks
necessity of reality : idea of the REAL - Factum Rationis - necessity of ideality : idea of the IDEAL

Epistemology : knowledge - truth
object of thought - subject of thought - research-cell
Practical : opportunistic logic - the production of provisional, probable & coherent empirico-formal, scientific knowledge we can hold for true

Ethics : volition - the good
coordinated movement & its consequence
Transcendental : free will - duty - calling
Theoretical : intent - conscience - family - property - the secular state
Practical : persons - health - death

Esthetics : feeling - the beautiful
states of sensate matter or mental objects
Transcendental : consciousness pursuing excellence & exemplarity - sensate & evocative aesthetic features
Theoretical : aesthetic attitude - objective art, social art, revolutionary art, magisterial art
Practical : subjective art, personal art, psycho-dynamic art, total art
judgments pertaining to what we hope others may imitate, namely the beauty of excellent & exemplary states of matter

§ 4 Ultimate, Non-Relative Truth.

On the one hand, ontology, in absolute terms, aims to know the ultimate nature of phenomena. Thus it reveals an ultimate truth. But, as we shall see, transcendent metaphysics is nondual, ineffable & apophatic (without tales). It merely points (as does poetry) to something it cannot denote, designate or conceptualize. This experience cannot be explained in positive terms, for the infinite cannot be contained by the finite. Easily broken by absolute truth, words are unworthy vessels. Conceptualizing it, we are left with nothing else but a non-affirmative negation. Needing a conceptualized framework, only immanent metaphysics is left. But the quest of its periphery does not unveil a transcendent Creator fashioning Nature "ex nihilo", but an intelligent "pneuma" or "Anima Mundi", an Architect limited by the creative freedom at work in Nature. To cognize this ultimate mode of existence, i.e. the natural, spontaneous, uncontrived, unfabricated abiding of phenomena, is to know their ultimate truth. So ultimate truth is not an "entity" above or behind objects, as in Platonism, but merely their natural condition, i.e. their suchness/thatness or what they are in and by themselves. Although open to all conscious beings, this absolute state of each and every object is -unfortunately- realized by only a few. The reason is simple : to eliminate the countless delusions obscuring the mind is very difficult, demanding the ongoing discipline of study, reflection & meditation. The latter asks for renunciation, compassion and the wisdom-mind realizing the true nature of phenomena. Hence, transcendent metaphysics is not impossible sui generis ; it is merely obstructed by ignorance (emotional & mental obscurations).

On the other hand, ontology does not turn its back on the conventional truth of the nominal, "common sense" hallucination of designated & named appearances. Quite on the contrary. The ultimate exists conventionally. There are no "ultimate objects" next, behind or beyond conventional objects, but each and every conventional object has a veiled, obscured, concealed absolute nature which is its ultimate truth. Unbridled by criticism, these misrepresentations of conventionality lead to mistaken, confused agreements, opinions, notions, ideas and/or theories relating how things exist as "real", "extra-mental" substances "out there" (as in realism), and/or as "ideal" "intra-mental" selves "in here" (as in idealism).
But this does not invalidate them as conventional, functional objects. They are valid but mistaken. As the object of science, conventional truth designates the factual nature of relative, fallible empirico-formal statements arrived at through experiment & argument. In an immanent metaphysics, conventional truth, on the basis of such statements of fact, speculates about being as such, the cosmos, life and consciousness. Being non-factual, it can only argue (it cannot test). Its arguments are more than mere perspectives ; they slowly realize greater and greater clarity and comprehensiveness, finally moving to the periphery of its field. But these same conventional objects, valid insofar as their functions are concerned, are mistaken because they conceal their true nature. Indeed, the illusion of their own-power is not eliminated by conventional analysis, quite on the contrary. Physical objects are defined as isolated & separate. A pivotal mental object like the self is reified and so deemed substantial ! To cognize designated facts conceptually is to know conventional or relative truth. Although available through reason, it too -as valid science- is a rare occasion. Conventional falsehoods are far more common and easier to adhere to. Science aims at valid but mistaken empirico-formal truth. Immanent metaphysics tries to acquire valid but mistaken conventional speculative truth. Transcendent metaphysics points to ultimate truth, beyond validation and unmistaken.

§ 5 Conventional, Relative Truth.

Either entities are posited in a conventional act of cognition or are revealed by the wisdom realizing the ultimate status of phenomena, implying an uncommon, implicit, hidden dimension of the mind, one able to discover and perceive ultimate nature directly. This unveils the absolute, the ultimate, i.e. things as they are. This is their suchness or thatness. Because, conventionally, human beings only cognize by way of conceptual mentation and/or sensation, the conditions determining mental & sensate objects co-determine what we identify as a conventional entity. We thus prelimit objects in terms of the physical laws of perception, the psychophysical phenomenon of sensation & the known cognitive mechanisms of positing mental objects. Conventional truth must accept the theory-ladenness of our observations, for no amount of objectivity eliminates subjectivity. In fact, the latter cannot be taken away. As long as object and/or subject are not hypostatized, duality by itself poses no problem. But conventional truth does reify both object and subject of cognition. Reified duality is always problematic. Conventional, conceptual thought and its relative truth split every act of cognition into two independent & separate sides, juxtaposing a subject, defined as an object-possessor, and an object, posited or designated by this endowed cogito. However, both are mutually dependent and inclusive. Without subject, there is no object to possess. Without object, there is no positing, grasping, designating cogito. Moreover, all subjects are also the object of another subject. In such a discursive, concept-based cognition, objects, phenomena, events or knowable entities are either sensate or mental. Sensate objects are the product of perception and cognitive interpretation. Thoughts, feelings, volitions and consciousness are mental. The difference becomes very clear when considering dreams. Although the eye-sense is dormant, visual images do appear.
These are purely mental and are not caused by changes in the sensitive surface of the retina. Relative, conventional truth, or valid knowledge about how things appear (not how they are in and by themselves), is the concern of science. The latter involves the "craft of magical conjurations", manipulating determinations, conditions, functions & interdependent (re)organizations. Although science may be sophisticated, we cannot, with the standards of the conceptual mind, discover the ultimate nature of things, but only their appearance.

By designating, conceptual thought fixates objects. In doing so, it allows objects to appear as existing from their own side, as substances existing according to their own characteristics. Even insofar as theoretical epistemology identifies this ontological illusion and eradicates its confusing influence on the foundations of epistemology itself (refusing to ground the possibility of knowledge in either object or subject), epistemology endorses the methodological need of applied epistemology to take objects and subjects at their face value, i.e. as if existing from their own side, independent from each other, without referent, as commonsense dictates. This reifying characteristic of conceptual thought & science tries -in vain- to transform interdependent & impermanent phenomena into fixed, permanent, independent & substantial things. Although criticism must conceive facts as theory-independent (if not, by lack of object, knowledge itself would be impossible), conceptually, we can never be sure whether this is actually the case or not. Only non-conceptual, nondual wisdom-mind is able to definitively discern or apprehend ultimate truth, the suchness and thatness of all phenomena.

Conceptual thought implies categorial designation and this goes for both sensate & mental objects. Hence, it cannot be conceptually known whether conventional objects, existing in a conventional, functional way, on top of this also exist according to their own essence, nature, existentials or substantial characteristics. They are designated dependent on their parts, for they are all compounds. Theoretical epistemology must accept that facts also represent reality-as-such, but is not equipped to take a look "behind the surface of the mirror" and then conceptualize how things are there. Concepts are not able to pierce the membrane or lift the veil. Concepts are concealers. Therefore, although objects exist in a conventional way and thus make things work, both realist & idealist metaphysics -claiming sensate objects represent reality-as-such and/or mental objects represent the true order of things as they are- are conventional falsehoods, and this despite their playing a considerable role in applied epistemology (cf. methodological idealism versus methodological realism), as well as in the commonsense, nominal view of valid science (not to speak of invalid conventional knowledge).

Confused because of its concordia discors, conceptual reason (in the pre-rational, proto-rational, formal, critical & creative modes of cognition) eclipses ultimate truth and designates objects to appear as this-or-that. Producing consensual illusions, science is not equipped to unveil reality-as-such. On the level of sensate objects, conceptual interpretation is never put to rest, while mental objects are merely (inter)subjective, and thus dependent on context. Moreover, reifying duality is never relinquished.
To end this confusion, the ante-rational antecedents as well as the mechanisms of conceptual cognition must be understood, eradicating ontological illusion. This is the work of critical thought. It yields the relative truth of duality, as between sensate & mental objects, between experiments (testing) & discussions (argumentation), between the theory-independent & the theory-laden side of facts, between correspondence & consensus as aspects of conventional truth, etc.

In creative thought, i.e. in the mode of conceptual cognition used in immanent metaphysics, the gradual process of ultimate analysis, resulting in an approximate ultimate -the identity between interdependence and absence of substance- causes the ontological, substantializing, reifying strongholds of the duality of mind to finally collapse, opening it up to the discovery of the nondual, immediate, actual wisdom-mind apprehending ultimate nature. This wisdom is not produced, created or caused, but always given as the fundamental (naked) potential of the mind. Although ultimate analysis does not necessarily produce or cause wisdom mind, it works as a valid and potent preparation, as a gateway to ultimate truth, an approximate, contrived (fabricated) ultimate. This is the ultimate purification of the conceptual mind. Being introduced to wisdom mind is however immediate and thus non-gradual, uncontrived and direct. So, as often overlooked, on the side of the subject of experience, the via negativa yields a positive result : the possibility of a nondual dimension of mind beyond reason (formal & critical) & intellect (creative). On the side of the object, this puts down a clear message : the ultimate nature of phenomena lies beyond the conceptual and can therefore not be grasped in any of the conceptual modes of thought (pre-rational, proto-rational, formal, critical & creative). One needs to move ahead ! This raises the question : What ultimate truth does wisdom-mind know ?

§ 6 Ultimate Analysis.

In absolute terms, ontology claims to establish the ultimate truth about every existing thing, which is the same as directly cognizing the ultimate state of phenomena. This ultimate truth, the wisdom realizing what truly is, takes as object things as they are, not as they appear. As Kant and neo-Kantianism have demonstrated, reason & science cannot penetrate further than appearing phenomena. Hence, from their side, ultimate truth is a "noumenon". So although conceptual thought is not equipped to penetrate reality-as-such, it is nevertheless possible to gradually loosen its grip on cognition and prepare the ultimate experience of the suchness of all things, including the mind. This is not an introduction, but a springboard establishing an approximate ultimate. It is a purification of the mind. Dissolving the hard core of conventionality and facilitating the non-gradual "jump" to the other shore of wisdom, certain conceptualizations end the reifying procedures (instantiations) of discursive thought. Thanks to this, the direct perception of the luminous core of the mind, the ultimate, always present nature of mind and of phenomena, may arise. This ultimate analysis (cf. Ultimate Logic, 2009), the gateway to ultimate truth, is a cognitive protocol aiming to arrest the reification of the conceptual mind by means of concepts and, with the greatest subtlety, prepare nonduality, or absence of concepts.
It accommodates the direct experience of the ultimate nature of phenomena, of things as they are, by way of a totalizing generic idea of that nature. Regard this as an ultimate logic using concepts to clear away the reifying ground, preparing the realization of the unsubstantial, process-based nature of phenomena, i.e. their lack of intrinsic "thingness" or substance ("shûnyatâ") manifesting as their interdependence or dependent-arising ("pratîtya-samutpâda"). This unity of emptiness and dependent-arising is defined as "full-emptiness", a term encompassing all possible phenomena. In this ultimate logic, concepts pertaining to the fundamental structures of conceptual thought are manipulated to end reifying conceptualization, collapsing the conceptual mind under the weight of its reifications, demolishing substantializing theories & mental constructions. As certain conceptualizations stop the confused mind (as it were purifying it), leading to (not causing) the direct experience of the ultimate, it is hence not the case that conceptuality always engenders illusion. Otherwise, science and rationality would play no vital role in the cognitive emancipation of human beings, while they do. Ultimate analysis stops the substantial instantiation, and so makes the conceptual mind exclusively run on the existential instantiation. In such a mind, sensate & mental objects do rise, but without any further conceptual elaboration. They arise, abide and cease and without any further ado.

§ 7 Immanent & Transcendent Metaphysics.

Ontology operates a "double coding" :
(a) Ultimate truth or unmistaken absolute knowledge, the object of transcendent metaphysics, unveils the ultimate nature of phenomena. Directly perceived by an absolute, nondual, ineffable cognition (called "prehension"), it reveals wisdom at its highest possible level, the level of suchness/thatness.
(b) Relative truth or valid but mistaken conventional knowledge, the object of science & immanent metaphysics, deals with the conventional reality of things, grasped in empirico-formal statements of fact (called "apprehensions") considered by all concerned sign-interpreters to be true, even if this only appears to be the case.

Invalid conventional knowledge or common falsehood, while quite rampant, is not considered here. The obstinate determination, tenacity or degree of abidance characterizing the dreamlike mirage of appearances backs conventional truth. The latter manifests in science as facts we can hold for true and in immanent metaphysics as valid speculations about the totality of what convention considers to exist. The major immanent leaps to consider here are existence itself, the cosmos, life & consciousness, i.e. answers to the questions : Why something rather than nothing ? Why cosmos ? Why life ? Why sentience ? Besides seeking ultimate truth or the ultimate status of phenomena, preparing the transcendence of conceptual thought by ending reification, thus revealing the potential suchness of the mind, immanent metaphysics, when invalid, signals our ability to cover up our inborn cognitive limitations by brontosauric theories on substance. Reifying, substantializing and so turning ideas into ultimate things or self-sufficient grounds, such transcendent ontologies forget the limitations of conceptual cognition and invalidate their position by not taking reason and science as their guide.
In doing so, they do not even accommodate important relative truths, like the influence of ontological illusion on knowledge in epistemology. The extremes of reification designate an absolute object (like in theism) and/or an absolute subject (a metaphysics of an "immortal soul", as in Vedânta). This grand story on the substance of the soul (the "âtman") accommodates a return to a static concept of the Divine, contradicting ultimate analysis. Moreover, such immanent metaphysics are often ill-informed about the objects of science. For example, they mostly do not integrate the special features of very large (relativity) and very small (quantum) objects. Nor have they grasped the importance of non-linearity (chaos).

In practice, illusion (things appearing differently than they are) works. Circumstances, people, things, sensations, thoughts, feelings, volitions and conscious meaning appear solid, unchanging and graspable, either as "realities" which "exist out there" or as "idealities" designated as part of the mind "in here". But under ultimate analysis, their material, informational and sentient (conscious) operators are compounds or aggregates (of actual occasions) changing constantly. Nowhere can a stable, unified continuum be identified. Appearances seem independent existences, but under ultimate analysis this can nowhere be found. What seems a substance is always a process ... So, could we be tempted to claim that the "substance" of reality, its ultimate truth, is lack of substantiality ? Describing the ultimate nature of phenomena as unsubstantial is attributing a positive, conceptual content to the ultimate, characterizing the nondual as without anything, suggestive of a void or absolute nothingness. This leads to nihilism. To the extent we say phenomena are unsubstantial, our scientific & immanent metaphysical knowledge is relative. From the point of view of ultimate truth, there are no phenomena to be called "unsubstantial". Nothing can be said about the ultimate nature of phenomena. Nevertheless, both the direct, nondual cognition & the experience of full-emptiness, the simultaneity of absence of substance and presence of interdependence, i.e. the suchness/thatness of all things, are indeed possible.

Conventional appearances do not reveal the ultimate nature of phenomena. They conjure a dreamlike, echolike world of functional interdependences. Upon these, the deluded mind projects (imputes, posits, attributes) the limit-concepts of reality and/or ideality, turning facts into real things (or physical objects) and thoughts into real ideals (attended by a substantial self). These substantial things only seem stable, for ultimate analysis shows they are not. For example, geological formations seem solid, continuous, lasting & permanent, but they are not. What then to think of the so-called lasting qualities of direct sensate & mental objects in general and our sense of selfhood in particular ? All are compounds and so impermanent. Insofar as conventional truth is concerned, the tenacity of functional interdependence -expressed as the regularity of Nature- is valid. Its degree of abidance is obvious. Appearances exist functionally and conventional existence is a fact. Things exist conventionally ; there is something rather than nothing. Objects exist as imputed by the mind, but -in case no minds are present- exist as the result of fleeting determinations & conditions. There is not a single atom in existence determining its own ground ! All phenomena are other-powered.
Nihilism is refuted by accepting that there is a "base of designation" which, existing interdependently in Nature, is extra-mental. In epistemology, this acceptance is a norm necessary to be able to think the possibility of knowledge, but is not something "found", otherwise ontological realism would ground knowledge, leading to scandalous contradictions. Staying within the boundaries of conceptual thought, i.e. the pre-rational, proto-rational, formal, critical and creative modes of cognition, valid immanent metaphysics mostly serves relative, conventional truth. From epistemology, it receives the limit-concepts & conditions necessary to be able to conceptualize the two sides of its concordia discors, namely the parts played by object & subject. From science, it gets the parameters to speculate about the reality of existence as a whole, about the cosmos, the emergence of life and the miracle of consciousness. Hence, metaphysics has two faces. One is turned to conceptual thought and works out an immanent perspective on what is, the other to the ultimate suchness of all things, approaching this by way of nondual, non-conceptual cognitive prehensions. Confusing this distinction and addressing the ultimate by way of concepts is the path of falsehood in transcendent metaphysics, while the path of truth regarding suchness/thatness is the wisdom-mind directly realizing the full-emptiness of all phenomena, i.e. the union of a universal lack of substance and the all-comprehensive interdependence between all things.

§ 8 Objective & Subjective Immanent Metaphysics.

Objectively, as a heuristic, or a general, common sense formulation guiding investigations, valid immanent metaphysics inspires science. It does so by offering a "grand story" about the world and by expounding a thematic itinerary of sorts. Answering the question : "Why something rather than nothing ?", two extremes are avoided : being is not posited as eternal, continuous, autarchic, unchanging, substantial or essential, i.e. as non-referential. This is the (Platonic) fallacy of eternalism. Neither is the possibility of ultimate truth denied and fundamental "Dasein", or nature of mind, reduced to mere "Sosein", or the "truths" of the worldly continuum of valid but mistaken interdependent phenomenal aggregates. This is the fallacy of nihilism, in vain avoiding transcendent ontology. While there is no substance, there is some thing. Conventional existence is not denied. Things appear to exist as spatio-temporal, intersubjective formations with their functions, conditions & determinations. Absolute existence is not denied. The ultimate nature of phenomena is not what appears, and this negation is absolute & non-affirming, i.e. negating the realm of appearing phenomena as a whole (while relative negations always affirm something else, as "not-male" implies "female" and "not-evil" implies "good"). The speculative study of functional interdependence calls for an account of the origin of the cosmos, the beginning of life and the meaning of human life. This order is imperative. After affirming there is something rather than nothing, the actuality, nature and meaning of this something is at hand. For anything to be, there must be operators functioning together in a spatio-temporal framework. How did this cosmos we find ourselves in happen ? Next we reason that, for anything to be alive, the cosmos must cause growth & gestation. How is life possible ? For anything to be human, culture must be present. What about consciousness & meaning ?
Subjectively, valid immanent metaphysics invokes the object-possessor, and its various sensate & mental objects, speculating about the human mind, freedom, liberty, solidarity, democracy, spirituality, etc. This gives way to vast domains : consciousness, thought, feeling, action & sensation. The conventional, speculative "truth" of immanent metaphysics is true in a provisional sense only. It is valid insofar as its arguments are clear, sound and convincing. So immanent metaphysics literally "stands next" to science ("physics"). It speculates in terms of totalized panoramas, incorporating crucial theories belonging to both physical and human sciences. These are intended to inspire the inventiveness and creativity of scientists, advancing discovery and expanding our knowledge-horizon. Immanent metaphysics, insofar as the arguments backing its speculations are warranted by empirico-formal statements of fact, is therefore the ally of science. Insofar as conceptual thought remains substantialist, cherishing invalid forms of immanent metaphysics, like ontological realism and/or ontological idealism, conventional truth is reduced to delusional opinions and conventional falsehoods. This involves the perversion of reason (cf. Kant's "perversa ratio").

§ 9 The Itinerary of Ontology.

• conventional, immanent ontology : speculative totalization of (a) the sensate conditions involving space & time and the forces operating between material, physical actual occasions (particles, waves & fields), (b) the information, formal conditions or architectures pertaining to actual occasions & (c) the meaningful symbolizations of conscious entities ;
• ultimate logic : given the immanent sphere of sensation & mentation, as well as the totality of all realities & idealities, both sensate and mental objects are analyzed to discover whether they truly exist as they appear, i.e. as substances from their own side. As these cannot be found anywhere, one cannot posit objects to possess an inherent, essential existence ;
• absolute, transcendent ontology : beyond the conventional sphere, conceptual symbolization stops, and a gap, abyss, isthmus or "jump" is suggested. Direct, nondual, non-conceptual intuitive cognition is ineffable, has no mental residue and is one with "great compassion" ("mahâkarunâ"). According to the ultimate logic acting as an approximate ultimate to wisdom-mind, refuting all affirmative, kataphatic statements about suchness/thatness, nothing substantial can be said about this pinnacle of human cognition, cultivated in meditation, and unveiled in grand spiritual poetry. Wisdom is a direct encounter with the luminous singularity of the mind itself, with its own ever-enlightened nature.

To arrive at this speculative totalization, ontology needs a first principle. Monist logics privilege a single principle or monad. Materialism & spiritualism are historical examples. The former understands matter as the self-sufficient ground of the edifice, while the latter posits spirit as the principle. The advantage of monism is its unity. The system of ontology is erected upon a single ground, and so one does not need to explain any ontological differences between entities, for there are none. On the most fundamental level of reality, all phenomena share the same nature. Logically, such a solution automatically accommodates simplicity and the ideal of finding a single principle explaining the unity of science. A multiplication of founding principles is absent, allowing us to grasp the manifold with a single concept.
Materialism argues physicality to be this concept. Several reasons can be advanced. As Aristotle already remarked, "substance is thought to be present most obviously in bodies" (Metaphysics, VII, ii.1, my italics). If this is considered correct, then physicality must come first and so be promoted to the status of founding monad. Kant too privileged the senses, rejecting intellectual perception as not belonging to most men. By doing so, the impact of stimuli on the sensitive areas of our sense organs is given a higher ontological status than mental objects, deemed to be derived from the former. Sense data are turned into the rock-bottom of science. It eludes these thinkers that knowledge cannot be divorced from conscious apprehension, i.e. one cannot observe any object without an observer, and the latter does more than merely passively register the incoming sensuous flux, but co-determines it. Indeed, all observation happens in a framework of theoretical connotations at work from the side of the subject or subjects of knowledge in the act of observation. For alternative reasons, spiritualism thinks consciousness to be the first concept. Hegelianism is a modern, dynamical version of Platonism & Spinozism. Both fail to plunge deep and discover a more fundamental level. Criticism leaves these solutions standing naked (cf. A Philosophy of the Mind and Its Brain, 2009).

Non-monist logics always introduce more than one fundamental ontological principle (a duality, triplicity, quaternio, etc.). Duality, with its powerful reflective capacities, introduces otherness. This is a first step outside the monadic & monarchic continuum, adding radical alteriority as a new unity. But herein lies the weakness of dual systems, for now two principles are generated. How to reconcile their ontological difference in a single Nature ? If the ontological difference cannot be reduced to a more fundamental stratum, then the variety of fundamental ontological principles will cause ontology to miss unity, making it unclear how these two or more principles have to be thought together without breaking up the world into as many pieces as there are principles. Of course one may single out one principle and consider the others as merely illusions or dependent on the former, though not to the point of being included by it. Platonism is such a solution. The world is divided in two ("chorismos") without giving the same ontological & epistemological importance to these two divisions. The World of Becoming, due to its variety, multiplicity and change, is not rejected, but merely made dependent on the World of Ideas. So although apparently dualistic, Plato's solution is a monism in disguise.

Building on Platonic ontology, the most influential ontological dualism of recent times was introduced by Descartes. But a radical difference must be noted. Plato considered the world of becoming a "shadow" of the world of ideas, the latter being a paradigm for the singular things participating in it ("methexis"). For him, becoming participates in Being, and only Being has reality. Descartes introduces three different substances, each with its own distinctness leading up to a substantial difference : the ego cogitans, extension (matter) & God. The Greek depreciation of matter is gone. As God is transcendent, mind & matter are the fundamental substances of the world. Precisely because Descartes defined these two in terms of substance, implying objects endure from their own side, independent & separate from other objects, a pivotal problem arose.
How can two ontologically different substances, sharing no common ground (except God), work together ? Handicapped by this ontological dualism, Cartesianism was not able to deal with this, leading (after the échec of German Idealism) to a reduction of mind to matter, and to a physicalist understanding of consciousness.

Returning to the elegance of monism, and rejecting both materialist (physicalist) and spiritualist essentialism, let us ask : What is the fundamental concept bringing all phenomena under unity ? Reject substantialism or essentialism, for can a single mental or physical substance be posited, i.e. a "self-powered", autarchic object existing from its own side, independent & separate from all other objects, one existing inherently ? The rejection of essentialism is the acceptance of the premise of process thought : there are no substances, there is no "substance of substances", and so all phenomena are "in process", i.e. ever-changing, impermanent and interdependent happenings (occasions neither independent nor separate from other occasions). Moreover, "phenomena" are actual (not past, nor future) happenings hic et nunc. There is no "world" behind the "world", no "Jenseits". Process thinking focuses on the things in their actuality. Thinking process & actuality raises the question of the unit or standard of process. Before describing processes, their arisings, abidings & ceasings, as well as their efficient and final determinations, we have to arrest the first concept of this process-based monism, the ontological principle.

Processes (P) go the way of actual happenings, concrete actual occasions (o1, o2, ..., om). Every existing object x is characterized by a set of actual occasions Ox = {ox1, ..., oxm} making x unique. This set constitutes the actual continuum of x. Everything outside the occasion-horizon of this continuum does not constitute x. Can we do more than accept actual occasion ox as a logical primitive, a given ? Following Whitehead (1861 - 1947) and his "quantum ontology" (Process & Reality, 1929) : (a) actual occasion ox, an instance of the set of actual occasions O = {o1, ..., om}, is an atomic & momentary actuality characterized by "extensiveness" ; (b) event ex, an instance of the set of events E = {e1, ..., en}, is the togetherness of actual occasions ; and (c) entity enx, an instance of the set of entities EN = {en1, ..., enp}, is the togetherness of events, while "entity" and "object" are synonymous. Extensiveness is what all actual occasions have in common. This extensive plenum of the actual continuum of each actual occasion is such that entities and events are actual occasions interrelated in a determining way in one extensive continuum, and an actual occasion is a limiting type of an event with only one member. Nature is built up of these actual occasions. Events are aggregates or compounds of actual occasions. Entities are aggregates or compounds of events. When an aggregate or compound forms a society, a higher-order self-determination is at hand, a marker to distinguish non-individualized & individualized aggregates (or societies).

Monism coupled with essentialism has difficulty explaining the manifold, its multiplicity, variety, differentiation, complexity, richness & interconnectedness. This approach cherishes a single static factor. So certain aspects of the manifold (of Nature) cannot be explained. The reason is clear : no substances are found to exist. The combination fails because absolute autarchy & self-determination cannot be successfully argued.
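To fix ideas, the nested scheme just described (occasions composing events, events composing entities, an occasion being the limiting case of a one-member event) can be rendered as a small illustrative sketch. The following Python fragment is only a toy model with hypothetical names, not part of the ontology itself ; it merely mirrors the definitions given above.

# Illustrative sketch only : a toy rendering of the scheme described above
# (actual occasions -> events -> entities). All names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class ActualOccasion:
    """Atomic & momentary actuality, characterized by extensiveness."""
    label: str
    extensiveness: float = 1.0  # placeholder for its extensive quantum

@dataclass
class Event:
    """A togetherness (aggregate) of actual occasions."""
    occasions: List[ActualOccasion] = field(default_factory=list)

    def is_limiting_case(self) -> bool:
        # An actual occasion is a limiting type of event with only one member.
        return len(self.occasions) == 1

@dataclass
class Entity:
    """A togetherness (aggregate) of events ; 'entity' and 'object' are synonymous."""
    events: List[Event] = field(default_factory=list)

    def actual_continuum(self) -> List[ActualOccasion]:
        # The occasions Ox = {ox1, ..., oxm} making this object unique.
        return [o for e in self.events for o in e.occasions]

# Usage : a tiny aggregate of two events built from three occasions.
o1, o2, o3 = (ActualOccasion(f"o{i}") for i in (1, 2, 3))
x = Entity(events=[Event([o1, o2]), Event([o3])])
print(len(x.actual_continuum()))       # 3 occasions in the continuum of x
print(x.events[1].is_limiting_case())  # True : a single-member event

Aggregates and individualized societies would, on this reading, simply be further nestings of the same pattern, with an added marker for the higher-order self-determination mentioned above.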
Thinking a single dynamic factor solves many of the problems. In the West, process-monism is rather recent. We find traces of it in Greek philosophy (Heraclitus) and a first draft in Leibniz. Elaborated by Whitehead, Process Philosophy emerged.

§ 10 The World-Continuum or World-System.

Classical Occasionalism, first propounded by the tenth-century Muslim thinker al-Ash'arî and found in the writings of Cartesians Johannes Clauberg (1622 - 1665), Arnold Geulincx (1624 - 1669) and Nicolas Malebranche (1638 - 1715), rejects the idea that substances entertain any kind of relation. This is affirmed by Nâgârjuna in his A Fundamental Treatise on the Middle Way (Mûlamadhyamakakârikâ, 2nd century CE, chapter XIV), in terms of an analysis of "connection" ("phrad-pa"), denoting the relation between components in any compounded phenomenon as non-substantial, but also the relation among their conditions & determinations compounding them as non-substantial. This points to the absence of reification at any level of ontological analysis. Even the functionality of the efficient determinations characterizing phenomena, their location in a causal and mereological nexus, defining the logical properties of the relation of part and whole, is not permanent, autarchic or existing from their own side.

Of course Classical Occasionalism had another agenda. Using the Cartesian substances "matter", "mind" & "God", it elaborated upon the consequences of ontological dualism, claiming that finite things can have no efficient causality of their own. Substances cannot be the efficient causes of events. In ontological monism, the question how two or more substances relate is a non-issue, for only one substance prevails. But as soon as the numerical singularity of the fundamental principle (the monad) is relinquished for dualism, thinking change and interrelatedness brings on the question of how different kinds of things relate. Classical Occasionalism rejects the possibility of any kind of relation whatsoever. Different substances can a priori never bridge their natures. All physical & mental phenomena are merely "occasions" or happenings on their own, devoid of any interconnectedness and efficient power, utterly incapable of changing themselves. Physical "stuff" cannot act as cause of other physical "stuff", for no necessary connection can be observed between physical causes and their physical effects (a view returning in the writings of David Hume, for whom causality and other lawful determinations are merely psychological habits). Moreover, because mind and brain are so utterly different, the one cannot affect the other. Hence, a person's mind cannot be the true cause of his hand's moving. The mental cannot cause the physical and vice versa. Ergo, as events do exist, they must be caused directly by God Himself. For what God wills has to be taken to be necessary. So far this remarkable view.

Let us take on board the idea that substances cannot relate to each other. It would seem, then, that one should interpret the view that substances do not exist as affirming that all phenomena are interdependent processes. The conditions and determinations defining this interdependence or universal togetherness of all possible actual occasions are themselves co-existent with this stream of actual occasions making up what exists hic et nunc. They do not exist "outside" this dynamical stream of actual occasions, forming aggregates and societies of actual occasions, events and entities.
Like a swimmer, they are adaptive archetypes, intelligently altering their format while performing with style, preventing their momentum from drowning (dying out). An actual occasion is an atomic & momentary actuality characterized by "extensiveness". Although indivisible, an actual occasion is not a "little thing", but a meaningful (creative) momentary differential change "dt", explained in terms of efficient & final determinations. These act as the two state-vectors of all changes in all the processes involving all actual occasions conserved in the interval or isthmus "dt" of the present moment of the world. The structural analysis of actual occasions does not reflect a temporal sequence, for the two state-vectors of process are simultaneous. From the past, efficient determinations enters actual occasion x. Because of its iota of self-determination, x makes a choice (a minimal indeterminacy or "clinamen"), and this creativity enters the efficient determinations of the next actual occasion. In this way, a single actual occasion evidences the smallest possible degree of sentience. Aggregates form and these streams are interlinked and reinforced. Recurrent events form entities, each with their own actual continuum-streams, compounding and bonding into societies. At the level of societies, the experience of conscious unity is present, pointing to a higher-order consciousness, as can be seen in the "kingdoms of Nature", the minerals, the plants, the animals and the humans. If merely product-productive, manufacturing the world, the world could not display creative change and state-transformation. But the ongoing enrichment of the world is a fact of science. Negentropic transformation is the outstanding feature of life & consciousness. This creativity must ontologically be accounted for ... Actual occasions, the actual units of process, are Janus-faced : they take from the past and, on the basis of an inner, finative structure, transform states of affairs, paving the way for further processes. They are not merely product-productive, manufacturing things, but also state-transformative. In this way several degrees of togetherness or concrescence can be identified, called events, entities, aggregates and societies. The organic whole of actual occasions, the world-continuum or universal sea of process, extended from the extremely small to the humongous, is both physical and non-physical or mental. Both have distinct properties, consisting of actual occasions defined in efficient & final terms. The physical (the world of matter) is the domain of physical objects characterized by mass & momentum. The non-physical is, on the one hand, the domain of information (the world of embodied & disembodied mental, abstract, theoretical objects) and, on the other hand, the domain of consciousness (the world of the percipient participator endowed with decisive conscious choice and sentient self-determination). These three domains are complex societies of actual occasions. Moreover, the non-physical is not made part or reduced to the physical. The question of the functional role of the mental on the valuation of possible physical outcome, can be posed. Metaphysics no longer arrests downward causation, giving to both the mental & the physical identical weight, but distinct functional roles. "Efficient determination" is physical momentum & mass of the particles, waves, fields and forces at hand. 
"Final determination" is self-determination, creativity, valuation and the experience of conscious unity, entering efficient causality & producing novelty. Couple process with a pluralist view on the distinctness of occasions (not on their ontological difference !) and embrace, in principle, an endless number of distinguishing attributes, aspects or operators (hylic pluralism), reducing these to the three complex societies known to function : matter (hardware), information (software) and consciousness (userware). Regarding the latter, the crucial distinction between consciousness per se (as a domain of the world-continuum) and human conscious experience (or inner life), as a very complex region in that domain, should not be missed. On this planet, the human mind is an extraordinary continuum of occasions, the only one capable of featuring inner life & conscious experience. So the world, or the totality of all observable events taking place in the universe, may be divided in three logical basics or primitives. Each is a complex society of actual occasions or a domain of the world.  Each is also an operator characterized by a function, enabling it to work a set of unique interdependent determinations & conditions, discharging its task in such a way as to make different events work together, form more unified functional wholes and harmonize their dynamic signatures, the universal intent in the Divine mind of the Architect of the World. By collecting well-determined events into a single set, three interacting sets are formed  : • matter or "hardware" (of which all elements are mostly M-events) : the physical space-time continuum, the executive hardware of working, physical compounds, defined by particles, waves, fields & forces ; • information or "software" (of which all elements are mostly I-events) : abstracts, universals, theories, codes, laws, architectures & algorhythms, the legislative software of natural & artificial expert-systems ; • consciousness of "userware" (of which all elements are mostly C-events) : free choice, self-determination, meaning, autostructuration, mentality, the intentional activities of subjectivity & inner life. These unique arrangements or world-domains are characterized by a prevailing type of mathematics, tendency, movement & order : • matter : Real Numbers, dispersive, centrifugal, entropic ; • information : Binary Numbers (1 and 0), integrative, algorhythmic, natural & cultural forms, limited but integrated set of natural & artificial expert-systems ; • consciousness : Complex Numbers, paradoxical, centripetal, negentropic, meaningful, symbolic & sentient. Although functionally stand-alone subsystems, they constantly interact on various levels of expression or functional co-relativity & interdependence. Because they are joined, a super-interactionist model allows to understand the relations, conditions, determinations & modes of communication between all actual occasions, events, entities, aggregates & individualized societies happening in the world : • C interacts with M : sensation & mental states = domain of sentience (awareness of objects). • M interacts with I : algorhythms and imperative codes of command = domain of Nature (evolution) ; • C interacts with I : symbols, science, philosophy, art, creativity = domain of culture. § 11 Functional Co-Relative Interdependence. Functional co-relativity outlaws absolute isolation and points to general interdependence. To define "ousia", substantialism (essentialism) has to defend absolute isolation. 
The essence ("eidos") of an object must have "own-nature" ("svabhâva"), i.e. some thing permanently existing from its own side, unaffected by the changes in its accidents, whether they be quantities, qualities, relations or modalities. As monads, substances must have no "windows". This entails three logical consequences : substantial objects are static, non-functional and self-referential. Because of these sordid features, they hinder the advancement of science & metaphysics. Substantial objects are static because their substantial core does not change (without changing the object into another object). Unchanging objects cannot relate to other objects, for the idea of relation implies openness to others and so openness to fundamental change. If an object is a self-identical monad, it has no "exits" and so cannot interact with other objects. These objects cannot move, produce or cause. Constant autoduplication ensues. Substantial objects are non-functional because they are isolated. Without any possibility to relate to other objects, they cannot produce efficient action, leading up to a relative impossibility to function. Where can these objects be found ? Except for analytical objects, all apprehended objects are functional. Substantial objects, due to their self-identical, inherent "being", have only themselves as sole referent and so cannot apprehend anything else than the monarchic affirmation of themselves and their self-powered own-nature ("svabhâva"). Their solipsism is however based on nothing else than this affirmation and therefore circular. Where can these objects be found ? All synthetic objects depend on determinations and conditions outside themselves. At the micro-level of physical reality, all objects are interconnected, and at higher levels this is also the case. In natural systems, there is nowhere anything non-referentially "on its own", for all events are part of a complex network of determinations & conditions. In artificial systems, processes may be isolated from their environments (like atomic fission), but this procedure entails lots of work to realize & sustain the quarantine, often with much damage to the environment once back reintroduced (depending on the nuclear waste involved, hundreds of thousands of years of containment are necessary). Interdependence of actual occasions, events, entities, aggregates & societies implies function (or efficient conditions of determination). Two types prevail : 1. determined functions : in a system of general determinism, events are connected through a number of efficient determinations, like self-determination, causation, interaction, mechanical determination, statistical determination, holistic determination, teleological determination & dialectical determination. Events are linked if the conditions defining each category are fulfilled. For example, in the case of causation, it is necessary, in order for an effect to occur, to have an efficient cause and a physical substrate (propagating the effect in spacetime). In contemporary scientific determinism, these determinations are not absolutely certain, but relatively probable, for science is terministic, no longer deterministic ; 2. 
2. nondetermined functions : considering the inner, mental structure of actual occasions and their togetherness (concrescence), as well as the individual actions of persons, cultures and civilization, phenomena are also connected by way of various degrees of free choice, intention, freedom, self-determination, valorisation, creativity and conscious life, both individual and social. This final determination escapes the conditions of the categories of any kind of lawful efficient determination.

Indeed, without the possibility to posit nondetermined events moving against the system of efficient determination, ethics is reduced to physics and justice impossible. How is responsible action possible without the actual exercise of a degree of freedom, i.e. the ability to accept or reject a course of action, thereby creating an influencing agent "indeterminate" in efficient terms, changing all co-functional interdependent efficient determinations or interactions by entering them, thus adding negentropy to entropy ? How, without free choice, is genuine creative advance possible ?

All actual occasions are characterized by their two state vectors : efficient & final determinations. The former is their physical, outer, overt material activity, determined by particles, waves, fields & forces, the latter their mental, inner, covert sentient activity, determined by creativity, novelty & self-determination. Although a single actual occasion has only an infinitesimal iota of sentience, the fact of its togetherness with countless others, entering them with the result of an infinitesimal mental decision, brings about a cumulative effect, and these successive generations of additions allow -at some point- the emergence of societies, i.e. individualized aggregates endowed with the experience of conscious unity. Although an individual actual occasion has a very small degree of sentience in the form of a "clinamen", it is usually part of aggregates devoid of such experience of conscious unity. In that sense, remembering Leibniz, a crystal in a stone thrown at a cat has more affinity with the cat than with the stone. Process thought does not embrace full-fledged panpsychism, for then even the stone would be sentient. As an aggregate of micro-sentient actual occasions, the stone is non-individualized, i.e. does not experience its own unity. Thus, it drowns the micro-sentience of the actual occasions of which it is a mere compound in the non-sentient togetherness of its aggregation. As soon as a single, non-sentient object can be identified, panpsychism can no longer be defended, and indeed, Nature abounds with mere aggregates. Societies (like molecules of crystal or living matter) and complex societies (like humans) are rare. Panexperientialism affirms that actual occasions exhibit a (very small) degree of sentience, but denies that their togetherness -devoid of the conscious experience of its own unity- is sentient insofar as this concrescence goes.

Observing the three domains of the world raises the question of cosmic genesis. The conclusion that these three functions, namely matter, information & consciousness, were present from the Big Bang, albeit in varying degrees, cannot be avoided. Like the unfolding of a flower, the efficient determinations of the material domain came first, fixing the original physical parameters of the cosmos. This first, physical unfoldment set the material ground.
But together with this event, resulting from the activity of the final determinations in the original "primordial soup", order and structure emerged. This second, informational unfoldment set the conditions of the architecture of the cosmos. Because of this structure, the cosmos could expand and generate stars, the breeding-ground for the third, sentient unfoldment, bringing about life and consciousness. Only at this level societies emerged. First in the form of crystal molecules and, due to complexification resulting from more efficient interactions, as the first living cells. Billions of years were needed to allow living societies to individualize their sentient component, eventually arising as the experience of conscious unity. Foreshadowed by plants, it exploded in animals and eventually evolved into humans.

The root of these three cosmic unfoldments can however be found in the singularity of the primordial actual occasion of our universe : the Big Bang. This Big Bang singularity is a discrete moment in the inconceivable, beginningless & endless cycles of arising, abiding, ceasing and re-emerging worlds out of the world-ground, the possibility of all universes. Hence, speculate that everything acquired by countless conscious societies, well-ordered (informed) aggregates and efficient physical systems returns, at the Big Crunch (or Big Evaporation) of the present universe, to the original singularity. Not an iota of material, informational and conscious actualities is lost, but contributes to the evolution of the endless process of subsequent world-emergence, abidance and collapse. The new world to come is not a "tabula rasa", but endowed with the result of what happened in the one before. Eventually, at the point at infinite infinity, all possible worlds have evolved out of the world-ground into fully sentient societies, and the "Jubilee of Jubilees" is celebrated for ever and ever. Then, at this point, the eternal recurrent cycle of light-manifestations ("neheh", Atum-Re), the periodic process of worlds, joins everlastingness ("djet", Osiris).

§ 12 The Simultaneity of Relative Appearance & Absolute Reality.

Only after repeatedly inviting transcendent wisdom to inspire thought, cleansing the conceptual mind of its reifications, may prolonged ultimate analysis facilitate the opening of the gate to "seeing" the ultimate, absolute nature of all possible phenomena, their suchness/thatness or ultimate reality as it is. Ultimate analysis merely assists the conceptual mind to directly recognize the nondual truth in terms of a non-affirmative negation. Immanence is not a ladder from conceptuality to non-conceptuality, from the relative truth of conceptual thought, to the ultimate truth of naked, non-conceptual, nondual cognition. Immanence only offers a threshold, an approximation, a generic idea encompassing the emptiness of the world as a whole. Indeed, a direct, naked state of cognition cannot be caused. The itinerary is not a certainty, but the preparation will certainly be welcome to sustain the awareness after it spontaneously dawns. Indeed, if the conceptual mind has not been thoroughly purified, reification will recur. Ontology based on confused cognition is the screen upon which the tragi-comical illusions of realism & idealism are projected and made to play. But although conventional reality does not appear as it truly is, being like an illusion, it "is", in an ontological sense, not identical with illusion. Appearing like an illusion is not the same as being an illusion.
A saint may dress as a dirty pauper. The pauper is like the illusion, for he appears not as he truly is. Whatever appearance the saint chooses, s/he remains sacred. Conventional truth (the relative nature of phenomena) is how ultimate truth (the ultimate nature of phenomena) appears. So the ultimate exists conventionally. All phenomena can be simultaneously experienced as devoid of substantiality and at the same time as functional, interconnected and mutually dependent. Knowing the ultimate does not cause "another" world to suddenly appear. Awareness of suchness/thatness is being conscious of the full-emptiness of each and every phenomenon (its emptiness and universal connectedness). The difference is therefore epistemic, i.e. intra-mental. Directly perceiving this-or-that ultimate nature of conventional appearance, this-or-that actual absence of substance hic et nunc, and this in the fullness of interdependence or, on the contrary, only experiencing appearances, merely depends on the discovery of the nature of mind, the fundamental dimension of the cognitive apparatus. As long as the nature of mind remains undiscovered or obscured, conceptual thoughts overlay it and mental designations are reified, producing "objects" such as the idea of a self-powered physical body, a substantial mind and a solid, separate self. These further cover the nature of mind, bringing emotional afflictions, sickness, an unhappy old age and an unwholesome death.

Ultimate truth, as approximated by the logic of ultimate analysis, the pinnacle of conventional ontological truth, clarifies all phenomena to be full-empty, i.e. full of functional interdependences but empty of inhering, intrinsic, substantial, non-referential, essential qualities, characteristics, natures, etc. Full-emptiness contradicts substantial existence, but not functional interdependence. "Full-emptiness" translates the unity of emptiness & interdependence. Ultimate truth as given by direct, nondual experience makes us "see" how all possible phenomena, while devoid of substantial essence, are interdependent "displays" or the "sport" of brilliance of the ground-luminosity, the ultimate base of all, the world-ground. Whether any ontological exercise, the present included, exceeds the limitations of creative thought cannot be conceptually established.

§ 13 Transcendental Philosophy and Nâgârjuna.

Transcendental philosophy (Criticism) aims at the process of the synthesis of phenomena rather than at a supposed sufficient ground underlying them. Precritical epistemology based the possibility of knowledge on this "Ding an sich" (Kant), called "noumenon", thing in itself or absolute (ultimate) ground of phenomena. Criticism ends this. Indeed, the object of science is not a pre-epistemic ultimate Real-Ideal (the unity of absolute reality and absolute ideality), and so does not depend on a self-sufficient ground preceding cognition, but exclusively on the interconnectedness between actual occasions and their modes of togetherness. These are dynamical architectures, various styles of coordinated movements or dances, artistic displays of various degrees of order (negentropy), i.e. unfolding, showcasing & folding things. They are only relative to movement, to process, and result from a universal and necessary mode of connection between phenomena. This denotes objectivity, not the "Being" of some absolute thing like a Real or an Ideal before and outside knowledge. An Archimedean ground is nowhere to be found.
Indeed, something is objective if it holds true for any active subject of knowledge, not because it denotes intrinsic, inherent properties of entities supposed to be independent, separate and so autonomous. This is the leading idea of the transcendental reflection on the conditions of the known, of knowledge and of the knower. Science is therefore not the revealer of a pre-existent underlying self-sufficient ground or "hypokeimenon". Epistemology is not the rooting of the possibility of knowledge in something before knowledge. The Real-Ideal is not the object of science. But neither is science random. Indeed, merely conventional, science is a temporarily stable but ever moving product of the process-bound reciprocal relation between the subject and the object of valid empirico-formal knowledge.

Kant's Critique of Pure Reason still has residual foundationalist streaks. Although defined as a noumenon, the absolute ground lies across the knower. This indirect relation is to be differentiated from the direct stream of perceptions on the side of the knower. The latter arise in a subject only crosswise affected by the thing in itself ! One cannot say that this contact with the absolute causes the direct perceptions recorded by the knower, for causality happens during categorial synthesis, two steps later. This transversal relation between the knower and the absolute is a residue of the substantialist tradition seeking a self-sufficient ground (before knowledge). This is Kant's Achilles' Heel, but it can & should be removed from transcendental philosophy. Indeed, this remnant of substantial dualism between the knower and the absolute has been eliminated by neo-Kantianism. It promoted an immanentist and relational transcendental philosophy of science. Objects do not bear intrinsic properties, but result from interdependence, relations and interconnectedness. They are process-based instead of substance-based. There is no ground or pregiven, pre-existent and pre-organized absolute "substance of substances". Moreover, the static framework developed by Kant has been replaced by dynamical a priori forms and their plurality. The highly abstract view of Kant made way for the study of the pragmatics of the game of "true" knowing. The reciprocity between the knower and the known is pivotal here. Interlocked, but cherishing different interests & outlooks, they continuously engage in a concordia discors. This view on science is antifoundationalist, immanentist & relationalist, with science providing the best conventional knowledge ever.

In the Critique, Kant wanted a philosophy as universal & necessary as Newton's law of gravity. His aim was not soteriological. In his Mûlamadhyamakakârikâ, Nâgârjuna aims at a wisdom ("prajñâ") realizing the ultimate truth ("paramârtha") of all phenomena. Not because this satisfies philosophical or intellectual pursuits, but because such realization liberates sentient beings, awakening them to the nature of their mind. In this foundational treatise of the Middle Way School (Mâdhyamaka), he presents this wisdom in accord with the profound and refined rationalism of Buddhist logicians, philosophers and scholars. Nâgârjuna's exclusive quest was to free all sentient beings from reified conventional truth ("samvriti"). Take away the reification and the absolute dawns. But the latter is indeterminate and non-accessible to the conceptual mode of cognition. The possibility to directly experience the ultimate nature is however not denied.
Contrary to Kant, Nâgârjuna and the Buddhadharma at large accept (a) meta-rationality (the nondual mode of cognition) and (b) the possibility of directly cognizing the absolute. This is realizing the wisdom of the enlightened ones. Hence, his work is foremost soteriological. Keeping this in mind, let us discuss Mâdhyamaka (Nâgârjuna, Âryadeva, Candrakîrti, Shântideva) in the light of a few remarkable parallels with transcendental philosophy.

For different reasons, both Nâgârjuna and Kant attack all possible substance-thinking. Kant defined the noumenon as a limit-concept, pointing obliquely towards our sensibility and thus of negative use only. But he also maintained a quasi-causal, transversal (indirect) relationship between the thing in itself and the knower, leading to inner inconsistencies. Later neo-Kantians considered the thing in itself as nothing beyond the brute fact of its givenness, of its not being produced by a deliberate act originating in the subject. Criticism goes a step further, replacing the description of the cognitive act with a normative system of conditions producing valid knowledge. One must consider facts to represent the absolute, but this may well be mistaken ! This normative move evaporates the residual substantialism and brings to the fore a few interesting similarities between transcendental philosophy, the epistemology of science, and Nâgârjuna, the founder of the Middle Way school.

Nâgârjuna's analysis is immanentist throughout. Like Kant, he insists that the world should not be construed as a single absolute entity of which something can be predicated. It is like an indefinite series of flickerings, much like the flame of a butter lamp. Moreover, conventional knowledge is empty of any relation with a solid, substantial and inherently existing objectivity. Objectivity is not a pre-epistemic substantial ground. Conventional knowledge has no access to the thing in itself, the supposed absolute or ultimate nature of all phenomena. To discover that all phenomena are empty of their substantial core is to realize the universal, lawlike, reciprocal relativity of co-dependent consecutive actual entities. The ongoing display is one of creative advance, with entities entering each other's togetherness. Conceptual reason does not discover the absolute nature of phenomena, but reveals the arising, abiding & ceasing nature of all relative events.

For Nâgârjuna, science is an exceptionally efficient and valid conventional truth, but also extremely liable to reification and so delusion. Kant too points to the danger of turning ideas of reason into substances "out there". Certain subjective rules are mistaken for objective determinations of the things in themselves (cf. his "transcendental illusion"). This cannot be taken away, only revealed through criticism. Like all conventional knowledge, science tends towards superimposing inherent, substantial existence upon process-based, nonsubstantial actual entities. It tries to fixate the fluid & transient. We cannot help seeing the world as if inherently possessing certain determinations. With respect to our conventional experience, it always remains as if ("als ob") subjective rules were an intrinsic feature of the world ... Conventional knowledge is valid but always mistaken ! Indeed, if the observer partakes in the network of relations producing conventional knowledge, things appear to him or her as if well-defined nonrelational determinations (inherent properties) arise from any measuring interaction.
Relative to the observer, well-defined features appear as something substantial. This reification is however an illusion, for it makes things appear as something different from what they are. They appear, while they are processes, as substances !

"Your position is that, when one perceives
Emptiness as the fact of relativity,
Emptiness of relativity does not preclude
The viability of activity.
Whereas when one perceives the opposite,
Action is impossible in emptiness,
Emptiness is lost during activity ;
One falls into anxiety's abyss."
Tsongkhapa : The Short Essence of True Eloquence.

Criticism seeks a higher-order solution to the tensions between science, critical metaphysics and a nondogmatic soteriology like the one proposed in the Buddhadharma. Transcendental philosophy and the Middle Way provide lots of arguments backing the empty, dependent, impermanent and nonsubstantial nature of what is. While transcendental philosophy identifies the detailed mechanisms of reification, the Middle Way wants to dispel them once and for all. To link critical thought with this intent is to open reason to the meta-rationality of cognition, which is precisely the aim of critical metaphysics.

It should be remarked that Kant sought a transcendental philosophy as "solid" as Newton's physics. The latter portrayed absolute properties and substantial material objects existing from their own side. In the most cherished Copenhagen interpretation of quantum mechanics this is no longer the case. Quite the contrary. The historical continuity with classical physics has been broken. A holistic definition of phenomena is at hand. The object can no longer be dissociated from the contribution of the irreversible functioning of the measuring apparatus. The Hilbert space structure used in quantum mechanics conveys the relational nature of our knowledge about the physical, while involving no description of the two relata. Moreover, the extensive use of differential calculus (even in classical physics) shows that only (infinitesimal) relations are accessible. No substantial, monadic ground of these is implied. There are no absolutized relata. Indeed, quantum mechanics points to our knowledge as "relational", with neither prius nor posterius between object & subject. Other interpretations, like the "hidden variable" hypothesis, are desperate attempts at restoring substantialism in physics. As Nâgârjuna remarks : neither connection, nor connected nor connector inherently exist. The existence of relations to the detriment of the relata would imply the use of an opposition (relation/relata) and the reification of one of its terms, while the two terms arise in dependence. Object and subject are on the same footing, there is a nonpolar conception of relations between them and so reification of any is avoided. Relations are determined by certain connections of things, and this depends on the way an observer takes cognizance of the observed system ...
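To make the relational point more tangible, a minimal illustration in the standard textbook formalism may be added here (a sketch, assuming nothing beyond the elementary Born rule ; the notation is generic and not drawn from the present text). For a system prepared in the state \psi and an observable whose eigenbasis \{ a_i \} is fixed by the chosen experimental arrangement, the probability of registering the outcome a_i is

p(a_i) = |\langle a_i | \psi \rangle|^2 .

Neither the prepared state alone nor the chosen observable alone fixes these numbers ; only their pairing does. On such a reading, the formalism delivers relations (outcome probabilities relative to a measuring context) without delivering an intrinsic description of either relatum taken by itself, which is one way of cashing out the claim about the Hilbert space structure made above.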
The present text, inspired by the traditional classification of topics, is divided into two parts, called "General Metaphysics" and "Metaphysics of Specifics". The First Part, General Metaphysics, explains metaphysics in general and ontology in particular, laying the groundwork (chapter 1) and attending to the necessary requisites for any metaphysical inquiry (chapter 2). After having clarified the conventional nature of immanent metaphysics (chapter 3) and defined the limitations of speculative thought in terms of creative thinking (chapter 4), the mind is prepared for ultimate truth (chapter 5) and, to ascertain the lack of inherent selfhood and lack of inherent phenomena, ultimate logic is developed (chapter 6). Finally, the general features of the world are derived (chapter 7), ending the introduction to General Metaphysics. In the Second Part, or Metaphysics of Specifics, and this within the framework of the proposed ontological scheme, particular questions are answered. These cover metaphysical cosmology (chapter 8), metaphysical cybernetics (chapter 9), metaphysical biology (chapter 10), metaphysical anthropology (chapter 11), metaphysical mysticism (chapter 12) & metaphysical theology (chapter 13).

Within these broad divisions into parts and chapters, the text subdivides into paragraphs. Each paragraph is composed of units identified -in praise of Aristotle- by Greek letters. At times, a unit is Janus-faced, composed of an object-dependent and an imaginal side (the latter starting with "∫"). The former is elaborate, the latter aphoristic, iconic, laconic and ironic. This bi-polarity satisfies the conditions imposed by the chosen style. At the end of every paragraph, a "Lemma" is advanced. In formal logic, this is a subsidiary proposition assumed to be true in order to prove another proposition. Here, it is a short summary of the salient, outstanding points assisting further development.

Part I : General Metaphysics.

Thomas Aquinas, following the three divisions set by Aristotle, divided the study of "sapientia" or "wisdom" into "metaphysica" (being as being), "prima philosophia" (first principles) and "theologia". This scheme remained intact until early modern times (1500 - 1800 CE). Christian Wolff replaced it by dividing metaphysics into general and special metaphysics. General metaphysics or the science of being as being was given the name "ontologia" (a term coined by Rudolf Goclenius in 1613), whereas special metaphysics was divided into rational theology, rational psychology and rational cosmology, i.e. the sciences of God, souls and bodies respectively. The impact of the rise of the new sciences is obvious. The spirit of the Renaissance stimulated philosophers to expand their horizon, incorporating many new topics into metaphysics. However, these superb minds were not yet inclined to first consider -before engaging in speculative activity proper- the natural capacity of the mind and its knowledge-seeking cogitations. The epistemological turn had not yet taken place and intellectuals still entertained a naive theory of knowledge, one positing a direct conceptual access to reality-as-such or ideality-as-such. Bewitched by this ontological illusion (reifying mere concepts), concept-realism was still deemed unproblematic !

Measuring, before entertaining speculation, the natural possibilities of the mind, Kant's "Copernican revolution", besides being the decisive criticism of concept-realism, demarcated science from metaphysics. Although the Sun appears to rise and set, in reality it does not, for it is merely the Earth turning. Objects do not appear as they are. We should have tools to decide whether phenomena are merely appearances or indeed more. Subordinated to epistemology, Kant's "metaphysics of nature" is divided into a general part, namely ontology, and a specific part, namely the physiology of reason.
The latter was divided into transcendent parts (rational theology, rational cosmology) and two "immanent" parts (rational psychology and rational physics). A "natural" metaphysics is one staying close to what is known about Nature, one focusing on the sensate objects gathered by the senses, the mental constructs processing these, as well as other mental objects like the self. This clearly distinguishes metaphysical speculation from theology. In the course of the centuries, the meaning of the word "theology" shifted considerably. The main divide is between, on the one hand, the organized world religions (Hinduism and the three "religions of the book") and their revealed dogmas and, on the other hand, an arguable discourse on the Divine in general (cf. Criticosynthesis, 2008, chapter 7) and God in particular. In the present metaphysics of process, the word "God" has been deconstructed. This is indicated by adding an asterisk (*) to it. This points to the fact that the traditional characteristics given to the "God of revelation", like creative activity "ex nihilo" & omnipotence, are not endorsed here. Hence, God*, this remarkable metaphysical object, is part of metaphysical theology, a branch of special metaphysics.

Kant's division between "immanent" and "transcendent" is to be noted. Divide metaphysics into, on the one hand, immanent speculations on the order of the world and, on the other hand, transcendent speculations about what is supposed to exist beyond the limitations of the world ; the actual infinities transcending the world, the end-points at infinity of an infinite number of infinite series ... Due to the advent of the new sciences, a redefinition of the discipline of philosophy has to be realized.

normative philosophy : logic, epistemology, ethics, aesthetics
descriptive or theoretical philosophy : metaphysics

Speculative activity unbridled by critical epistemology (cf. Criticosynthesis, 2008) is most likely to get out of hand. Then, the natural mind is no longer equipped to cognize in a valid empirico-formal, conceptual way, resulting in the multiplication of entities, blatant logical errors, extreme views (like nihilism or eternalism), uncritical skepsis and many other mental obscurations like a lack of mental pliancy. As part of philosophy, metaphysics is theoretical, i.e. involves a description of the discipline itself (general) and an elucidation of its objects, topics, issues, etc. (specifics). This metaphysics or theoretical philosophy covers all theoretical subjects not dealt with in a normative discourse. History, language, hermeneutics, the cosmos, life, consciousness, God* etc. are possible topics. Coming after the normative disciplines of logic, epistemology, ethics & aesthetics, the descriptive activity of a metaphysics of being or ontology heralds the "end of philosophy". This makes the merely formal disciplines act as guardians of a descriptive & totalizing speculative intent. These safeguards highlight the limit-ideas of a metalanguage of principles, norms & maxims, ruling valid knowledge, good actions and beautiful sensations. These rules assist the intent to totalize our understanding of the world and beyond. Emphasizing nearness & distinctness, metaphysics -divided into immanent & transcendent- is given a border to share with science. Science cannot exorcize metaphysics (from its background), nor can metaphysics be validated without adding scientific fact to its arguments.
The crucial difference between science & metaphysics is the non-testability of speculative statements. Indeed, whereas the empirico-formal propositions of science (the statements of fact consolidating the core of the current scientific paradigm) are based on both testable & arguable processes, the totalizing speculations of metaphysics are only based on argumentation. But to argue the validity of a speculative totality is an exercise bringing into play all normative aspects of formal reasoning simultaneously. Hence, not only logical & epistemological considerations are at hand, but also ethical & aesthetic ones, and these together in a sublime, coordinated and creative dance. Everything needed to perform such a splendid move must be provided, carefully chosen, put in place, rehearsed, etc. This takes decades. General metaphysics covers some of the conditions of this process. It tries to invoke the spirit of metaphysical inquiry and summon its speculative power !

general metaphysics : general features, ontology
special metaphysics : philosophy of language, speculative theology, cosmology, biology & psychology, etc.

General metaphysics has two branches, investigating (a) the general features of metaphysical inquiry (cf. first philosophy) and (b) being qua being, i.e. the nature of all possible being or ontology. This kind of speculation is to be viewed as the "summum" of metaphysics. Demarcated from the general characteristics of any metaphysical inquiry & argumentation, ontology, being the most general of metaphysical disciplines, naturally belongs to general metaphysics. Special metaphysics studies specific objects like God* (metaphysical theology), the cosmos (metaphysical cosmology), life (metaphysical biology), human consciousness (metaphysical anthropology), language, history, law, society, politics, economy etc.

Chapter 1. Introducing Metaphysics & Ontology.

In this first chapter, the general contours of the present critical metaphysics of process arise. Starting with an investigation of the issue of style, i.e. the best way of expressing speculative thought, the fundamental principle of process metaphysics defines the axiomatic base, reflecting a choice for a single principle or monism, grounding the further elaboration of the system. This basic choice is confronted with epistemological criticism, probing for the limitations of all conceptual cognitive activity and confronting these with the speculative, totalizing intent. Rejecting conflictual & reductionistic epistemologies, the polar structure of the cognitive spectrum is affirmed in accord with transcendental logic. Apprehending sensate and mental objects, the subject of experience is an object-possessor. Both types of objects are confirmed and their distinct properties acknowledged. Such distinction does not lead to ontological difference, but merely to ontological distinctness.

In order to circumambulate process metaphysics, a few major historical vantage points are discussed and criticized. The core problem is the uncritical reification of the object and/or the subject of experience, turning them into hypostases or realities (idealities) underlying thought. Once this is out of the way, thinking process again reopens the door to science. Then and only then can metaphysics become the ally of valid empirico-formal thought.
Making speculation dependent on conventional knowledge and its apprehension of what exists (either in sensate or mental terms) allows metaphysics to fulfil its Peripatetic role of being a theoretical form of philosophy "next to" the domain of science, as it were fructifying it. Studying the way metaphysics cannot be eliminated from the latter enhances its status as a discipline necessary for the advancement of knowledge, albeit in an uncomfortable fashion. This raises the question of the advancement of metaphysics itself, i.e. its ability to increase its logical, semantic & pragmatic relevance, if not significance. This elucidation of the advancement of metaphysics is aided by the crucial distinction between speculative activity remaining within the boundaries of the known world, or immanent metaphysics, and theoretical philosophy leaving these boundaries behind, as in transcendent metaphysics. While the former can be validated, the latter cannot. So then how is a valid transcendent metaphysical inquiry possible ? This question leads to a hermeneutics of sublime poetry ...

Finally, having established (immanent) metaphysics and its validation by way of argument, the fundamental move favouring monism is applied to the most general of questions : What builds all possible phenomena ? What do all objects have in common ? This calls for an ontological scheme rejecting both materialist & spiritualist metaphysics. Neither physical nor mental objects constitute phenomena. Instead, momentary actuality is introduced as the ontological principle, bringing process metaphysics close to the fundamental realities of both physics and psychology, namely the collapse of the wave-function in quantum mechanics and the reality of moments of consciousness in psychology and anthropology.

1.1 Metaphysics & Science.

Because metaphysics is irrefutable in terms of testability, it has been driven out of the domain of science, encompassing all valid empirico-formal statements of fact. This demarcation, once deemed sufficient to eliminate metaphysics, is however problematic. Indeed, no experimental setup, and not even a valid scientific theory, can be properly articulated without untestable metaphysical concepts animating its background. Consider post-Kantian criticism of metaphysics, in particular positivism (Comte) and neo-positivism (Carnap). Here we have two radical departures from metaphysics blatantly failing to deliver. In the former, metaphysics belongs to the second stage, after theology (the first stage) and before science (the third stage). The supernatural powers described in the first stage are transformed into abstract notions or entities hiding behind empirical phenomena. Both negativisms (of theological or metaphysical entities abolishing sensate objects) are rejected and replaced by the positivism of empirical phenomena. Neo-positivism radicalizes this view. For Carnap, metaphysicians are musicians without musical skills ! Metaphysics cannot convey any cognitive insight but has only emotional appeal, and this in an inadequate way. Hence, as they are not tautological, nor validated by direct (sensory) experience, metaphysical statements are necessarily pointless, merely conglomerates of meaningless strokes or noise.
These approaches, haunted by headaches caused by fifteen centuries of Catholic dogma and four centuries of conflicting metaphysical inquiries, forgot the crux of the matter : the distinction between sensate & mental objects cannot be defined on sensate grounds and so must contain a metaphysical element, i.e. one based on mental objects validated by way of argument only. Metaphysics is an unavoidable "vis a tergo" to befriend with caution, for sure, but impossible to rule out, except at scandalous and hence unacceptable costs. And although they cannot be as precise as scientific thinking, speculative activities compete in terms of the soundness of their arguments, coherence with other theories, appeal, fruitfulness, elegance and simplicity. The question is not how to eliminate speculative thought, but how to bridle it in such a way as to speed up the carriage of science. The era of cooperation between both has finally dawned. Moreover, besides assisting science, metaphysics also (and foremost ?) directs the mind to its largest unity, extent & harmony. No doubt, these carry the spring-board to the highest pursuit : the direct experience of ultimate truth. Thus apprehending full-emptiness, one simultaneously cognizes the emptiness of all possible objects and the fullness of the interconnections between all possible things resting in the bosom of Nature.

A. Object-Dependent, Imaginal & Perspectivistic Styles.

§ 1 The Issue of Style.

α. Put in general terms, "style" is the manner in which an issue is addressed, its dynamism of expression. Style is characteristic of a particular subject matter, but also of a person, group of people or historical period. Insofar as texts are concerned, different styles call for different kinds of writing.

∫ People without style disturb. Chattering geese keep the flock, but the eagle flies alone, undisturbed by the horizons of petty existence.

Stylistic choices are defined by the way the author wishes to convey meaning. Although ideally style does not affect the truth & contents of what is communicated (the logico-semantic value), but only how language effectively persuades (the rhetorical value), it nevertheless has a direct impact on how information is understood. This implies the latter may conceal the former, and this may be part of the intent of the author.

∫ With style, differences can be embraced. Without style, Papageno better keeps his lips locked. But how strong is his desire to speak out ! Like Hapi, the baboon of dawn, gross minds only vocalize to communicate. But to catch the glowing breath of the Morning Star, an intense silent gaze suffices.

In literary criticism, a fundamental line is drawn between non-fiction and fiction. Creative writing can be found in poetry, fiction books, novels, short stories, plays etc.

∫ To dream in colours is to see what cannot be seen by any eye. To hear trees sing is the privilege of those walking in pure lands. To smell a splendid cuisine while soundly asleep is the art of connoisseurs. To fly or feel the breeze in Morpheus' lap, or to taste the honey of the night is the endearment bestowed by the gods. May all sentient beings dream and lucidly so.

An exposition of style identifies expository, descriptive, analytical, academic, technical, persuasive and narrative writing.

δ.1 Expository writing focuses on a known topic and informs the reader by providing the facts.
δ.2 Descriptive writing uses lots of adjectives and adverbs to describe things, conveying a mental picture.
δ.3 Analytical writing organizes the exposition by way of a stringent logical structure enabling the necessity of the truth-value of what is conveyed to surface.
δ.4 Academic writing takes a third person point of view and brings in deductive reasoning supported by facts to allow a clear understanding of the topic to emerge.
δ.5 Technical writing elucidates complicated technical information about the issue at hand.
δ.6 Persuasive writing provides facts & arguments to promote a view having the ability & power to influence its readers.
δ.7 Narrative writing enumerates events that have happened, might happen, or could happen.

ε. Philosophy has always adapted its stylistic choices to its audience. Down the ages, a multitude of styles have been used and meshed together. Some philosophers use fictional styles (the poetry of Parmenides, the dialogues of Plato, the meditations of Descartes, the literature of Nietzsche), while others focus on the academic (Aristotle, Thomas Aquinas, Kant), the analytical (Spinoza, Wittgenstein I, Sartre), the descriptive (Heidegger before "die Kehre"), the technical (Russell, Quine) etc.

∫ Philosophers are merely jugglers. Using different styles to formulate two similar utterances makes the reader wonder whether these different styles intend to carry additional meaning. If not, it surely opens the text to meaning-variability and unexpected turns & creativity.

∫ Readers are heroic beings. They climb steep rocks to attain the summit of understanding. Arrived at the top, they witness more and even higher mountains. It cannot be avoided. The infinity of it all makes any attempt to put the world in a box hilarious. Tragedy invokes comedy and laughter in itself forebodes the twilight of creation. Opening one's door to the stranger of novelty is the only solution. Thinking with style necessarily makes one gracious, kind and ... welcoming.

Insofar as philosophy is at hand, two major styles emerge : the object-dependent and the imaginal. In the former, the style is derived from objects, leading to academic, analytical, technical and descriptive approaches. In the latter, a deeper sense is conveyed by triggering the reader's imagination, calling for fictional, persuasive and narrative writing.

∫ Stout choices are a sign of intelligence. But on what does any choice truly rest ? Choices have to be made, true, but they are like a patchwork. The pieces are distinct, but not different. If they were not fundamentally so, nothing could bring about anything.

In the present text, object-dependent and imaginal styles are combined. The former brings in a logical structure, whereas the latter, taking advantage of the unavoidable incompleteness, inconsistency and ambiguity of any analysis, invites the imaginal function of its readers. Conjecture that this combination gives birth to a very particular, rather independent style, one identifying and opening new perspectives. This choice is rooted in neurophilosophy, avoiding hemispheric lateralisation and taking in the advantages of the neuronal bridge between the two sides of the neocortex.

∫ While knowing even the proud mountain ranges eventually crumble, with style we try to dance like flamingos in love ...

Most philosophers avoid discussing their style and take it for granted. In doing so, their exercise is limited by the conditions of the manner with which they address their audiences. People are smart and need a proper invitation.
Here, two sides are simultaneously at work : a linear, serial, differential, object-dependent ascent and a non-linear, parallel, integrative, imaginal one. Both lead to a certain kind of conclusion helping us to attach our climbing-ropes to a more secure mooring-post, assisting us to reach out for our next base.

§ 2 Deriving Style from Objects.

α. When the mind of the Renaissance, still imbued with a Medieval spiritual mentality, was pressured by the conflicting intent of the Reformation and the Counter-Reformation, it slowly made way for the scientific world view. As a result, philosophy tried to derive its style from objects. Empiricists would cherish sensate objects, rationalists mental objects. In doing so, one hoped metaphysics, in particular the address of totality, could be retained without ridicule. Theology, the address of infinity, was deemed without object. In 1666, Jean-Baptiste Colbert, chief minister of King Louis XIV, founding the French Academy of Sciences, forbade astronomers to practise astrology. The aim of the Academy -at the forefront of scientific developments in Europe in the 17th and 18th centuries- was to encourage and protect the spirit of French scientific research. This heralded the official end of the Hermetic Postulate : "that which is Below corresponds to that which is Above, and that which is Above corresponds to that which is Below, to accomplish the miracles of the One Entity." (cf. Tabula Smaragdina, 2002). As a result, all things "occult" were relegated outside the mainstream, turning them into an interest of chamber scientists (like Newton & Goethe). Far gone was the idea that Nature was an interconnected pattern, a living tissue of visible and invisible spiritual forces influencing humanity as well as the stars. Instead, the material world became a disparate clockwork of "disjecta membra", a "nature morte" devoid of "telos", "causa finalis" or inner purpose.

∫ When A is rejected, -A need not necessarily be embraced. Of course, silly superstitions are not valid science, but the intent of the words is more important than how things are said. Despite a spiritualist interpretation, the Hermetic Postulate aimed to underline the interconnectedness of all natural phenomena. Today, this metaphysical dream of the Ancients is again emerging in the mathematics & experiments of the new physics, albeit without the "machinery" of the spiritual agents serving the God of Abraham. Does throwing the child out with the bath-water lead to finding the child again ? Rejecting something makes one dependent upon what was rejected.

β. Hand in hand with the rise of modern science, four metaphysical ideas became prominent :

β.1 objectivism : the objects of science exist independent of and isolated from the mind apprehending them, "out there". They possess a nature of their own, one having characteristics abiding inherently as their essence, substance or inherent core ;
β.2 realism : these independent objects of science existing on their own exert an influence known by the human mind passively registering this and in doing so acquiring knowledge about them ;
β.3 universalism : the objective, real knowledge gathered is the same in every part of Nature, i.e. scientific knowledge has closure ;
β.4 reductionism : all phenomena of Nature can be reduced to physical objects and their interactions.
γ. Insofar as this modern version of science, to be labelled uncritical, materialist and thoroughly European, gained prominence and became the spearhead of the tinkering harnessed by the Industrial Revolution, philosophers either rejected reason (as in the Protest Philosophy of the Romantics) or considered, to avoid the shipwreck of metaphysics, an object-dependent style as the only way out. Enthused by these developments, they even tried to exorcise the core task of speculation : totality & infinity. They tried, but failed.

δ. An object-dependent style fosters analytical, academic & technical writing. In doing so it merely copies the itinerary of materialist science and the industrial approach. Analysis does not necessarily call for synthesis. Academia may replace the authoritarian systems of old, safekeeping the dogmatics of the paradigmatic core. The Bellarmine-effect is therefore their greatest foe. Technical writing forgets the underlying first person perspective, concealing it by the illusion of presence, adequacy & efficiency. Modern science is making way for hyper-modernism, a modular & multi-cultural view moving out of the European fold, one embracing Eastern science as well.

∫ The tragedy of exclusivity leads to the negation of totality, to the inflation of details at the expense of a regulating unity.

By itself, object-dependent writing is not problematic, but its exclusive use clearly is. No system can prove its completeness, eliminate all inconsistency and provide absolute predictability. Knowing this, one may still use a clock, but never without accepting the irreducible margin of error, the principle of indeterminacy of all possible physical objects.

∫ The imperialism of language needs to be abandoned, complementing word with picture, seriality with parallelism, denotation with connotation.

ζ. In the 19th century, despite Kant, materialist science and its ill-advised youthful successes continued to gain ground. The intent of the Copernican Revolution, showing how objects merely appear and so conceal their truth, was misunderstood, and criticism was not assimilated. Despite his best efforts, his three Critiques were deemed a form of contradictory idealism, feeding the brontosaurus of German Idealism, turned upside down by Marxism. Instead of grasping them for what they are, namely a new understanding of science per se, they were rejected as an incomplete attempt to pour old wine into new bottles. During his lifetime, the titanic, solitary effort of the master of Königsberg could not be completed. But it is possible to reconstruct his work in such a way as to avoid the inevitable traps he fell for (cf. Criticosynthesis, 2008, chapter 2). In doing so, objectivism, realism & reductionism are unmasked as fatal errors of a "perversa ratio".

∫ Do not think this perverted, sterile rationality to be grave-bound. Today it haunts the Western mind as a zombie, draining the life-force out of scientific novelty. A resurrection of the organicism of the spirit of the Renaissance is at hand. If not by choice, then by the tidal wave of dissatisfaction and alienation, both in terms of culture and ecology.

When philosophers are the handmaiden of theology, their speculative efforts are limited by the reasons of dogma. But fideism is not a valid ground for conceptual thought. When they become the slaves of materialist science, philosophers trumpet the jubilee of the misunderstanding of phenomena, including philosophy itself.
Although metaphysics depends on valid science, it does not depend on a metaphysical view of science, such as the materialist one. Pleasurably excited by ticking clocks, by the turning of the wheels of the engines of industry or by highly complex natural objects like the human brain or the cosmos, it may indeed seem as if physical objects are the "nec plus ultra" of reality and hence speculating about non-physical objects merely pointless noise. Nevertheless, ongoing test & theory always provide antidotes against too much bewilderment. The Newtonian dream has ended. Although the object-dependent style derived from this cannot be rejected, neither can it be used at the expense of other styles, in particular its antidote and complement : the imaginal style.

§ 3 Imaginal Style.

α. Consider the millennia-old tradition of the proto-rational sapiential discourses of Kemet, the golden verses of Pythagoras, the "dark" sayings of Heraclitus, the fragment of Anaximander, the two ways of Parmenides, the poetry of Xenophanes, the dialogues of Plato or, at the far end of this series, Boethius' De consolatione philosophiae, and discover the varying impact of the imaginal on philosophical speculation in Antiquity, and this from the start of speculative writing (as in the Pyramid Texts of Unas) until the end of Late Hellenism. Exceptions, such as the vast scholarly corpus of Aristotle and the Enneads of Plotinus, are indeed rare, for even Augustine was tempted to exchange a rather academic & argumentative style for a more literary one (as in his Confessions). Of course, authors (like Plato and Boethius) may choose literary devices like dialogues to convey proper arguments. Philosophy was not yet divorced from the various other topics of high education, as the division of learning into "trivium" & "quadrivium" demonstrates. Indeed, "philosophia" was envisioned as uniting all branches of knowledge, nourishing the Seven Liberal Arts, the "curriculum" of study in both Classical and Medieval times. With the Summa Theologica of Thomas Aquinas, the authority invoked by the Peripatetic tradition culminated. This opened the gates for a flood of genuinely boring, but highly significant, philosophical works in an object-dependent style (Abelard, Duns Scotus, William of Ockham, Cusanus). In many ways, the works of Descartes, Locke, Berkeley, Leibniz, Hume & Kant are part of this mentality.

∫ Each time we overestimate the potential of something, we are bound to discover weakness and frailty. Each time we reduce grandeur, we invoke surprise. When both Heaven and Earth are considered beforehand, what can go wrong ? The answer to any query comes along as soon as we are ready with the question.

β. An imaginal style is literary, i.e. creative writing of recognized artistic value. It does not try to eliminate connotation to promote denotation. Syntax never supersedes semantics. It may even invite and manipulate ambiguity to indulge in semantic wealth, not avoiding redundancy. The works of Nietzsche are perhaps the best example history has to offer, but Kierkegaard & Heidegger should also be noted. Of course, these are wholesale works of literature, not aphoristic counterpoints.

∫ Object-dependent style depersonalizes. In doing so it objectifies what remains embedded in the subjective. Imagination personalizes. In this way it subjectifies what cannot do without objectivity. The far extreme of the subjective becomes objective. Too much objectivity betrays a subjective intent. Both are not contradictions but complements.
γ. Practically speaking, the distinction between an object-dependent style and an imaginal style is not clear-cut. Writers such as Fichte, Schelling, Hegel, but also Schopenhauer, Bergson and many others offer a mix. But examples of a strict object-dependent intent do exist. Consider Spinoza's Ethics, Kant's Critique of Pure Reason, Marx's Capital, Wittgenstein's Tractatus Logico-Philosophicus, Sartre's Being and Nothingness, Popper's The Logic of Scientific Discovery, Habermas' Knowledge and Human Interests etc.
∫ Cucumber soup is made out of a cylindrical green fruit related to melons with thin green rind and white flesh eaten as a vegetable. Firstly, if the soup were only that, it would not be soup. Secondly, who, eating cucumber soup, cares about the cucumber if not for its taste ?
δ. A neurophilosophical definition (cf. Neurophilosophical Inquiries, 2003/2009) of the imaginal style focuses on the way the neocortex processes information projected on it by the thalamus.
Left Hemisphere : linguistic, propositional, discrete, analytical, verbal, digital, specific features, deliberate, denotative, literal.
Right Hemisphere : kinesthetic, visual, diffuse, synthetic, visuospatial, analogical, broad features, totalising, connotative, metaphorical.
δ.1 Only recently has the importance of this division been understood. The neocortex or "human brain", a folded sheet of ca. 11 m² with ca. 20 billion neurons, is divided into two hemispheres connected by the "corpus callosum", an axonal bridge continuous with cortical white matter, consisting of ca. 200 million nerve fibers. The right hemisphere is typically the non-language, subdominant one, whereas the left, containing the speech areas of Broca and Wernicke, is deemed dominant.
δ.2 To define the typical left hemisphere as "dominant" because it processes language reveals a prejudice mainly at work in the West. The right hemisphere may indeed be deemed "dominant" over the left in terms of the analysis of geometric & visual space, the perception of depth, distance, direction, shape, orientation, position, perspective & figure-ground, the detection of complex & hidden figures, visual closure, Gestalt-formation, synthesis of the total stimulus configuration from incomplete data, route finding & maze learning, localizing spatial targets, drawing & copying complex figures & constructional tasks.
ε. Although in disciplines like logic, epistemology, ethics and aesthetics, the use of imagination is not wanted (cf. Criticosynthesis, 2008), in the context of metaphysics, the advantages of an imaginal style outweigh the precision necessary in the realm of the normative. The totalising intent, aiming at broad features synthesising the general characteristics of all possible phenomena, does call for a more diffuse band. As those parts of the spectrum invisible to the naked eye are also presented, the connotative associations of the semantic field cannot be missed. Hence, to further meaning, metaphor and analogy are indispensable.
∫ Metaphysics is a marriage and in every marriage compromise is at work. If a compromise had only clear-cut terms, it would not last and nobody would stay married. Of course, without trust, no grey areas can abide ...
ζ. Just as Heidegger before him, Derrida understands metaphysics as a philosophy of presence, a logocentrism placing the spoken word at the center. Writing is then a kind of conservation or fixation after words have been spoken. The audience is absent, while in spoken language the sign immediately vanishes to the advantage of the speaker.
With his metaphors, Heidegger did not move outside the "clôture" of the metaphysical tradition starting with Plato. His words still try to capture the nature of phenomena in a discourse pretending to be a fixation of what Heidegger "said about things".
ζ.1. The conservation of the spoken meaning by written words is deceptive. Logocentrism is a mummification leaving out important elements. Trying to fixate the "heart" of the matter, other vital organs of the actual communication are removed. The spoken word is deemed primordial, and the written word derivative. In all cases, this derivation is a bleak representation of the original intent. So logocentrism fails to deliver. The spoken word is therefore stronger, but also transient.
∫ The spoken word is like eating the soup, it has tone and taste. But the activity is ephemeral. The written word is like reading the recipe, it is dry and tasteless. But it may help to make the soup again.
ζ.2. So to tackle the pretence of presence advanced by logocentrism, a thinking of absence is called in. This by considering how one cannot, compared with the spoken word, recuperate the autonomy or exteriority of the written word. Consider these two French words : "différence" and "différance". The first, written correctly, means "difference", while the second, written incorrectly with an "a", sounds, when spoken, exactly the same as the first, but in fact, does not exist and so means nothing ! So the difference between them is only revealed by the text, not by the spoken word. The spoken word is protected from these letter-based manipulations. The text has its own "power" of misrepresentation, i.e. advances meanings not available in the spoken words. Grammatology wants to address this issue, and deliver the tools to identify the false exits given in the text.
ζ.3. Metaphysical texts, in whatever style, are deceptive. But one cannot define their illusions from without, as it were observing them from an Archimedean vantage point. Nietzsche tried to do this by first identifying metaphysics as Platonism and then developing an alternative. But by identifying metaphysics as logocentrism, it becomes clear the battle with the illusion of presence in metaphysical texts has to happen in these texts themselves, not from a safe, matinal outside perspective, for such a proposed safe haven is itself logocentric. In other words, it does not exist.
ζ.4. Metaphysical systems tend to invoke words transcending the possibilities of conceptual thought. These transgressions are posited as "exits", while they are false doors. These doors exceed the limitations of the system and/or the borders of conceptuality, and these excesses are vain. Next to every text, a "margin" has to be drawn. In this cleared space, the false doors or "transcendent signifiers" are (a) marked by adding an "asterisk" (*) to them, and (b) identified as deceptive ways to provide the system with illusory openings allowing it to move out of itself and ground its text in something beyond the text, and this while there is only text. In the present critical transcendent metaphysics, the word "God" is replaced by "God*", thus indicating "God" has been deconstructed. In this way, no new term needs to be invented (leading to a mere cosmetic manipulation). The drawback is this : the deconstruction remains somewhat dependent on what is deconstructed.
∫ At some point, after tiresome journeys, every enduring traveller returns home. Then the road can be trodden again at a lighter pace.
Eventually, one no longer steps on, but one flies. Then the activity of travelling itself is walked through. No longer moving, all things come to the traveller.
η. It is crucial to criticize the way transcendent metaphysics seeks to ground any speculative endeavour in a reified ground outside the system of metaphysics. Distinguishing between immanent & transcendent identifies the major false door of metaphysics, namely introducing non-conceptuality by way of concepts (like "intellectual perception*" or "intuitive knowledge*"). But immanent metaphysics itself is not without logocentrism, i.e. the vain conviction object-dependent writing is able to be a philosophy of presence exceeding the fluidity of the spoken word. Among many other things, like metaphorical elucidation of denotations, an imaginal style will therefore also try to correct this pretence of the text by pointing to the vain constructs of denotation, promoting the autarchy of the text at the expense of the direct but ephemeral experience of the spoken word and introducing void words arising only as a result of logocentric manipulations of letters.
∫ Systems want to protect themselves from their own collapse. But they are not like houses firmly erected on solid ground, but like trees with their roots up in the sky. Seeking where we fail, we become truly strong. Trying to avoid being hurt, one invites putrid wounds.
θ. The two proposed styles complement each other. But neither of them holds the promise to eliminate the false doors exceeding the system and put down by the text fixating speculative activity. Insofar as this activity is oral, it cannot deceive in this way. Oral traditions have existed in the past and so one cannot reject this a priori. Maybe this is indeed the best way to preserve an authentic metaphysical intent. But in a literary culture, an imaginal style introduces metaphor to elucidate denotations but also (and foremost) tries to identify the presence suggested by the latter as a fata morgana. In the immanent approach, this happens by identifying the meaningless "letters" introduced by the text. Insofar as metaphysics as a whole is concerned, this takes place as a process of identifying the false exits leading to a positive, kataphatic transcendent metaphysics. Such a guard only allows for a non-affirmative negation, a "via negativa" leading to an apophatic view on the transcendent, one underlining the ineffable or un-saying nature of what lies beyond the realm of possible conceptual thought. If anything positive can be said about this beyond, then clearly such letters are, at best, sublime poetry.
∫ The method is not there to avoid problems, but to identify them. Problems are not identified to solve them, but to avoid them. Avoiding problems does not take them out, but gives us the material of humour. Being able to laugh with depth and extent feeds the intellect. Science and metaphysics are not serious things. Nor are they ridiculous. They preoccupy the humble mind dreaming grand stories. We cannot avoid ourselves.
Complementing an object-dependent style with an imaginal style serves the purpose of destroying the illusion strictly defined words are able to mimic the procedures of science. Although process metaphysics needs to be logically correct, avoiding contradictions, promoting completeness and attending to parsimony, it does so for the purpose of binding words in a way discrete, serial & analytical communication is made possible.
Constantly confronting and exchanging this analysis with the imaginal builds a higher-order semantic metalevel needed to convey totality and parallel communication fostering synthesis. But these stylistic protocols do not take away the deeper problem of logocentrism, the fact words only appear to convey the spoken word, the living and wealthy reality of direct human communication. In fact, as both styles make use of symbols, they betray truth by allowing false doors to suggest exits to an absolute representation. By showing where these false exits occur, the reader may draw a margin next to the text. The latter is not criticized by trying to remove these false doors, for this is vain. However, in this margin, the metaphysician explains how they "open" and "close" the text to something deemed "outside" it. Moreover, transcendent signifiers at work in the text are identified by adding an asterisk (*) next to the keyhole. These "procedures" are not invoked to "clear" the text from the problem of logocentrism, for this cannot be avoided. But by entering the lion's den and counting his teeth while he roars, we are better equipped to know how we indeed may be ripped apart by grand & majestic words.
In a metaphysical system, in particular a metaphysics of process, the crucial critical demarcation lies between speculative activity staying within the confines of conceptuality (in all its modes, i.e. proto-rational, empirico-formal, transcendental & creative) and cognitive activity exceeding these confines (as in non-conceptual, nondual cognition). Transcendent metaphysics is radically distinguished from immanent metaphysics, and this happens within the domain of metaphysics itself.
§ 4 Creative Unfoldment.
α. Historical perspectivism, developed by Nietzsche, promotes the view all ideations (both sensate and mental) take place from particular perspectives. The world is accessed through perception, sensation & reason, and this direct & indirect experience is possible only through one's individual perspective and interpretation. A perspective-free or an interpretation-free objectivity is rejected. Hence, many possible conceptual schemes, or perspectives, determine the judgment of truth or value and no way of seeing the world can be taken as absolutely "true". At the same time, it does not necessarily propose the validity of all perspectives.
∫ This inflation of the subject at the expense of the object leads to less subjective fulfilment & happiness. The more we are preoccupied with our own perspective, the less pliant the mind becomes. The less pliant the mind, the more dissatisfaction with conventional reality.
For historical perspectivism, rejecting objectivity, there are no objective evaluations transcending cultural formations or subjective designations. Experience, always originating in the apprehension of sensate or mental objects, is always particular. There can be no objective facts covering absolute reality, no knowledge of the ultimate nature of phenomena, no logical, scientific, ethical or aesthetic absolutes. The constant reassessment of rules in accord with the circumstances of individual perspectives is all that is left over. What we call "truth" is formalized as a whole shaped by integrating different vantage points. This is a conventional truth, a transient intersubjective consensus. From which perspective did historical perspectivism arise ?
If all experiences merely depend on individual perspectives, then perspectivism, as a view encompassing all perspectives, itself escapes the proposed relativity. As self-defeating as radical relativism, historical perspectivism is an exaggeration, an extreme unwarranted by the normative disciplines of transcendental logic, epistemology, ethics & aesthetics, discovering the principles, norms & maxims we must accept to be able to conceptualize cognition, truth, goodness and beauty (cf. Criticosynthesis, 2008, chapters 2, 3 & 5). By connecting factual uncertainty with normative philosophy, rejecting a set of principles, norms & maxims a priori, a major category mistake is made. While facts validating empirico-formal propositions of science are indeed Janus-faced, simultaneously showing theory-dependent & theory-independent facets, the transcendental meta-logic of thought, valid knowledge, good action and sublime art are universal, necessary and a priori. This is not the result of any description (of logic, epistemology, ethics or aesthetics), but merely the outcome of what is necessary to be able to think the possibility of these crucial domains of human intellectual effort.
∫ In all cases, we stay dependent on what is rejected. Either both terms of the equation are eliminated or both are allowed. Perspectivism is correct in identifying subjective vistas, but -in an inflated mode- cannot sustain its own intent without relying on some object. In the absurd extreme, this object is the absoluteness of perspectivism itself. This is merely a contradictio in actu exercito.
γ. While conventional truth can only be known in the context of subjective and intersubjective experiences, critical perspectivism challenges the claim there is no absolute truth. Firstly, within the domain of conventional knowledge, a transcendental set of conditions & rules of thought, cognition, conceptuality, truth, goodness and beauty pertain. These form the normative disciplines studied by normative philosophy. These conditions & rules are found or unearthed by reflecting on the conditions of these objects. What is thought ? What is a cognitive act ? What is a concept ? How to validate knowledge ? How to produce valid knowledge ? How to act for the good ? How to fashion beauty ? Secondly, valid knowledge can only be identified if absolute truth regulates this truth-seeking cognitive act in terms of correspondence & consensus, the two ideas regulating reality (experiment) & ideality (intersubjective argumentation) respectively. Moreover, it may be conjectured, the possibility of a direct experience of absolute reality depends on the extent to which individual perspectives are eliminated. As the concept always involves such a perspective, only conceptual thought is barred from this. Intuitive, nondual cognition is not rejected beforehand. It is non-conceptual and can be prepared by "purifying" the conceptual mind, i.e. thoroughly ending its addiction to the substantial instantiation (of object and/or subject of knowledge).
∫ Normative statements are true in a meta-conventional sense not escaping conventionalism. Valid empirico-formal statements are true in a conventional sense. Absolute truth, the emptiness of all phenomena, can be conceptually approached by way of ultimate analysis. The direct experience of this truth is possible but ineffable. Although object of un-saying, this nondual experience has nevertheless a direct impact on what is done, said and thought. It therefore modifies our experience of the conventional world.
Hence, it is not trivial or insignificant, quite on the contrary !
δ. Critical perspectivism accepts the theory-ladenness of observation, and so cherishes the critical distinction between perception & sensation (Criticosynthesis, 2008, chapter 4). Three fundamental perspectives are given clear borders, marked as "for me", "for us" and "as such". The first person perspective belongs to the intimacy of the observer. Nobody shares two identical reference-points. Position & momentum are unique for every point. So is the available information one has, as well as the clarity of one's conscious apprehensions (sentience). The third person perspective is the paradigmatic, shared, transient, conventional, intersubjective view of a community of sign-interpreters. It is valid (working), but mistaken. While efficient, it does misrepresent objects. Viewing them as independent and existing from their own side, it conceals their true, absolute nature or emptiness.
δ.1 This absolute truth is not some super-object grounding or underlying objects. It is the ultimate nature of each and every conventional object. Therefore one can only epistemically isolate emptiness, for in every concrete event, the absence of inherent substance is simultaneous (or united) with the interconnected & interdependent nature of all the elements constituting this actual event.
δ.2 The ongoing unity of emptiness (absence of essence) and interdependence is called "full-emptiness".
∫ In the measure a second person perspective opens up, fructifies and shares two first person perspectives, it extols the truth, goodness & beauty of personal love. Extremely rare, this love is often replaced by an act of mutual masturbation. When the cuddling is over, the other person is dropped like an empty can to be filled and consumed again and again.
ε. An idiom is the style of a particular writer, school or movement. Let critical perspectivism be the adopted idiom of this process metaphysics, encompassing and integrating the rather "technical" methods of object-dependent and imaginal writing. To succeed, the following distinctions and devices are introduced :
ε.1 Uttering "grand stories" is finished. This reveals the awareness no independent substance can be identified. Neither sensate nor mental objects provide us with an inherent own-nature, an essence independent from other objects, self-powered & autarchic. Process-based, phenomena cannot be grounded in a sufficient ground outside conceptual thought. Hence, the fake grandeur of previous ontological schemes is their pretence to conceptually represent the absolute nature of what is, the suchness of all possible phenomena.
ε.2 Accepting perspectives, we divide sensate and mental objects, and grasp the events happening on the sensitive areas of our senses as not identical with the thalamic projection on the neocortex. Although sensate objects have a perceptive base, each apprehended object is the product of perception and interpretation (or perspective). Facts are hybrids. On the one hand, they are theory-independent and, so we must think, correspond with absolute reality. On the other hand, they are theory-dependent, arising within the perspectives or theoretical connotations of an inter-subjective community of sign-interpreters. Because conceptual knowledge is validated by way of test & argument only, one cannot eliminate these signs (in the form of ideas, notions, opinions, hypotheses or theories) without invalidating epistemology.
But accepting the theory-ladenness of observation does not eliminate the fact that facts are always about something extra-mental. While keeping immanent metaphysics distant from transcendent speculations, an absolute perspective is not rejected. Against Plato, this is not a "substance of substances", but a property of every actual object. While impossible to cognize conceptually, this absolute nature of all phenomena is not a priori deemed outside the realm of the cognitive. This corrects classical criticism. Absolute truth can be part of a non-conceptual cognitive act. Here we take a step further than Kant. The two styles, providing stylistic dynamism to the idiom, bring in the variations necessary to keep the text open and unfolding. They do not interpenetrate, but form a counterpoint running through the text. To allow the reader to identify false doors, meaningless letters or collections of letters, the distinction between world-bound and world-transcending speculation is maintained throughout. Moreover, immanent metaphysics itself is scrutinized, dividing limit-concepts from actual infinities, regulation from constitution and architect from creator.
∫ Mistrusting the written word while composing a story or a system, accepting subjective bias from the first inklings of conceptual thought and keeping the efficient nature of conventionality intact, invites the reader to find his or her own path to absolute truth. This retains the Socratic intent.
ζ. Creative unfoldment gives way to unforeseen momentary interactions born out of ambiguity, redundancy and free associations running parallel with the object-dependent channel. Because of this structure, it does not involve automatic writing, but does make use of a surrealist psychic mechanism, a "waiting" birthing unexpected encounters bearing novelty. Metaphysics is therefore also a work of art.
∫ Waiting is the awareness of the conventional reality we find ourselves in hand in hand with the intervention of the most unlimited freedom ready to deeply move us and bring about novelty. Freedom is this total openness to what is possible, a negation and denial of what is thought impossible. Our limitations are to a very large extent self-imposed.
Critical perspectivism is the idiom of this metaphysics of process. It brings into view three fundamental perspectives : the immediate, the mediate and the absolute. The immediate context is what is given hic et nunc. Foremost a first person perspective, it directly demonstrates to us the singularity of the act of cognition. In conceptual thought, the concept, by symbolizing object/subject relationships, mediates between the knower and the known. This always involves an interpretation, a unique perspective. The mediate context has intersubjective concepts validated by consensus. When valid, this conventional knowledge works but is deceptive. While actually other-powered, objects are apprehended as self-powered, possessing a nature or essence of their own, separate & independent from other objects, while this can never be found to be the case. While it is true sensate objects are imputed on a perceptive base, they never appear without a large set of mental objects. The absolute perspective, ultimate nature of phenomena or absolute truth of the absolute Real-Ideal cannot be apprehended, but only conceptually approached by using a non-affirming negation. Not sheer nothingness nor a void, it is never some thing separate from actual objects.
Hence, to frame its totalizing view on the world, immanent metaphysics must never use actual infinities, but only limit-concepts. This perspectivistic idiom tries to bring into balance the counterpoint of object-dependent & imaginal styles. A few important themes stand out : a consistent sensitivity to integrating objective & subjective perspectives in all areas of speculative interest ; maintaining the difference between a regulative and a constitutive use of concepts ; a radical division between immanent & transcendent speculative activities and finally, providing speculative arguments backing the idea of a "Grand Architect of the Universe", a Corpus, Anima & Spiritus Mundi, or supermind, rather than arguing in favour of the arising of the world from the activity of an omnipotent "Creator God", a "King of Kings" able to will all of this "ex nihilo". Why not ? This "substance of substances" cannot be found !
§ 5 The Style of Process Metaphysics.
α. Natural languages resemble the objectifying convictions of their users. Nouns and the adjectives qualifying them refer to objects existing apart from other objects. Verbs and the adverbs qualifying them refer to actions between these independent, self-contained, self-powered, separate entities.
β. Awareness of full-emptiness, embracing the process-nature of all possible objects and their interdependence, understands nouns as momentary labels placed on the ongoing stream of actual occasions. These moments do not exist on their own, as it were constituting the stream, but are interconnected with all other moments of the stream. The unit of the stream is therefore the differential moment (dt), i.e. an infinitesimal interval, an instance, droplet or isthmus of actuality. The differential moment has architecture, a capacity to shape novelty in what, without this, would only be an efficient transmission of the probabilities of momentum & position (unqualified by architecture and sentience).
γ. Seeking a language of process is not like wanting to find a new kind of speech. Nor is it a meta-language counterpointing natural languages. Attending speech and being attentive to conceptual anchors leading to reification and enduring (eternalizing) architectures does not call for a special verbal or written discipline. It merely accompanies the intent of every speech-act. In texts therefore, a recurrent undermining of essentialism is at hand.
∫ In seeking to meet the king, process philosophers only experience his kingdom. They never meet him face to face. Relinquishing the seeking itself is the end of philosophy and the beginning of mysticism.
δ. The "I-am-telling-You"-approach of historical process metaphysics invites the reader to develop his or her own arguments. The basics are given, but the unfoldment of the text in the minds of the readers is left open. More than a passive recorder of what is meant, the auditorium is a co-creator of and a contributor to the creative unfoldment of the text. Hence, mere words exceed the text and bring about outspoken reactions. This coalescence may turn it into a cultural object : a tissue of interconnected seeds and their recurrent fruition. The main linguistic problem the text of this metaphysics of process encounters is the noun- and verb-structure of language. A noun tends to represent a fixed continuum, unchanging relative to the adjectives. In traditional formal logic, the proposition is divided into subject & predicate, into substance & accidents. The former is stable, the latter prone to change.
However, any label captures a moving, ever-changing phenomenon, or set of actual occasions. The object signified is not as "fixed" as the symbol signifying it. Language betrays substance-thinking. Not only is there a logocentric misrepresentation, but on top of that not a single word is adequate enough to convey process. Unfortunately, we have to row with what we have. Artificial languages may solve many problems, except that they are unintelligible to the large majority of human beings. The singular, momentary actual occasion x has differential extension. Every possible property, attribute or aspect characterizing it represents a process, not a substance or ¬ x. Thus, x is to be written as xΔ, with Δ representing, for all possible properties Σp of this instance x of the set of all actual occasions, the totality of its differential extensions. If time is the only property of x, then x.Δdt prevails.
Like the water of a river, the bases of perception and mental constructs constantly change. The labels catching these translate them into components of our natural languages. At best, namely as valid empirico-formal knowledge, they truly represent, for the time being, the dynamical features of the water as determined by the morphology of the riverbed, the volume of the water, its momentum, and obstacles in the river, etc. But these conventional truths are mistaken representations. Objects appear as separate and independent, while in truth they are interconnected and interdependent. There is no "water", but merely a label imputed on a perceptive base turned into a sensation. The vastness of this network makes it impossible to represent this in any known language. Even our most sophisticated words fail us dearly. And if we use artificial languages, the issue becomes elitist, like understanding the logic & mathematics of the Schrödinger equation.
Process metaphysics wants to understand the stream. It catches the swimmer in the act of swimming. Studying & reflecting, it tries to find out the style of the movement, the features of the ongoing dynamism or kinetography defining the architecture of this movement ... Process philosophy is therefore a kind of kinetography. And movement is more than just moving, sound is more than mere noise. What is added is a certain awesome dynamical symmetry.
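The differential notation introduced above can be rendered a little more explicitly. What follows is only one possible reading, a minimal sketch and not the author's own formalism ; the set-builder form of Δ and the symbol δp are assumptions added here for illustration :

x_{\Delta}\, , \qquad \Delta = \{\, \delta p \mid p \in \Sigma_p \,\}\, , \qquad \Sigma_p = \{\, t \,\} \;\Rightarrow\; x_{\Delta} = x_{dt}

Read : every property p in Σp of the occasion x contributes a differential extension δp rather than a fixed attribute ; when time is the only property left, the occasion reduces to the bare differential moment dt.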
B. Opposition, Reduction & Discordant Truce.
To apprehend in a comprehensive way how all things hang together, forming a Gestalt or mandala of possibilities and their relationships, and to try to affirm this in a coherent way, accommodating a reasonable view of the world, seeing it as a whole, satisfies the metaphysical instinct. But to generate such an articulate worldview is not without methodological problems. The most basic of these is not the coordination of all possible domains of knowledge necessary to make this integration happen (leading to a compromise between attention for parts and for the whole), but the choice of axioms, i.e. propositions not susceptible of proof or disproof, but assumed to be self-evident and so above all suspicion. Besides its Axiomatic Base, a metaphysical project, in every case Herculean, may choose one of the following methods :
1. comparative : first a series of basic concepts like "being", "life", "time", "consciousness", "group", "energy", etc. are chosen and, to arrive at a global view, the history of these compared. One replaces the mandala of one single domain of knowledge with the study of a single foundational concept of that domain. This approach, found in academic courses on metaphysics, is necessary but rather atomistic and so merely a preparation for more serious work ;
2. subjective : here, a single person gives way, possibly in an imaginal style, to what he or she knows, believes and/or feels, bringing a small area to a very high level of articulate consciousness. Although highly subjective, this will -given this person's information is not too restricted- serve to prepare a deeper and more extended view ;
3. synthetic : finally, one tries to erect a worldview using all relevant information available within a given time frame. Historical examples of this method are the corpora of Aristotle & Bacon. At present, the interval would obviously extend between the Age of Enlightenment and postmodernism. Such synthetic activity depends on the number of knowledge domains integrated, as well as on the validity of the assembled information. These synthetic efforts are never "finished", but merely represent the best possible global picture available. It needs to be corrected and completed by succeeding generations.
Grasping how both an extensive treatment of details and a comprehensive global construction will not eliminate all possible lack of clarity, one realizes a complete synthesis will not be arrived at. Some terms may remain foggy or incoherent. Of course, a sincere author tries to do away with these "inadequacies" as much as possible ... Nevertheless, the brontosauric aims of both analytical philosophy (focusing on details), as put into evidence in the Principia Mathematica, and grand speculative stories like Sein und Zeit are bracketed. Indeed, these efforts remained incomplete ... But, in a world knowing Gödel, is completeness wanted ?
Given the global dimensions of criticism today, the construction of such a synthetic metaphysical worldview is not a "modern" endeavour restricted to Western culture (as it obviously was in the past), but is necessarily multi-cultural and so hypermodern, incorporating the best of both Western & Eastern views. Because it no longer lingers to merely deconstruct modernism, relinquishes radical relativism and tries to erect an "open" grand story, it also supersedes postmodernism. The latter remained too destructive and sceptical and so basically infertile, barren. Indeed, scepticism and dogmatism are to be avoided. Only criticism, the articulation of clear distinctions, truly advances knowledge. As will become clear, radical postmodernism was also unable to reach its goal : to eliminate metaphysics ! Hail to the foremost spirit of the Western Renaissance and the highest honorary salute to the Masters of Wisdom of the East !
Let us point to six sources aiding the construction of a contemporary synthetic worldview embracing a critical metaphysics :
1. science : valid empirico-formal propositions point to facts all possible concerned sign-interpreters for the moment accept as true. They form the current paradigm, featuring a tenacious, regular knowledge-core, a co-relative field containing all domains of scientific knowledge and at its fringe a periphery touching semi-science, proto-science & metaphysics. At hand is the production of provisional, probable & coherent empirico-formal, scientific knowledge held to be true. The core sources of knowledge are experimentation & argumentation (cf. Criticosynthesis, 2008, chapter 2) ;
2. ethics : if science aims at knowledge and truth, ethics is primarily concerned with volition (the source of action) and the good.
Here we articulate judgments pertaining to the good (the just, fair & right), providing maxims for what must be done. The core sources of this good action we seek are objectively duty & calling and subjectively intent & conscience (cf. Criticosynthesis, 2008, chapter 3). Accommodating valid conventional knowledge or science, metaphysics is aware of the normative principles, norms & maxims of ethics. The reason is clear : as soon as anthropological issues arise, one cannot speculate without considering the rules covering good action ;
3. politics : ethical concerns lead to views on the organization of just, fair and right societies. Worldwide democracy is gaining ground, for the right of individuals to decide what happens to them in society is a logical extension of critical ethics. Because tyranny & dictatorships, whether religious, nationalistic, elective or otherwise, contradict the normative rules of ethics, they must eventually crumble. No metaphysics can be unaware of this. The core source of a good society is the educated choice of its peoples. Of course, democracy can be organized in many ways. In the West, a strong opposition is deemed necessary to fuel debate and to guarantee a variety of opinions circulate. This is a Greek streak. In the East, a common goal for the betterment of the majority is deemed more important than opposition, debate and regulated conflict often infringing respect (despite Lao-tze & Chuang-tze, the East favours Confucianism). Clearly, speculating on the actual meaning of human life cannot be done without incorporating politics ;
4. economy : ethics & politics need a system to organize the scarcity of material goods & services in a good way. Solving the energy-problem is the source of an adequate solution satisfying the needs of all sentient beings. Only green energy is a viable solution, for humanity is no longer allowed to plunder Nature without severe & very costly retributions. Technology links economy and science. Bridled by ethics and democracy, these then lead to an efficient & ecological (sustainable) economy. Speculating on how the interaction between science, ethics & politics can be used to satisfy needs by way of goods & services calls for economy and its laws ;
5. art : judgments pertaining to what we hope others may imitate, namely the beauty of excellent & exemplary states of matter, are objectively based on sensate & evocative aesthetic features and subjectively depend on one's aesthetic attitude (cf. Criticosynthesis, 2008, chapter 5). Its source is feeling and its aim the beautiful. A good, global democracy organizing an efficient economy, taking advantage of valid science is therefore not enough. Human beings seek to express their feelings in ways others like or dislike to imitate. A metaphysics has to incorporate the beautiful in terms of harmony, unity, symmetry & asymmetry. Not only because human beings love beauty, but also because (a) Nature is basically an architecture of symmetry and symmetry-breaks and (b) a hypermodern understanding of the Divine integrates concepts like harmony, unity and probabilities leading to these ;
6. religion : insofar as the Divine (cf. Criticosynthesis, 2008, chapter 7) is part of our metaphysical inquiries about the world, it cannot be more than a "spiritus mundi" remaining, as the Stoic "pneuma", within the order of the world, never transcending worldly possibilities. Then, the Divine does not transcend the world, but merely defines its outer limit.
Not explaining Nature from without, it helps to understand its conservation & design, leading to the concept of the "Architect of the World". To connect the order of the world with the idea of some thing outside the world, to not exclusively define immanence by way of limit-concepts but indeed envisage actual infinities, is to move our religious attitude outside Nature, beyond the world. Logic teaches such a transcendent signifier cannot be conceptualized. But can it be cognized ? The possibility of a "cognitio Dei experimentalis" has to be envisaged, but can never be "proven". Such mystical experience is ineffable, object of un-saying. Of course, an immanent conceptualization of the Divine is a powerful source of inspiration for metaphysics. Besides being the object of a personal experience, it can be backed by arguments (like the argument of conservation, the argument of design and the wager-argument). Transcendent metaphysics can be sublime poetry and sublime poetry may influence the conceptual mind.
These six aiding sources are used to develop an (immanent) metaphysics of process calling for (a) a comprehensive, totalizing metaphysical worldview incorporating both natural and social realities, and this in tune with (b) a logical study of language and science, making room for (c) the expression of direct experience and nondual, non-conceptual cognition. Of course, it will be impossible to cover all possible speculative objects. Not only because all known objects form a very vast body of knowledge, impossible to fully & completely synthesize by a single mind, but also because new objects are not to be excluded. A priori these cannot be covered. Also, it is inevitable some areas will receive more attention than others. Indeed, the metaphysics discussed in the present text will focus on being, cosmogenesis, biogenesis, sentience, anthropogenesis & the question of the Divine. It will not cover economy & politics. In general metaphysics, the idealized totality presents itself as an organic unity & pluralistic integration of process. An ontological scheme is developed & argued. In its application, as in specific metaphysics, phenomena relevant to the details of the totalized view are integrated.
§ 1 The Axiomatic Base.
α. The five postulates advanced by Russell in his Human Knowledge can be summarized as follows :
(1) the world is composed of more or less permanent things. A "thing" is a part staying invariant under certain operations and constant during a certain time with respect to certain properties ;
(2) causes and effects of events remain restricted to a certain part of the previous or succeeding total state ;
(3) causality diffuses continuously (with contiguous links), so there is no actio-in-distans ;
(4) if structurally similar complex events are ordered in the vicinity of a central event acting as a center, then they belong to the causal series pertaining to that center ;
(5) if A looks like B, and both were observed together, one may suppose that if A is again observed and B not, B will nevertheless happen.
The first postulate affirms things are more or less permanent. Russell was aware things change, but he refused to impute impermanence as one of the fundamental signs of existence. Permanency, invariance and constancy are given preference over impermanency, variability and change, or, more precisely, process-based creativity or novelty. Was this Russell's Platonic, Greek bias ? Process thinking does not posit permanency, but advances the cycle of arising, abiding & ceasing, i.e.
the dependent-arising ("pratîtya-samutpâda") of phenomena. The world is composed of emerging actual occurrences. These stay around for a while and then cease to exist as such, entering into the creative advance of succeeding actual occurrences and their togetherness as events, objects, entities, things ... The second postulate, besides limiting determinations and conditions to causality, restricts the spatiotemporal influence of causality. Of course, as chaos-theory proved, small causes may have large effects (cf. the Butterfly-effect). The third postulate conflicts with quantum mechanics, for its non-locality underlines the absence of Einstein-separated events in the realm of physical reality. The fourth postulate connects structural similarities with causality, while the fifth postulate turns the psychological mechanism of habituation into a source of knowledge. This can only be realized if A and B are indeed deemed permanent. Adding "more or less" does not change this. These postulates show what happens when the Axiomatic Base is too narrow, too much concerned with identifying identities and less with grasping how "things" emerge out of the sea of ongoing process.
∫ Russell considers realism, with its adjacent notions of permanency and a direct sensuous access to objects, as the hallmark of sanity. Is this not like confirming suffering ? Only those who know they possess nothing can never lose anything. The root cause of this dissatisfaction is superimposing static concepts on fundamentally transient phenomena. This essentialist fallacy, accepting objects must have some unchanging core, makes us cling to the same thing even if nothing stays identical.
β. The First Postulate, or basic conviction, is : there is a world, a Nature, a universe, or, in other words : all possible phenomena, all what actually is, exists. This aims at maximal totality, a system encompassing all possible systems. Our Second Postulate affirms the totality of the world has a world-ground. This is the sufficient ground of the world, i.e. no deeper level can be found. This ground is however not substantial or self-sufficient. The crucial difference here lies between a self-sufficient reified ground and a process-based, non-substantial sufficient ground. The Third Postulate defines the building-blocks of all what exists in the world as actual occasions.
∫ Thinking there is some better "world" outside the world makes us hope to attain it and fear not to. But accepting the existing world is all we have brings in the care for every moment of it.
γ. The world is the totality of all actual phenomena, the set of all concrete actual occasions, events, entities & things part of the world.
the world : concrete actual occasions, events, entities & things given by experience ;
the world-ground : sufficient ground, process-based, abstract, formative potentiality.
γ.1 As a set of formative elements, the world-ground is merely the sheer possibility of the world. The world-ground is only the possibility of the next moment of the world itself. World & world-ground define the world-system. If the ground of the world is merely the possibility of the world, then the actualities of the world are not determined by a substantial transcendent origin outside the world ; they are not otherworldly.
γ.2 There is no transcendent self-sufficient ground "outside" the world. The world-ground is a set of ontological principles concerning the primordial and the pre-existent.
In process thought, these are merely formative elements necessary to think the next moment of the actual world. They do not stand alone, neither do they act as "creative" principles bringing forth the world. They are a set of process-based roots drawn -by reversal- from the domains of actuality characterizing the world, namely matter, information and consciousness. This is the hermeneutical circularity necessary to eliminate any hint of an ontological divide between the world and its ground. Nevertheless, the world is finite & relative, the world-ground infinite & absolute.
∫ The world-ground is the servant of the world, it does not create it.
γ.3 Just imagine an absolute substance "outside" the world, a substantial, self-sufficient world-ground indeed causing the world to come into existence "ex nihilo". Then, the world would depend on something eternal existing from its own side. As in Platonism, the world would be divided into two ontological layers : a perfect world of static eternities and an imperfect world of relative becoming. This view is firmly rejected. In actuality, there is only the world and nothing else. Indeed, as ultimate logic shows, a substance cannot be found.
the world, critical view : concrete actuality made likely by the primordial sufficient ground of process ; traditional view : the mere modification of the primordial own-nature of all things ;
the world-ground, critical view : sufficient ground but process-based, the primordial possibility of change ; traditional view : self-sufficient and thus substantial, the primordial own-nature of all things.
γ.4 The "transcendent" speculations of critical metaphysics do not have an absolute self-sufficient, self-powered substance acting as world-ground "outside" the world, but an ultimate nature which is the property of every single actual instance of this totality. The "transcendence" posited is not beyond, above, outside or next to the world. The world-ground, being merely a formative abstract, has no spatiotemporal characteristics. Traditional reified (essentialist) transcendence is not at hand. The object of this transcendent metaphysics is not an eternal, self-sufficient "entity of entities" or "substance of substances". The transcendence aimed at is not a Greek God ! If a transcendent signifier can be identified (albeit by the thorough application of the non-affirmative negation), then this ultimate reality is not a substantial, self-sufficient world-transcendent ground. Absolute reality, as the sufficient ground of every possible phenomenon, is actualized by every phenomenon.
∫ Platonic ontology betrays the deep aristocratic discontent with change, impermanence and seemingly disconnected variety. Wherever it creeps in, cherishing others is eclipsed by the rubble of the few.
the world : finite, spatiotemporal, concrete, actual, relative, conventional ;
the world-ground : infinite, non-spatiotemporal, abstract, formative, absolute, ultimate.
δ. Traditional transcendent metaphysics affirms its object to exist as a substance with inherent properties and not part of the world. But how can this onto-theology be ? If this self-powered supreme & infinite object is conceptualized, then an affirmative negation is at hand, i.e. one positing something outside, above, beyond or next to the world. Such an object must be obvious, but cannot be found, is lacking. Moreover, how can the finite grasp the infinite ? If this is denied, then nondual, non-conceptual cognition of the mind of Clear Light* does not exist. If affirmed, then how to explain the tangential moment at which the world and its ground touch ?
∫ Onto-theology leads to the antics of Baron von Münchhausen.
In actuality, there is a single world. There is nothing "outside" or "next to" or "beyond" or "above" this world. The topological view is rejected. Although the world has a world-ground, the latter is not a substantial reality not part of the world, but a propensity acting as the sufficient ground of the world. This sufficient ground is the absolute absence of inherent existence. This lack of substance is the primordial condition for anything to happen. Platonism is firmly rejected. This does not lead to a rejection of a deconstructed transcendent in metaphysics, but to an elimination of its traditional object : a substantial actual infinity (the God* of process is an actual infinity, but not a substance). The transcendent nature of phenomenon A is not a different object B, but a different epistemic isolate of A. The "sacred" dimension of the world is found in each and every "profane" actual occasion, event, entity or object. This by ending all substantial instantiation, completely purifying the conceptual mind. The totality of the world is all what is actually happening. The world-ground, transcending this concreteness, is not a substantial actual infinity, but a process-based formative abstract. Transcendence and immanence are not in conflict, for every object manifests a conventional nature and an absolute nature, and this without the latter being ontologically different. Only God* is (again !) the Big Exception. S/He is a process-based actual infinity ! Being actual, God* (in immanence) is not merely potential, not merely formative and therefore not merely abstract. Being also abstract, God* (in transcendence) is not a concrete actuality of the world, not an actual occasion like any other, but an absolute & infinite singularity (cf. infra).
§ 2 Monism, Dualism or Pluralism.
α. The axiomatic choice for monism is in tune with the need for unity, simplicity, elegance and comprehensiveness. The monad does not move beyond itself, but privileges a single principle. In this monarchic continuum, alterity is not a different ontological entity, but a mere replication of the existing principle. This implies all things are interchangeable, for although ontological distinctness may be accepted, ontological differences nowhere occur.
∫ Can everything be explained by the privileged monad ? If so, then by Ockham's Razor we keep it simple. But if a single case can be found where the principle does not apply, then a fortiori monism is wrong.
β. Duality, with its powerful reflective capacities, introduces otherness as a new ontological entity. The power of duality is felt in logic and epistemology. Reflection on the structure of thought itself reveals a binary structure, erected on the principles of the transcendental logic of thought itself, namely the crucial & necessary divide between a transcendental subject and a transcendental object. The armed truce between object & subject can also be felt in epistemology, for to arrive at valid knowledge, both theory & experimentation are necessary and observation is not a passive, merely registering process.
∫ On the one hand, Descartes was correct in emphatically making the difference between the extended and the non-extended, between matter and mind. On the other hand, Cartesius was wrong to reify the difference, shaping an ontological dualism. Although both are distinct, they are not different. This crucial distinction leads back to monism.
γ. Non-monist logics introduce more than one fundamental ontological principle (a duality, triplicity, quaternio, etc.). Ontological dualism posits two independent substances : matter versus mind. By a trinity of factors, a logical closure ensues, for by adding a third principle, a tertium comparationis, duality is no longer "locked" in singular division, no longer the nature morte of the "dead bones" of formal logic (Hegel), but indeed becomes an "unlocked", plural process capable of thinking the manifold. In many ways, triadism is well equipped to deal with manifolds and their processes. Of course, this pluralism merely multiplies the difficulties, for if it is unclear how two substances may interact, then how to explain an ontological triad or anything beyond two ontological principles ?
∫ By the multiplication of principles one does not solve the problem of unity, quite on the contrary. Unity can only be systematized by the monad. Ontological elegance, coherence (orderly relation of parts) and simplicity are born out of the monad and nothing else.
δ. To couple monism with essentialism introduces a single ontological substance. The monad is then positioned as independent and self-powered and turned into a static self-sufficient ground existing from its own side, inherently. Such an approach has difficulty explaining the multiplicity, variety, differentiation, complexity, richness & interconnectedness of the manifold. Hence, the ongoing changes & novelty happening in Nature cannot be explained.
∫ In traditional theology, the Divine was turned into an idol in the image of the Egyptian, Persian and Greco-Roman rulers. This has sterilized religious thought. The challenge at hand is to accept a universal cognizing luminosity, a mind of Clear Light*, without the dogma of an aboriginal, unmoved, inherently existing transcendence, at whose fiat the world was created and whose will it must obey to avoid punishments. To remove such paternalistic substantialism from theology is the only way forward. God* is not above, beyond or next to the world, and therefore not apart from the world, but with the world.
ε. Thinking a single dynamic principle is the solution sought. Because of the monad, all phenomena fall under the same ontological principle, leading to the absence of ontological rifts. Avoiding essentialism brings in maximal interchangeability, knitting the various textures of existence together, thus interlacing the fabric of Nature, accommodating the organic, interdependent whole it obviously is.
∫ Dynamical monism may accept the presence of a supreme dancer, a sublime movement executed with Divine grace. Such perfect symmetry transformations, the "holomovement of holomovements" of God*, continuously have all other actual occasions as reference frame. The absolute is present as an ultimate differential in every point of Nature, in every concrete actual occasion of the world.
The ontological principle of the single world-system is a single principle or monad. Monism guarantees our understanding of the world does not assume ontological differences, while thinking the monad as process-bound ends the search for a static first principle, the assumption of a single, unchanging self-subsisting essence or core. The essentialist fallacy is avoided. Although axiomatic, logically monism has definite advantages over dualism & pluralism. In the latter cases, the interaction between the separate principles, defining an ontological difference, becomes problematic.
Although the possibility of distinct actual occasions, events, entities and objects is accepted, the notion they fundamentally represent different static pockets in the ontology of the world is rejected. All compounded things are impermanent, ongoingly arising, abiding & ceasing ; this not randomly, but swimmingly.
§ 3 Critical Epistemology.
α. Before Kant, in the pre-critical era of Western philosophy, being defined (conceptual) knowing. The question of the capacity of our human cognitive apparatus was answered by referring to ontology, introducing one, two or more ontological principles first. As a result, the natural limitations of cognitive activity were either exceeded (as in dogmatism) or narrowed down (as in scepticism).
∫ The drama of conceptual cognition is exaggeration, or moving to extremes, making something more noticeable than necessary. This makes one seek a hypokeimenon, an underlying substance or ultimate thing. This illusion is then carried through. A tragi-comedy.
β. The word "criticism" derives from the Greek "kritikós" or "able to discern". In turn, this leads to "krités", or a person who offers reasoned discernment. Criticism defines borders, frontiers & waymarks.
β.1 These demarcations do not negate anything (as does scepticism), nor do they affirm (as does dogmatism), but merely posit distinctions enabling us to remove entanglements and create open spaces or clearings offering breathing-spaces between otherwise ensnared objects (cf. Criticosynthesis, 2008, chapter 2). Because of these, differences & distinctions are possible.
β.2 Hence, this "Critique of a Metaphysics of Process" intends to discern the place of a critical metaphysics based not on substance but on process, not on fixating (the eternal or the void), but on thinking constant change and therefore impermanence. It identifies the field of metaphysics by outwardly demarcating it from science and inwardly defining its main targets, to wit totality and infinity, or, in other words, the conventional wholeness and the ultimate suchness of all possible phenomena, the world and the world-ground respectively.
∫ Executing their perfected styles of movement, ultimate dancers simultaneously portray the impermanence of constant, interdependent change, as well as the permanence in the pure kinetographic style of their holomovements.
γ. Critical epistemology answers the question of how conceptual knowledge and its advancement (production) are possible. It does not base this analysis on some previously given ontological ground. Neither reality (accessed through the senses) nor ideality (apprehended by the mind) is deemed a pre-cognitive thing triggering the possibility of knowledge. The latter is given by the groundless ground of knowledge itself, the Factum Rationis. Hence, the mode of analysis is transcendental ; its object is the structure of the cognitive apparatus, and its subject the reflective activity of the knower, bringing out the principles, norms & maxims of (valid) knowledge by merely disclosing the rules already given in every cognitive act, i.e. what is going on as soon as thought is afoot.
∫ The rational mind is not only formal, but also transcendental. Not only does it produce valid empirico-formal propositions, but also the structure of conditions (on the side of the knower) making it possible for such propositions to be produced.
Critical metaphysics differs from all previous speculative systems in its radical abandonment of substantial thinking, of grounding the mind a priori in anything except in the groundlessness of the mind itself.
δ. Critical epistemology is not a descriptive activity. Why not ? There is no vantage point outside knowledge empowering us to watch knowledge as such. The possibility of knowledge is apprehended while knowing. The principles, norms and maxims are unveiled in the cognitive act itself, and this by way of reflection. These rules cannot be negated without negating the negating activity itself. Doing so always entails a contradictio in actu exercito. Hence, epistemology is a normative discipline, and its rules are those being used by all possible thinkers of all times.
∫ Valid science must be about experimentation (testing) and dialogue (with dissensus, argumentation & consensus). Valid metaphysics must argue a totalizing worldview embracing the infinite.
ε. Positing an Archimedean point outside knowledge to ground knowledge is a pre-critical strategy ontologizing the possibility of (conceptual) knowledge. This presupposes the presence of an unchanging (fixating) ground outside knowledge. Per definition such a ground cannot be knowledge at all ! ε.1 Such an incorrect view calls for a dogmatic ontology, one placing "being" before "knowing". As such pre-critical thinking is merely an elimination of the necessary tension or concordia discors between the knower and the known, between the subject and the object of thought, either involving the affirmation of the real or of the ideal. In the former case, extra-mental reality is deemed a real self-sufficient ground for the possibility of knowledge. In the latter case, mentality itself is considered to be the underlying ideal self-sufficient ground. ε.2 Both ontological realism and ontological idealism generate inconsistent answers to the fundamental question of epistemology and so pervert a reasonable solution to the problem of conceptual knowledge and its validation & production.
∫ Totalizing knowledge and proposing a comprehensive worldview does entail a close interaction between critical metaphysics and science. This serves to fructify speculative activity with current views in physics, cosmology, biology, anthropology etc.
The possibility of conceptual knowledge and its validation involves critical epistemology, a normative discipline unearthing the rules of knowledge by way of a reflective, transcendental analysis staying within the borders of possible knowledge itself. To precede epistemology with ontology was the way of pre-critical thought, immunizing reality or ideality before analyzing the actual capacity of our cognitive apparatus. The capacity of conceptual thought is exceeded by the "urge for Being" found in substantialism and essentialism. Ontological realism posits a world existing independently from thought. But at no point can it impute anything without the knower. Ontological idealism affirms a "pure" mentality constituting the extra-mental. But knowledge is always about some thing. As criticism shows, both do not lead to an epistemology free from the scandals of contradictions & antinomies.
§ 4 Conflictual Model.
α. Because of the inflation of (mythical & theological) metaphysics in pre-modern times, modern philosophy has invoked a radical conflict between speculative activity per se and scientific thought. This created a division between scientific knowledge and non-scientific opinions.
While the latter are accepted as valid in their own private sphere, they play no role in the domain of science. The latter is a privileged language-game dealing with the objects of public life, while the former is merely of personal interest and so considered highly subjective & intimate.
∫ One cannot push away all possible speculative activity. Only invalid metaphysics must be abandoned, not metaphysics as such. The tensions between organized religions and science, between faith and valid knowledge, between "alternative" (peripheral) and paradigmatic interests, etc. reflect the conflict between paradigmatic and non-paradigmatic knowledge. Two important cultural objects arise : on the one hand, an "ideal" religious faith based on "grace" (the use of speculation without science) and, on the other hand, "real" scientific facts based on experiments (or science without metaphysics). Merely talking over each other's heads, they behave as deaf men arguing.
∫ History put aside, science cannot divorce metaphysics. They are a dual-union participating in the concordia discors of conceptual thinking as such.
γ. The conflictual model, feeding an insurmountable conflict between science (the valid empirico-formal propositions forming the paradigm) and pre-critical metaphysics, inhibits speculative activity. Indeed, trying to remove the so-called infection caused by this wrong kind of metaphysics paralyzes theoretical philosophy. Resignation is the outcome. In this way, giving up the attempt to articulate a totalizing view on the world, the treasure-house of cultural objects is impoverished. Reducing the heuristic impact of speculation in this way decreases the production of knowledge. It also plunges epistemology into darkness, for the unavoidable role of metaphysical background information in testing, theorizing and arguing is poignant.
∫ The Gestalt switch invoked by the "cube" of Wittgenstein (TLP 5.5423) shows attention defines observation.
Positing a conflict between science and metaphysics, the conflictual model divides the field of knowledge into two separate domains. Accepting the presence of metaphysics, it nevertheless promotes the path of science and relegates speculative interests to one's private life. This approach is also found in the modern division between religion and science. While the former is accepted as part of human cultures, the latter is deemed the sole guardian of objectivity. This results in a depreciation of theoretical philosophy. The conflictual model is rejected.
§ 5 Reductionist Model.
α. The reductionist goes a step further and tries to entirely ban metaphysics from the arena of thought. Only science has anything to say about the world and all non-scientific entries are worthless and so to be disposed of. There are no two distinct sources of truth, but only one, namely science. Logical positivism is a good example of this approach.
∫ Radicalizing against the flow of irrationalisms, one tends to overreact and propose a silly solution emitting an air of intelligence. Irrationalism cannot be avoided, only handled properly.
β. One may also try to cancel out metaphysics by pretending to have access to an absolute knowledge, one needing no further speculation. This Hegelian approach is a super-Platonic strategy. It fails because it presupposes a Herculean conceptual capacity conflicting with a critical reflection on the possibilities of conceptual knowledge.
As will become clear when analyzing the nondual mode of cognition, this works if and only if this absolute knowledge is absolutely ineffable, thus cancelling out its direct conceptual involvement. One may also invoke the supremacy of scientific knowledge, claiming it is totally free from any dealings with metaphysics. This also fails, because both theory & experiment always presuppose metaphysical background information.
∫ Why cut the branch upon which one sits and then be sorry one falls ?
γ. The escalation from conflict to reduction increases the intensity of the attack and decreases any possibility of a constructive return.
∫ Intelligence is able to change its mind.
The elimination of metaphysics is an attempt to exceed speculation or to laud the activity of scientific methodology, based on repeatable experiments & coherent argumentation. Inflating conceptual thought leads to meta-rationality at the expense of rationality, endorsing dogmatic conceptualizations and the occultation of the factual. Such a strategy breeds fundamentalism, irrationalism and the dictates of nonsense. While a direct experience of absolute truth is possible, it cannot be conceptualized. Privileging access to the objective enthrones science, giving it an inviolate authority leading to instrumentation and fragmentation. Both are rejected. At both ends, the reductionist model fails.
§ 6 Metaphysics & Criticism.
α. A frontal attack on metaphysics, trying to remove it from thought, only manifests how metaphysics remains present in the attacker. The "intentio recta" battling metaphysics in the open field unveils it as an "intentio obliqua" surreptitiously at work in the would-be eliminator. To argue an untestable totalizing view is therefore a "vis a tergo" one cannot escape.
∫ Just as the eye cannot see itself, science has a blind spot filled in by metaphysics. One tries to escape only to return. Let us accept this and move on.
β. Criticism does not try to animate a conflict with metaphysics, nor does it want to eliminate it. It accepts the abyss between science & metaphysics, but tries to bridge it. Metaphysics, the speculative integration of the totality of phenomena born out of infinity, is capable of being supported by arguments, but cannot be put to the test. The latter distinguishes it from scientific statements, both arguable and testable.
γ. Aware that metaphysics is part of every possible cognitive activity, criticism merely tries to find the rules covering its use. Negatively, it criticizes metaphysics as an ontology or archaeology of the normative disciplines. Epistemology, ethics and aesthetics must not be rooted in a self-sufficient ground outside knowledge, as it were preceding it. Doing so cripples the understanding of how knowledge and its production are possible. This leads to unworkable antinomies, as Kant showed. Positively, a rehabilitation of metaphysics is at hand. As a critical metaphysics, it acts as a heuristic or teleology of science, advancing speculative notions, concepts & systems. As an "ars inveniendi" it inspires science to move beyond the periphery of its current paradigm, but without ever asking it to relinquish its two wings : experiment & argument.
δ. The distinction to be drawn then is between pre-critical and critical metaphysics. The former is a mythical & theological speculative format, invoking being to explain knowing and multiplying entities. The latter is a totalizing picture of what exists as emerging out of infinity.
This conveys awareness of the limitations of knowledge, but is nevertheless able to serve as a heuristic of science. It tries to find a single founding principle and argue the totality of phenomena (the world) made possible by the set of infinite possibilities (the world-ground).
∫ Without a single unifying principle, the unity of the manifold cannot be thought.
ε. As a philosophical discipline in its own right, critical metaphysics encompasses both totality & infinity. Pre-critical, dogmatic, foundational metaphysics, positing a self-sufficient, substantial ground before an ultimate analysis of the possibilities of cognition and the cognizer, asks us to suspend understanding to the advantage of systems of substances a priori. This attempt reifies infinity, turning it into a "substance of substances". Not so here. Advancing arguments to understand the world comprehensively, critical (immanent) metaphysics asks about being, cosmos, life and sentience. ε.1 These answers help to clarify the fundamental questions posed by the human being : Who am I ? From where do I come ? Where am I going ? The first question being the foundation of the foundation : without knowing myself how to understand anything ? This "I" not only refers to a subjective sentient & luminously cognizing center of consciousness, but also to a unique objective point of observation. ε.2 Using the realized totality as stepping stone, critical metaphysics ventures to the periphery of paradigmatic conventionality and explores infinity. First as a series of asymptotic limit-concepts of the world, next as an actual infinity, infinitely totalized as an absolute consciousness (God*). This is not an ens transcending the totality of all actual phenomena, but a series of formative abstracts with a single exception, namely God*. Discordant with ultimate logic, the Pharaonic (Platonic) intent is rejected. The absolute exists conventionally ... God* is the awareness valorizing the possibilities of the materiality & creativity of the world-ground, and the sole abstract actual occasion moving with the world. God* functions as facilitator, as a bridge between what is possible and what is concrete, touching both.
Criticism accepts the importance of both immanent & transcendent metaphysics. The former is a heuristic of science and a totalizing worldview, answering fundamental questions by way of a single ontological principle. Using a penetrating analysis, the latter is posited through a special epistemic isolate, namely the realization that no inherently existing object can be found. This leads to a non-affirmative identification of suchness/thatness and conventionality. This transcendent aspect is not ontological (does not define another ontological level), but epistemological (implies a change of mind). But while absolute reality can be directly apprehended (known), this does not involve any conventional cognitive activity, and is therefore utterly non-conceptual. The realization of suchness/thatness transcends conventional conceptual reason. Meta-rationality transcends rationality without unveiling a transcendent signifier. Crucially pregnant in private life, this "seeing" of full-emptiness transforms the knower.
§ 7 Discordant Truce.
α. Transcendental logic dictates the principle of rational, conceptual thought. This may be called the concordia discors, the discordant concert or armed truce of the Factum Rationis. Duality is its architecture. α.1 On the one hand, all possible cogitation has contents, i.e.
an apprehended object of knowledge or the known, and on the other hand, cogitation implies a thinker, a subject of knowledge or a knower. Both, of radically distinct interests, are nevertheless necessary and always joined, forming a bound, entangled, bi-polar system. α.2 In epistemology, these two make out the simultaneity of two state-vectors : the vector of the subject of knowledge, its languages, theories and theoretical connotations and the vector of the object of knowledge, its physical apparatus, tenacity, inertia and, so must we think, factuality & actuality. A fact is the resultant vector-product.
∫ Knowledge must be about some thing extra-mental. Neither is it possible for knowledge not to be known by a knower.
β. The armed truce between subject and object of all possible thought and the groundless ground of all possible knowledge go hand in hand. Because knower and known form a pair and so cannot be reduced to one another, knowledge cannot be grounded in either objective or subjective conditions. β.1 Suppose we reduce the subject to the object, then the latter grounds the possibility of knowledge (as in ontological realism). Suppose we reduce the object to the subject, then the latter constitutes the possibility of knowledge (as in ontological idealism). β.2 Because we keep both sides of the transcendental spectrum at the same level, stressing their interdependence & co-relativity, knowledge can only be grounded in knowledge itself.
γ. Shocking confrontations between object and subject of knowledge are inevitable & necessary. They cannot be avoided because the tensions between knower and known are ongoing. They are necessary because without these confrontations experiments cannot be adjusted by theory and theory cannot be falsified by facts.
∫ In the research-cell, the interests of both experiment & discourse play out in the continuous process of communication between, on the one hand, everything dealing with the test apparatus and, on the other hand, all formal and informal theoretical processes (calling for opinions, conjectures, argumentations, refutations, hypotheses & theories).
δ. For more than two millennia, concept-realism was uncritically accepted. Concepts were deemed to be reliable copies of reality. δ.1 In Platonic concept-realism, one cannot avoid asking the question : How can another world be the truth of this world ? The ontological cleavage is unacceptable. On the other side, Peripatetic thought summons a psychological critique, for how can the human soul possibly know anything if not by virtue of this remarkable active intellect able to make abstractions on the basis of a manifold of independent observations ? δ.2 Both reductions are problematic. Because they try to escape, in vain, the Factum Rationis, and so represent two excesses denying the concordia discors of all possible conceptual thought, they form an aporia. Plato, being an idealist, lost grip on reality (positing an otherworldly substantial ideal). Aristotle, the realist, did not fully clarify the mind (positing an abstracting active intellect). Composite forms of both systems did not avoid the conflicts, although they concealed them better.
The crucial tension of thought was not solved by Greek concept-realism, crippling our understanding of formal rationality. This pollution endured until Kant broke the chains we had put on ourselves ...
∫ To attribute existence to concepts, be they related to sensate objects or to mental objects, is to step outside the duality of the object-subject relationship, claiming to oversee it and decide the ground of knowledge is either objective reality (the senses) or subjective ideality (the mind). Existence only instantiates a set of features attributed to a concept, but adds nothing of its own. Eliminate the properties contained in the set, and the object imputed vanishes.
ε. When reason, understood as a stream of conceptual, discursive cognitive acts, is critically watchful and so not deluded by ontological illusions, the ideas of reason (the "Real" & the "Ideal") are not turned into ontological hypostases, but operated as regulative principles holding a hypothetical (not an apodictic) claim. In that case, conceptuality, in tune with the concordia discors, entertains a conflictual interest willingly. On the one hand, it seeks unity in the variety of natural phenomena (the multiple is reduced to a type). On the other hand, in order to guarantee the growth of knowledge, reason wants heterogeneity (the unique, not repeatable & singular).
ζ. Besides the discordant truce between the objective and the subjective conditions of all possible knowledge, another concordia discors can be identified, namely between paradigmatic science & critical metaphysics. Science is the theoretically organized system of valid empirico-formal propositions or statements of fact.
η. Paradigmatic science has a hard core, a set of statements deemed valid conventional knowledge, held by all involved sign-interpreters as true. The objects involved put down a high probability of recurrence and hence the highest possible relative predictability. Around this tenaciously kept paradigmatic core, covering matters objective & intersubjective, the architecture of valid conventional science unfolds. At its periphery, we find the beginning of non-science or fringe science. Critical metaphysics proves not all non-science is nonsense. η.1 On the one hand, science is factual and theoretical and critical metaphysics is only theoretical, and this in a speculative way. On the other hand, all sensate objects coming into consciousness through the senses are already compounded objects, and so have already been subjected to interpretation. η.2 So every observation of fact cannot do without the observer and his or her mental frame or view. A critical minimum of metaphysics is needed.
θ. "Speculation" does not mean knowledge based on neither fact nor investigation. Here, "speculation" refers to (a) a theoretical philosophy of what is beyond the physical and (b) "speculum", the Latin for "mirror", from "specere", or "to look at, to view". The last points to the totalizing, universalizing, all-encompassing, globalizing streak of a sound, valid & critical metaphysics. It involves an intelligent worldview. Although critical metaphysics is not factual, its theoretical, intellectual structures are arguable. Validation is in line with the kind of language used to convey the metaphysical view at hand. The sheer power of the combination of its chosen logic & rhetoric certainly plays a role, but not more than compass & depth.
∫ Per definition, critical metaphysics is multi-cultural and global, with a comprehensive worldview integrating as many cultural objects, sensitivities and dadas as possible.
The logical conditions of thought making thinking possible convey the simultaneity of knower and known in every act of cognition, in every moment of actual knowing. Ontologies placing the knower before the known (idealisms) or those privileging the known (realisms) are pre-critical exercises in metaphysics. This needs to be identified and acknowledged. If not, ontological illusions come into play. Pre-critical thinking introduces a substance ; a self-contained, self-powered, absolutely independent, isolated and autarchic essence, a thing existing inherently, from its own side only. The extremes of the set of objects belonging to substantial thinking are the hypertrophy of the subject (the knower) and the inflation of the object (the known). The former is rooted in Platonism, the latter in the Peripatetics. Both have to be superseded. If not, metaphysics (in particular ontology) is an archaeology of knowledge, grounding the possibility of conceptual thought, knowledge and its advancement in something other than the mere conditions found, namely those normative principles, norms & maxims of possible cognitive thought we have been using all the time. These conditions are ontologized. This reification introduces a "real" or an "ideal" substance to ground the possibility of thought. Moreover, it brings about an illusion causing the perversity of reason. The two sides of the logical & epistemological conditions of conceptual thought are to remain simultaneous in every act of cognition. Subjective and objective conditions remain bound together but in a constant conflict of interest. Their discordant truce allows us to understand thought, knowledge & the production of valid knowledge without scandals. Likewise, the conflict between science & metaphysics can be mediated when the interdependence between both is realized. It is impossible to dissolve this dualism. Those who try do so at their own peril and at the loss of those accepting the tenets of either ontological realism (denying all metaphysics) or ontological idealism (eliminating the role of the factual). Critical metaphysics is based on valid science, but is not a science. It is a theoretical philosophy, a totalizing speculative view of the world.
§ 8 The Objectivity of Sensate Objects.
α. The subject of knowledge, the knower, is an object-possessor. A subject without an object is as nonexistent as a square circle. So the very act of cognition calls for duality.
∫ Although duality is not unity, dual-unions do occur.
β. Two and only two kinds of objects are possessed by the knower ; sensate and mental objects. Their difference is not ontological, for both are actual occasions, events or aggregates of events. β.1 These two objects do have distinct sources. Sensate objects depend on the correct functioning of the five sensoric systems, while mental objects depend on the field of consciousness and its center, the knower. β.2 At the bottom level of perception, sensate objects are extra-mental, but at the top level of sensation or conscious sentience these naked perceptions themselves, through neurophysiological code, interpretation & labelling, have become part of the mental world, although they remain objects with particular features derived from perception, distinct from objects imputed by the activity of the mind alone.
∫ To accept the senses is to accept we don't sense what they perceive. To accept the mind is to accept concepts do not perceive.
γ.
Sensate objects are those perceived by the senses, processed by the latter, transported to the thalamus and projected on the neocortex. The latter computes the identification & naming of these afferent impulses. This turns them into sensate objects, part of the field of consciousness of the knower, there to be observed. Hence, perception and sensation differ by their measure of interpretation. γ.1 Biologically & epistemologically, interpretation cannot be eliminated. While it can be reduced, sensate objects are always processed naked perceptive data. γ.2 Sensation and interpretation are simultaneous. The former arises as a result of stimuli influencing the sensitive surfaces of the five senses, the latter by the ongoing activity of mental processes with their particular objects and semiotics.
δ. Objectivity is guaranteed because sensate objects depend on what happens at the sensitive surfaces of the five senses. Epistemologically, we must accept facts also carry the input of the world "out there". Suppose we don't, then our knowledge is no longer knowledge about some thing, but merely an intra-mental (intersubjective) phenomenon. The concordia discors is left for a reduction of the object of experience to the subject of experience (as in ontological idealism), leading to a corrupt form of epistemology, misrepresenting the possibilities of knowledge, as well as its production.
∫ Neither reality nor ideality is a problem. Their reification always is.
ε. Objectivity is the tenacity with which sensate objects appear solitary, independent and separated from other objects. Physical reality defined by physics implies a something which is not thought, with relations not requiring that they be thought about. This homogeneous approach to Nature defines the latter as constituted by the extra-mental, by the theory-transcendent aspect of facts. In the physicalist & materialist view, sensate objects are "real" because they are independent and separate from Nature being thought about. Although objectivity is stubbornly unyielding, not a single permanent sensate object is found, for every object is fundamentally a differential moment and so in process rather than revealing ipseity, own being, own becoming, own-form, intrinsic nature or substance from its own side. Hence, objectivity is always relative to the interval at hand, and this unveils conscious choice. Also spatially, subjective expectations trigger new objective perspectives.
∫ Reality and ideality are not to be avoided, but merely act as the two regulative ideas bringing, by way of correspondence and by way of consensus respectively, the two methodological sides of the process of knowledge-production to a greater unity.
ζ. Without sensate objects, true conventional knowledge, i.e. the valid empirico-formal propositions of science, cannot be articulated or validated. They, so must we assume, provide the elements not dependent on mental objects. These are not substances, but the ongoing actuality of phenomena. But although facts appear as constituted of elements independent of the mind, they are at the same time constituted by theories depending on opinion, intersubjective testing, conjecture & argumentation, yes, even on implicit or explicit metaphysical background information. Sensate objects are therefore only seemingly stable and inherently self-identical.
Not to grasp this is to break away from the concordia discors and plunge reason into the scandal & folly of a "perversa ratio", like the one promoting, by lack of spirit, the "nature morte" of a dying universe without rebirth.
∫ When moving to the extreme of objectivity, subjectivity needs to be invoked !
η. Natural science's exclusive concern with thoughts about Nature, concepts not requiring that they be thought, is not an ontological choice (as in ontological realism found in materialism & physicalism), but an epistemic interest or methodological concern. Natural science wants to isolate the "hard facts" as clearly as possible, meaning independent of the necessity of their appearance in fields of consciousness in order for them to function. The conditions & determinations of a physical object call for the calculation of the probability of some sensate object to manifest properties. The latter reflect, so we are bound to assume, the interconnectedness of Nature stimulating the sensitive surfaces of the five senses. The recurrence of the form of definiteness at hand identifies the activity of Nature insofar as it is approached homogeneously.
θ. Because all phenomena are actual occasions, natural science is able to enlarge its perspective, and integrate other families of actual occasions like information and consciousness. Together with matter, these three represent the hardware, software and userware to be studied by natural science.
∫ Redefining "phenomenon" as "actual occasion" breaks away from the identification of the object of natural science with matter. Code, symbols and information (form), as well as autoregulation & conscious observation (contents), are part of this new science of Nature.
The objectivity of sensate objects is the foundation of our outer sense of reality. "Outer" in the sense of coming in through the senses, the gates informing us about what goes on "out there" (in terms of efficiency & finality). We must assume these stimuli to be independent of the operations on the side of the knower. If not, knowledge is no longer about some extra-mental thing. In that case we plunge epistemology into darkness and break away from the necessary discordant truce between objective and subjective conditions of knowledge, its production and advancement. However, the information gathered by the senses depends on the features of their sensitive surfaces, calling for different physical processes and their limitations. What is gathered on these surfaces is then translated and transported to the thalamus, coding it for reception by the neocortex. At the highest level, this information is presented to the human brain and its mind, imputing a sensate object. Objectivity refers to subjectivity.
§ 9 The Subjectivity of Mental Objects.
α. Sensate and mental objects are those possessed or apprehended by the mind, appearing in a field of consciousness with at its center the cognizer, the knower. Sensate objects only appear when the five senses convey their perceptive information correctly to the brain, offering it (by way of interaction) to the mind and its knower (cf. Criticosynthesis, 2008, chapter 4 & A Philosophy of the Mind and Its Brain, 2009). During sensoric deprivation ("pratyâhara"), only mental objects appear. One "observes" with the "inner sense" of consciousness itself. In normal waking, both objects constantly overlap and mingle. Only with analytical attention does one notice their distinctness.
β.
Subjectivity is guaranteed because sensate objects themselves can be constituted if and only if the data projected on the neocortex by the thalamus is interpreted. And the latter is not merely a computation of the neocortex, but also involves the impact of the mind independent of the brain, namely through interaction by way of (re)valuating the brain's propensity-fields. β.1 Hence, everything smelled, tasted, seen, heard or touched is already a "thing-for-us" (cf. Kant's "das Ding-für-uns") ; already an appearance of something, not the thing itself ! β.2 This Copernican Revolution reveals the core inspiration of the transcendental level of mind : to unveil, discover or reveal the mechanism of the mind enabling us to impute sensate & mental objects. The presence of these intra-mental operators makes it clear sensate objects merely appear as independent of the mind, and this in a very striking and convincing way. This is the quest leading to the sublime : how can something appear so strikingly different from what it actually is ?
∫ Illusion ("mâyâ") is a truth-concealer, for it poisons the mind into believing a rope is a snake. Like a hallucinogen, it makes us believe a one-winged bird truly flies.
γ. Subjectivity is the invisible, intangible, non-physical, nonspatial, temporal impact of valuation, reassessment, autopoiesis, auto-structuration and conscious (sentient) choice on the contents of consciousness, i.e. on both sensate and mental objects appearing in its field and apprehended by the subject of experience, the knower, and this at every differential moment of the actual stream of consciousness hic et nunc, i.e. in every instance of its temporal ongoingness and creative advance from its beginningless past to its endless future.
∫ The subject of experience, the knower, depends on the known. The known depends on the knower. In each actuality, both are simultaneous.
δ. Without mental objects, no thoughts, opinions, conjectures, hypotheses or theories could be articulated. Refuting them would also be impossible. This fact is as important as the tenacity of sensate objects, contributing to the grand spectacle of illusions offered by the conventional world and its suffering. δ.1 Both tyrannies work together to cage our understanding, forcing it to prostrate before the idol of the ideas of the Real or the Ideal. Although theories appear in an intersubjective context shared by all involved sign-interpreters, theoretical constructs, connotations, concepts and words do not replace naked perception, and the data derived from that. Idealism or the eternalism of the subject must be avoided as much as realism, the eternalism of the object. δ.2 Also the negation of anything objective and/or subjective having any functional relevance whatsoever (annihilationism) is to be rejected. Keeping the concordia discors ever alive is accepting both objective & subjective conditions of conceptual knowledge, giving both an equal share in the production of knowledge.
∫ Moving to the extreme of subjectivity ? Call in common sense !
The subjectivity of mental objects builds our inner sense of conscious existence, our ideality. "Inner" in the sense of also appearing without sensate objects (as in sensory deprivation, sensualizations, visualisations, imaginations & dreams) and "ideal" because a sentient apprehension is a non-physical presence and self-reflective. We must accept the mind to be independent of the sensate objects appearing to it. If not, the mind is devalued, and reduced to a real object.
At this point, a merely passive mind must ensue. But the mind is active and co-determines what is called fact ! It co-defines the real. But an ideal subjectivity does not constitute objectivity. Although theory co-determines observation, sensate objects are not solely defined by language-games. Subjectivity refers to objectivity.
§ 10 Direct & Indirect Experience.
α. Experience, from the Latin "experientia" or "knowledge gained by repeated trials", the compound of "ex-" or "out of" + "peritus" or "experienced, tested", is what is available through observation. This is apprehending, positing or imputing sensate and/or mental objects in the field of consciousness of the knower. Direct experience is the subjective apprehension of objects here & now. Indirect experience is intersubjective. How to conceptualize the experience of smelling a rose ?
β. It could be argued consciousness itself is a mental object. However, a "prise de conscience" is something different from merely being a receptive sentient field with an apprehending center, for it involves attention, intention, introspection, autoregulation, etc. These point to the special dynamic characteristics of sentience, related to the inner, cognizing luminosity of the mind itself. The knower is not a passive mental object, but the transcendental "I think" enabling the processes of the empirical ego to occur. It is of all times and necessarily at work in every cognitive act. The knower takes active part in every cognitive act.
∫ Empirical ego, transcendental ego, creative self and selfless nondual prehension are the levels of consciousness, its degrees of freedom.
γ. Direct experience is gained in the context of reality-for-me ; from the vantage point of the first person. Its objects appear when the knower is alone (the set of observers = 1). Shared by a potentially relevant but insignificant group of observers, direct experience may turn into second person knowledge (the set of observers = 2). Only when, after considerable experimentation, a significant number of involved sign-interpreters deem it so does direct experience become fact, i.e. a third person (the set of observers > 2) item of valid conventional knowledge. At the very moment a fact is produced, experience becomes indirect and therefore intersubjective.
δ. Indirect experience involves a sharing of objects by more than two observers. Relevant indirect experience is limited to a small group of observers, while significant indirect experience implies high probability objects, namely those highly recurrent. The latter call for a process of validation involving repeated testing & argued (re)modelling.
ε. Direct experience, the foundation of our personal sense of reality, remains, from moment to moment, the cornerstone of the existential situation we find ourselves in. This is the actual mindstream or stream of consciousness with its fleeting moments of sentient activities. This mindstream determines our happiness or misery. The ongoingness of our loneliness gives definiteness to this passage of time and the connections between events correlated with it. Although highly subjective, this intimate knowledge, this direct, living knowledge (cf. "Da'at") co-determines how we perceive the knower and the known. Inner direct experience, the cultivation of attention & autoregulation, and outer direct experience, the science and art of observation, are pivotal in living our inner life well.
∫ Because the smell of a rose cannot be put into words, the most important things in our lives never depend upon reason.
The more knowledge is public, the more it becomes indirect. The more knowledge is private, the more it is direct. Although direct knowledge is the root, it cannot serve to build intersubjective paradigms of valid conventional knowledge. This would lead to the domination of the view of a single (or a few) observers over all others. However, absolute truth is an object of direct knowledge. Intersubjective knowledge is always indirect, and belongs to the world of (valid) conventional information. Part of the sapient observer, it no longer merely belongs to his or her personal "Lebenswelt", but to the community of involved rational sign-interpreters. Of course, direct knowledge gathered by the single observer may influence the latter and thus assist in producing experiences shared in common. In this sense, such knowledge is, conventionally speaking, simultaneously highly relevant and highly insignificant (trivial). But because it is highly relevant, chances exist it leads to significant results. Moreover, only by way of direct knowledge does one realize the suchness/thatness of all possible phenomena.
C. Towards a Critical Metaphysics.
Western philosophy, starting in Ancient Egypt and Greece, cherished the quest for the unbounded, self-sufficient (substantial) ground of all phenomena, accepting a permanent core or foundation ad hoc. In Kemet, transcendence remained interdependent, and so a more henotheist, pan-en-theist view dominated. In Greco-Roman religion and philosophy transcendence was always linked to independence, of being Olympically isolated from the plebs below. This aristocratic elitism influenced the intellectuals Hellenizing Judeo-Christian theology. The absolute appeared as a Caesar, the sole "substance of substances", the One Alone, omnipotent & omniscient. This is like turning the ultimate into a creative principle, a self-powered "entity of entities". In modern philosophy, the tendency to reify served the quest for the "great formula" explaining the fundamental nature of phenomena. Either the Ideal or the Real were substantialized and used as two conflicting archaeologies of the possibility of knowledge. Their pre-critical kind of metaphysics dealt with the self-sufficient ground itself. Materialism, realism and empiricism battled with spiritualism, idealism and rationalism. The resulting chaos was outstanding. These systems were unable to explain the absolute nature of phenomena in terms of process, abolishing permanency. Thanks to the transcendental study of our cognitive faculty, we no longer ground knowledge outside knowledge, but in the groundless ground of the mind itself. Given process, we no longer accept substance, and so radically relinquish inherent existence from its own side, i.e. independent & separate substance. The first question of critical metaphysics, besides keeping the demarcation with science intact, is indeed : why something rather than nothing ? Hence, the study of existence is crucial. For finding no permanent object and concluding all phenomena are impermanent transforms critical metaphysics into a metaphysics of process.
§ 1 Transcendence & Interdependence in Ancient Egyptian Sapience.
α. The Ancient Egyptians deliver our earliest -though by no means primitive- written evidence of extensive speculative thinking (cf. The Pyramid Texts of Unas, 2006).
One may therefore characterize Egyptian thought as the beginning of speculation if not of philosophy. As far back as the third millennium BCE, they posed questions about being and nonbeing, the essence of time, the nature of the cosmos and man, the meaning of death, the foundation of human society and the legitimation of political power, etc.
∫ To read a ca. 4,300 year old canonical text without any transcription errors is indeed a rare feat.
β. Considering the three stages of cognition (cf. Criticosynthesis, 2008, chapter 6), two important demarcations need to be made. The first exists between ante-rationality and rationality. The second between rationality and meta-rationality. β.1 These distinctions point to the integration of decontextualization. Before the rational stage, conceptualization is either pre-rational or proto-rational, introducing unstable pre-concepts or contextualized concepts. With the advent of formal thought, and based on the gained capacity to make abstractions, theory appears. β.2 The second line is between, on the one hand, conceptuality, and, on the other hand, a-conceptuality & non-conceptuality. Mythical thought is a-conceptual. Nondual thought is non-conceptual. Between these, the concept is at hand in various forms : pre-concept, concrete concept, formal concept, transcendental concept & creative concept.

   Mode of thought       Stages of cognition    Level of concepts
1. mythical              ante-rational          a-conceptual
2. pre-rational                                 pre-concept
3. proto-rational                               concrete concept
4. formal                rational               abstract concept
5. critical                                     transcendental concept
6. creative              meta-rational          creative concept
7. nondual                                      non-conceptual

γ. In genetic epistemology, the cognitive process is analyzed in terms of coordination of movements, interiorization and permanency :
1. initiation : the formation of new cognitive forms is triggered by the repeated confrontation with an unexpected, novel action, a set of events radically undermining the tenacity with which acquired ideas shape a particular, limited view of the world. This is a secure & stable architecture of habits & expectations, dramatically challenged by this significant confrontation with the novel action - no conceptualization occurs, for objects and beings are equated with their motoric coordinations (as in mythical thought) ;
2. processing : action-reflection or the interiorization of this novel action by means of semiotical factors ; this is the first level of permanency fashioning pre-concepts having no decontextualized use (as in pre-rational thought) ;
3. expanding : anticipation & retro-action using these pre-concepts, valid insofar as they symbolize the original action, but always with reference to context : the concrete concept (as in proto-rational thought) ;
4. final level of permanency : formal concepts, valid independent of the original action & context, the formation of permanent cognitive (mental) operators : the abstract concept (as in formal thought).
δ. Ancient Egyptian cultural objects are always contextualized and rooted in mythical constructs and topical pre-concepts. This makes it more difficult to take note of the general features of the patchwork. But a number of strata do appear : Heliopolitan, Hermopolitan, Osirian, Memphite and Theban speculative thought can be textually identified (cf. Ancient Egyptian Wisdom Readings, 2008). These themes can be isolated because proto-rationality does have a closure, albeit one dependent on the context at hand.
The "Greek miracle", the introduction of abstraction or the decontextualized use of concepts, did not preclude pre-Greek civilizations, of which Ancient Egypt was the grandest, to produce great thinkers, writers, men of science & philosophers avant la lettre. Of all peoples of Antiquity, the Ancient Egyptians were the most literary, reproducing huge quantities of hieroglyphic texts in their tombs and on the walls of their temples. Comparatively, a huge number has been recovered, but we know the majority was lost ... ∫ Two central themes run through the whole of Dynastic Egypt : (a) the balancing role of the divine king (in particular in causing the Nile to flood in accordance with Maat) and (b) the unity-in-multiplicity of the natural & divine orders. ε.  In the henotheism of Ancient Egypt, the radical ontological difference between the creating and the created pertains. The former ("natura naturans"), consisted of the light-spirits of the gods and royal ancestors (the "akhu"), residing in the circumpolar stars, untouched by the movement of rising and setting, shining permanently from above. These spirits did interact with their creation ("natura naturata") by means of their "souls" ("bas") and "doubles" ("kas"). The Bas represented the dynamical, interconnective principle, ritually invited to descend and bless creation by way of the offerings made to their Kas. These resided on Earth in the cult-statue hidden away in the dark "naos" or "holy of holies" of the Egyptian temple. Only the king or his representatives could enter this sacred space and offer the world-order ("Maat"). This exclusivity was the result of the fact gods only communicate with gods and the king was the only "Akh" or divine spirit actually embodied on Earth. So he alone could make the connection. The transcendent nature of the deities, their remote presence as well as their exclusive mode of interaction, point to a monarchic mentality, to a radical transcendence, and, mutatis mutandis, the ontological difference between, on the one hand, the eternalized world of the deities and, on the other hand, the chaotic, everchanging world of man. A division to return in Platonism. ∫ The divine "akhu" are the differential states of light, derived from Atum at the first occasion ("zep tepy"), when he was one but also two and so forth (an Ennead). Monotheism, the affirmation of singularity, is not part of Kemet. The "King of Kings" is Hidden, One and Millions (cf. The Hymns to Amun, 2002). The reification of light led to the notion of a hidden, fundamental "essence", a substance existing from its own side. While Akhenaten tried to reify light, turning it into the sole "substance of substances", Egyptian culture at large rejected this singular deification. ζ. Besides positing this substantial division of Nature in two, the Ancient Egyptians stressed their mutual dependence (cf. The Maxims of Good Discourse or the Wisdom of Ptahhotep, 2002). The procedure of weighing became a metaphor of the shamanistic exchange between the transcendent and the human world. The pair of scales involved the natural, automatic functioning of a natural law, namely "Maat", the deity of righteousness & truth born with the universe ... 
"Said he (Anubis) that is in the tomb :  'Pay attention to the decision of truth Papyrus of Ani, Plate 3  (note how the plummet hangs as a heart on the Feather of Maat) In this short exhortation, a practical method of truth springs to the fore : concentration, observation, quantification (analysis, spatiotemporal flow, measurements) & recording (fixating). This with the purpose of rebalancing, reequilibrating & correcting concrete states of affairs, using the plumb-line of the various equilibria in which these actual aggregates of events are dynamically -scale-wise- involved, causing Maat to be done for them and their environments and the proper Ka, at peace with itself, to flow between all parts of creation. The "logic" behind this operation involves four rules : η. The later notions of "nous" and "logos", at one time supposed to have been introduced into Egypt from abroad at a much later date, were present at a very early period (cf. The Memphis Theology, 2001 & On the Shabaka Stone, 2001). Thus the Greek tradition of the origin of their philosophy in Egypt undoubtedly contains more truth than some Classical scholars would prefer (cf. Hermes the Egyptian, 2002). Before the earliest Greek philosophers were born, the practice of interpreting  the functions and relations of the Egyptian gods philosophically already begun in Egypt. Is it impossible the Greek practice of interpreting their own gods likewise received its first impulse from Egypt ? No. Shabaka Stone : LINE 53 (Memphis Theology - hieroglyphs in red are reconstructed) : "There comes into being in the mind. There comes into being by the tongue. (It is) as the image of Atum.  Ptah is the very great, who gives life to all the gods and their Kas. It all in this mind and by this tongue." eart" may be translated as "mind" & "tongue" as "speech". The "heart of Ptah" is not yet a Greek "nous" devoid of context, i.e. an abstract, rational idea. Only concrete concepts prevail and closure is proto-rational. Rather, the contents of mind (or the meaning of the words) simultaneously move Ptah's tongue, bringing out the words actually spoken. So besides transcendence and a very strong interdependence between Heaven and Earth, Egyptian sapience attributed creative power to the spoken word, in particular in terms of giving particular form to the objects of creation. Such a "great word" was an authority ("hu") by itself, commanding powers ("heka") not to be stopped. Full of understanding ("sia"), it could only be spoken by the divine king himself and his chosen high priests. For only the king was a "Son of Re", the sole divine "akh" or spirit on Earth and so the exclusive mediator between Egypt and the gods. θ. In Ancient Egyptian literature, lots of themes animating Greek philosophy since Pythagoras are on record. However, these speculations always reflect an ante-rational mode of cognition, characterized by the total absence of theory, abstraction and the use of decontextualized (formal) concepts. This makes understanding them so difficult, but also very rewarding. ∫ Not to study Ancient Egyptian literature & sapiental discourses, is to neglect the mother of Western philosophy. It is a mistake to think philosophy started with the Ancient Greeks. Although introducing formal thinking, the Greeks were inspired by the sapience they found in Egypt. Most themes found in Greek metaphysics were part of the ante-rational speculations of the thinkers of Kemet. 
In particular, their views on substance ("akh"), transcendence ("pet") and interdependence ("ba" & "ka") had a profound effect on Platonism and Greek science. This does not imply Greek philosophy was "out of Africa", but neither can one claim Hellenistic speculative thought was a spontaneous find of the Greeks. Inventing the syllogism, they often got the second premise from the vast Kemetic storehouse of observation.
§ 2 Greek Metaphysics : Transcendence & Independence.
α. Describing the particulars of the Ancient Greek mentality calls for more than youth, keen interest, opportunism, individualism & anthropocentrism. With the introduction of formal conceptual reason and its application to the major problems of philosophy (truth, goodness, beauty & the origin of the world, life and the human), a completely new kind of sapiential thinking was set afoot. Theory, linearization and abstraction were discovered and applied, giving birth to a new style. The Greek method of analysis & synthesis objectified the immediate in discursive terms, and this in a script symbolizing vowels. This Hellenizing leap forward was then offered (enforced) to the world. It was introduced as far as India, where it influenced mathematics, astrology and Buddhist iconography, but also heralded the Ptolemaic Period of Ancient Egypt (305 - 30 BCE), bringing about Hermetism (cf. Hermes the Egyptian, 2003), as well as an Egyptian (Judeo-Christian) Gnosticism.
β. As Indo-Europeans, the Ionian "sophoi" pioneering Greek philosophy had typical features of their own :
• individuality / authority : a single member of humanity was no longer ontologically inferior to the group, the tribe, the clan, the nome, etc. There must be good reasons to accept any authority ;
• exploring mentality : one must seek the final frontier, integrate what is the best and keep what is good ;
• unique dynamic script : by the introduction of vowels, the written and the spoken word mirrored each other more adequately ;
• linearizing, geometrizing method : phenomena obey mathematics, and a stable, linear description prevails ;
• anthropomorphic theology : the Supreme Beings are like a human family, with a paternalist figure-head. Henotheism ensues and prevails throughout Paganism. The Supreme is essentially One, but existentially Many.
γ. In their ante-rational speculations, the pre-Socratics sought the foundation or "arché" of the world. This final, self-sufficient, autarchic ground had to explain existence as well as the moral order. For Anaximander of Miletus (ca. 611 - 547 BCE), the cosmos developed out of the "apeiron", or "no bound", the boundless, infinite & indefinite. This is without distinguishable qualities. Later, Aristotle would add a few of his own : immortal, Divine and imperishable.
δ. The Archaic stratum of the "Greek Miracle" was layered. Steeped in Greek myth (Hesiod, Homer), pre-concepts emerged, rapidly followed by a series of concrete concepts playing a comprehensive, totalizing role in the explanation of what is at hand :
• Milesian "arché", "phusis" & "apeiron" : the elemental laws of the cosmos are rooted in substance, which is all ;
• Pythagorean "tetraktys" : the elemental cosmos is rooted in numbers which form man, gods & demons ;
• Heraclitean "psuche" & "logos" : a quasi-reflective self-consciousness, symbolical & psychological ;
• Parmenidean "aletheia" : the moment of truth is a decision away from opinion ("doxa") entering "being" ;
• Protagorean "anthropos" : man is the measure of all things and the relative reigns.
ε.
The Ionians, largely basing themselves on myth, introduced the first pre-concepts & concrete concepts. Thanks to Pythagoras (ca. 580 - ca. 500 BCE) and the Eleatics, the a priori dawned. A new mathematics, logic & rhetoric were born. The term "philosophy" was coined. ε.1 After the Persian Wars (449 BCE), starting with the Sophists, Greek philosophy displayed the rule of reason & the subsequent liberation of thought from all possible contexts. Abstraction could come into play. The subsequent relativism of the Sophists is rejected by Socrates (470 - 399 BCE). He sought universal, eternal truths by way of dialogue, criticizing established views and inviting his listeners to discover this truth by the use of their own minds. For Socrates, the practice of philosophy helps to understand the role of the human being as part of the "polis", a designated community. Plato, Xenophon & Aristophanes portray an original, unique, civilized but non-conformist individualist, ironical, brave, dispassionate and impossible to classify, belonging to no school. ε.2 This exceptional individual embodied the ideal of Greek philosophy :
• philosophy is a radical, uncompromising, authentic search for understanding, insight & wisdom ;
• philosophy is never an intellectual, optional "game", but demands the enthusiastic arousal of all faculties, addressing the "complete" human and giving birth to a practice of philosophy ;
• philosophy equals relative, conventional, approximate truth, but never absolute truth. Greek philosophy, accepting intuition, never eliminates reason.
ζ. The classical systems of Plato (428 - 347 BCE) & Aristotle (384 - 322 BCE) are a reply to the relativism of the Sophists. Protagorean relativism is rejected. To refute this scepticism, which accepts only "doxa", opinion, and not "aletheia", truth, Classical Greek philosophy opts for substantialism, accepting some permanent, static, unchanging, self-sufficient core to exist in changing things. This core is its substance. This essence ("eidos") or substance ("ousia") may be subjective or objective. ζ.1 As the ideal, it is a subject fundamentally unmodified by change. This higher subject is viewed as an inner, inherent ground acting, from its own side, as the common support of the successive inner states of mind. ζ.2 As the real, the substance of a thing is deemed the stuff out of which it consists, explaining the manifestation of the extra-mental, objective, kicking world "out there". Both need to be criticized.
η. In Western substantialism or essentialism, the substance of A is the permanent, unchanging, eternal, final, self-sufficient ground, foundation, core or essence of A, something existing from its own side, never as an attribute of or in relation with any other thing.
∫ If I think my wife (husband) is real, how to make love to her (him) ? If I think my wife (husband) is ideal, how to remain serious ?
θ. Both Plato & Aristotle are substantialists and concept-realists. They seek a self-sufficient ground and both root our concepts in an extra-mental reality outside knowledge. Plato cuts reality into two qualitatively different worlds. True knowledge is remembering the world of ideas. He roots it in the ideal. Aristotle divides the mind into two functionally different intellects. To draw out & abstract the common element, an "intellectus agens" is needed. He roots knowledge in the real.
ι. The foundationalism inherent in concept-realism seeks permanence but cannot find it.
It therefore ends the infinite regress ad hoc and posits something to be possessed by the subject. This is either an object of the mind (like an active intellect or an eternal soul) or an object of the extra-mental world (the permanent stuff of reality). Greek concept-realism seeks substance ("ousia") and substrate ("hypokeimenon"). This core is permanent, unchanging and exists from its own side.

κ. In concept-realism and foundationalism, truth is transcendent, independent and permanent (eternal). As soon as positing a fixed & static object is habitual, the mind arrests its primary critical task of continuously distinguishing between a substance-based and a process-based view on sensate and mental objects. By avoiding the first, an infinite potential and a dynamic transformation due to interdependence are made possible. For a mind entrapped by the illusions displayed by truth-concealers, ever-changing display and the rise of multiplicity are impossible.

∫ Positing substance splits the stream, while accepting process makes way for the flow.

The "Greek miracle" escaped the narrow confines of the contextual thinking characterizing the way of Antiquity. Formal, theoretical thought, individualism and a dialogal attitude would revolutionize speculation and give birth to philosophy as a rational way to understand the world as a whole. The pre-Socratics introduced fundamental concepts like "arché", "logos" & "aletheia". The Eleatics heralded the a priori, while Democritus focused on the a posteriori and the Sophists introduced the pragmatism & relativism of the "anthropos". The classical systems of Plato and Aristotle tried to bring these together within the framework of a generalizing concept-realism, grounding the truth of concepts in either a transcendent ideality or in the world of the senses respectively. Substantialism (essentialism) was deemed necessary to explain the possibility of knowledge. Ontology defined epistemology.

§ 3 Metaphysics in Monotheism and Modern Philosophy.

α. Greek rationalism and concept-realism influenced Egyptian thinking, triggering Hermetism (cf. Hermes the Egyptian, 2003). The "Greek miracle" had a decisive impact on Judaism, as it would have on Christianity and Islam. At first, Platonism and neo-Platonism prevailed, but then Aristotelism took over. Greek substantialism overcrowded monotheist theology. What started as an apology serving the spread of a Semitic religion of the desert among educated urban Greco-Romans, ended up as a fundamental theology saturated by the static framework of Classical Greek thought, inviting the identification of the Supreme Being of monotheism with the Platonic "substance of substances", the "summum bonum" or the Peripatetic "Prime Mover". Thus the "Living God" of revelation, in touch with His Creation, was transformed into a "Caesar", a Supreme Being, independent & self-sufficient, the One Alone, the Monad or "Absolute of absolutes" looking down on His creatures. Omniscient & omnipotent, this "God of Gods" could hardly entertain any interest in humanity, except in terms of a strict Greek analysis of the rules and obligations laid down in His scripture ...

∫ The "religions of the book" derived a view of the absolute using an exegesis based on Greek metaphysics. Such a view serves well all figures of authority trying to fool other men into "spiritual" servitude.

β. Greek logic forced the implicit theomonism of the Torah into a monotheism. This view was unable to embrace the bi-polar nature of the Deity.
Indeed, "YHVH ALHYM", revealed to Moses on the Horeb, was both singular ("YHVH") and plural ("ALHYM"). This "coincidentio oppositorum", also found in Ancient Egyptian sapience, in particular in the transcendent function of Pharaoh, a shamanist king of sorts, comes nearer to the direct experience of the Divine "face to Face". In essence, God is ineffable (singular), but existentially He is "Elohîm", and so plural. This is pan-en-theism and theomonism, but not strict monotheism. What happened ? The Greeks translated the Hebrew Name of God as "theos" (singular), eclipsing the Divine Presence ("shekinah") given with the plural "Elohîm". In this way, Judaism got Hellenized, triggering countless fringe counter-movements (cf. the Qumrân-people, the Zelotes, the Johannites, the Jesus-people etc.) γ. The issues related to the Persons of the Holy Trinity were tackled with the Greek triadic logic of "monos" (manation), "proodos" (emanation) & "epistrophe" (return). The stringent nature of both Greek formal logic and concept-realism caused the dogmatic breach between Orthodox and Roman Trinitarism, for Rome allowed the Spirit to also proceed from the Son (cf. the "filioque" - A Christian Orthodoxy and the Holy Spirit, 2004). The conceptual difficulties related to the nature of Jesus Christ, to be named "God" in the same measure as His Father -with whom He is consubstantial- but also fully & perfectly human, gave rise to a rich tapestry of conflicting views. These were elaborated using the full measure of the possibilities given by traditional formal logic. They caused many heresies (alternative choices) and doctrinal problems. These induced violence, both mental and physical. The direct experience of the "Living Christ" was thus replaced by a theological system, a monolith intended to rule the world, spiritually & worldly. The spiritual impetus of the Egyptian Hermits in Christ was soon replaced by monastic orders protected by walls and controlled by the Episcopate. δ. The Koran sees with two eyes. With the left, the remote, essential, substantial side of "Allâh" ("The God") is seen. This leads to the theology of the law. With the right, the near, actual presence of "Allâh" is experienced, bringing in rapture, beauty, poetry and all possible enjoyments. This leads to the theology of spiritual emancipation. After the death of Muhammad, the Prophet of Islam, peace be with him, Islam spread out and assimilated Greek science, logic & philosophy. In a few centuries, it had gotten Hellenized and even integrated the Hermetism of Harran ! The logic of remoteness, largely and subreptively based on the model of The One of Plotinus, gave weight to the idea of predestination. The overpowering, Imperial interpretation of the omniscient & omnipotent status of "Allâh", favoured by jurists, scholars & intellectuals alike, made any kind of intimate encounter with the Divine suspicious (as in Sufism). Due to the Greek "privatio", the world and man were deemed without self-sufficient substance, and hence, with the turn of Greek logic, The God is the only one truly in charge of Being, exception made for the Perfect Man, an embodiment of the 99 Names of "Allâh", personified in the person of Mohammed. Again the logic of Greek formalism had embanked a living stream, causing strong oppositions and theological schisms. Politically (cf. Sunna versus Shi'a), as well as hermeneutically (cf. Sharia versus Sufism), tensions were and are too often coupled with disrespect, brutality & violence. 
Because the power of formal logic is nowhere granted more privilege than in Islamic theology, the danger of becoming entrapped in radical dogmatism & fanaticism is outstanding.

∫ Monotheist theology remains a monolithic mastodon, displaying a gigantism slowly brought down by the discoveries of science and the ongoing creative advance of the human mind.

The impact of the monotheist concept of God on pre-critical metaphysics was unmistakable. In Scholasticism, philosophy merely served theology, so the link is obvious. However, it also took modern philosophy quite some time to abolish the substantial God.

• Humanism : (a) non-radical, nominalist denial of the conceptual realism of Scholasticism, (b) observation & experiment, (c) bricoleur-mentality deriving from the individual & (d) focus on solving practical problems ;
Although the authority of religious potentates in non-spiritual matters comes under fire, the existence of a Supreme Being is not denied, neither is substantialism, trying to identify a permanent "core" in phenomena. Disclosing the plan or mind of the omnipotent & omnipresent God was no small motivation.

• Rationalism of Nature : (a) mathematics as the final foundation of knowledge in a clear, distinct, continuous, certain & absolute self-sufficient ground, the final truth of which is to be intuitively grasped, (b) systematic observation & formalization of facts, (c) focused on a closed, knowledge-founding & dualistic worldview & anthropology ;
For Descartes, God guarantees truth. Classical rationalism maintains an abstract concept of the Supreme Being, still viewed as existing from its own side, inherently. Both the ego cogitans, the extended things and God are substances. Spinoza goes a step further, and defines God as the sole substance with an infinite number of attributes (of which humans only grasp two). Leibniz also maintains the God of substance, adding a theodicy stressing that He created the best possible world ...

• Empirism of Nature : (a) mathematical certainty & impressions are the foundation of knowledge (phenomenalism), (b) systematic observation & its formalization, (c) sceptic agnosticism undermining positive science, scholastic & natural metaphysics alike ;
Empirists like Locke and Hume no longer wish to incorporate non-sensate objects like God. They introduce the first step in an increasing cleavage between science and the God of revelation. No longer needing "this hypothesis" (Laplace), they restrict the domain of valid knowledge to statements incorporating empirical data. God slowly fades to the background and becomes a private matter.

• Criticism : (a) a systematic, transcendental investigation of the objective boundaries of "Verstand" (mind) and "Vernunft" (reason) operating in the subject of knowledge, (b) the elimination of the ideas of God, Soul & World as constitutive for knowledge, (c) Copernican Revolution : the human mind imposes its own a priori categories on Nature, (d) focused on a new, scientific (immanent) metaphysics not moving beyond the boundaries necessary for mind & reason to function properly ;
Even Kant, although ousting God from the field of pure reason, retained the concept of a substantial God, reintroduced as a postulate of practical reason ! This divide between theory & practice, as well as unsolved theoretical problems, triggered idealism. Misunderstanding Kant, German Idealists like Fichte, Schelling & Hegel bring about a reactionary revival of Divine substance. Introducing a dynamism, Hegel tries to incorporate the idea of historical change.
It eludes him that one cannot truly couple substantialism with dynamism, except by violating the "dead bones" of formal logic ... the result being a philosophy pitying facts.

• Technologicism : (a) metaphysics & theology are negative values, facts are positive (Comte) and science is able to work in a way not involving subjectivity at all (Weber), (b) sense-data are the foundation of knowledge & the emergent technological materialism (Russell), (c) a definite movement towards a new, secular scientific class fashioning their logical-positivist monolith dictating atheism (or agnosticism) and reductionist humanism ;
In the Romantic Age, while God is finally driven out from the edifice of Newtonian science, we witness an exoticism introducing Eastern ideas of the Divine and interest in fringe subjects (cf. psychic research, occultism, Egyptomania). In philosophy, a protest movement unfolds rejecting the supreme role of reason. Nietzsche correctly foresees the end of the Platonic God ... Technology based on Newtonian science is the new "Holy Grail".

• Institutionalism : (a) rapid, massive global divulgation of closed Carnot systems, (b) valid knowledge is tested & consensual : a scientific elitism with its given discourses, conventions, parlances and local logics - science as the servant of industry, the military, the "powers that be", (c) focused on the illusory metaphysics of permanent scientific discovery & material growth, (d) denial of the role of the First Person Perspective in science, (e) negation of the results of observational psychology and the cult of sense-data, instrumentalism & strategic communication ;
With materialism, physicalism, scientism, logical positivism, instrumentalism and the like, the subject of experience is reduced to the physical stuff of the brain, and belief in God has become silly & retarded. Metaphysics is no longer a valid subject of inquiry. This new paradigm conquers the Western world and is institutionalized. Opposing views are disposed of as useless and boycotted.

• Fossilism : (a) globalization of egology, destruction of ecosystems & social depravity, (b) rapid moral degeneration, corrupt status quo, the rise of counter & anti-cultural movements, the institutionalization of incompetence, massive global squandering of material resources, (c) virulent nihilism, death-art, the cult of irrationalism & the rise of posthumous modernism, technocratic science, militarism, narcissism & consumerism, (d) total & global misunderstanding of the needs of humanity & its survival, (e) collective forms of psychosis & hysteria, rise of violence, insecurity & ecological catastrophes, (f) fall of communism and the assimilation of socialism and ecology into late capitalism and its inherent Plutocracy : egoism "enlightened" by black light.

ε.1 Modernism collapsed as soon as the "grand tales" invented by reductionism, materialism & physicalism were found to be defunct. Postmodernism introduced a "margin", a sidetrack deconstructing these main ideologies. The days of foundationalism, so cherished by modernity, are finally over. Replacing the substantial God with a physical self-sufficient ground did not lead to the expected social, political & economic harmonization, quite the contrary. It destroyed the ecosystem and brought about a new world disorder. There is no "invisible hand" regulating late capitalism. Modernity ends in chaos & more suffering for all. Physical poverty and a psychological poverty-mentality abound. Who has not been driven into the cage of alienation ?
ε.2 Hypermodernism will truly begin when science realizes it has refuted too much. Relativity, quantum, chaos and string reintroduce the subject and a renewed interest in criticism brings about a "linguistic turn". Even the absolute is reintroduced, albeit not as the substantial God. The way Nature is questioned influences the way Nature responds. Metaphysics cannot be banished but needs to be redefined. The advent of the WWW ends the restriction of information, assisting the divulgation of a multi-cultural and global worldview. But this hypermodernism has not yet reached society at large. Forced by economical & ecological catastrophes, a global change and the advent of a New Renaissance may be expected. ζ. The death of the Greek version of the Divine is not the end of the concept of the absolute, nor of the possibility of an absolute process. God* as conceived here is no longer before or beyond the world, but with all entities. In this view, God* is both impersonal (transcendent, primordial) and personal (immanent). Sharing many features of the semantic field of the Supreme Being as found in the monotheisms expounding God, It differs radically on a few crucial points : this ultimate, merely sufficient ground, is not the "substance of substances", but a Divine Process. This is both impersonal and personal, both a He, a She and an It, merely by convention addressed as "He", "Him" and "His". Moreover, God* is not omnipotent, nor a Creator ! η. The God* of process is a non-spatiotemporal actual entity giving relevance to the realm of pure possibility in the becoming of the actual world. Both potential & actual, He (She, It) is the meeting ground of the actual world & pure possibilities. Together, the realm of abstract possibilities and the actual world constitute Nature. ∫ The "God of the Philosophers" is not a God of revelation, except if the latter is ongoing. S/He is not a God beyond Nature, but with Nature. Greek substantialism, being the intellectual framework of the educated elite, became part of the theologies of the three monotheisms. God was the "substance of substance", a Supreme Being who created the world "ex nihilo". Forced by the necessities of formal logic, these theologies incorporated the problems inherent in every formal system, namely completeness & consistency. Following Plato & Aristotle, the God of monotheism became a substantial God, self-referential & autarchic, an absolute existing inherently from its own side, isolated and independent from its own creation. Unchanging, such a God could not accommodate history and be "Emmanuel", a "God-with-us". This ultimate God-as-substance was believed to be the ontological "imperial" root of all possible existence. This God is distinct (another thing or "totaliter aliter") and radically different (made of other kind of "stuff" as the world). By identifying the mind of God with Plato's world of ideas, the Augustinian Platonists had to exchange Divine grace for enlightened, intuitive reason. Thomist Peripatetics introduced perception as a valid source of knowledge and so prepared the end of fundamental theology, the rational explanation of the "facts" of revelation. For Thomas Aquinas, the relation between God and the world is a "relatio rationis", not a real or mutual bond. This scholastic notion can be explained by taking the example of a subject apprehending an object. From the side of the object only a logical, rational relationship persists. The object is not affected by the subject apprehending it. 
From the side of the subject, however, a real relationship is at hand, for the subject is really affected by the perception of the object. According to Thomism, God is not affected by the world, and so God is like a super-object, not a subject. The world, however, is affected by this object-God. The relationship between God and the world can therefore not be reciprocal. If so, the world only contributes to the glory of God ("gloria externa Dei"). The finite is nothing more than a necessary "explicatio Dei". This is seen as the only way the world can contribute to God. This view contradicts the notion of the "Living God", a Deity that is part of history and so influenced by the free choice of sentient beings.

§ 4 The Fundamental Question : Being or Knowing ?

α. Driven by the archaic need to find a self-sufficient ground, an "arché", the Greeks first unveiled the foundation and then explained how knowledge is possible.

α.1 Plato posited a world of ideas, in all ways better than the world of becoming, and derived his epistemology of remembrance from the radical division ("chorismos") between both. The world of becoming, ever changing, multiple and diverse, could not serve as a self-sufficient ground for the absolute, unchanging truth he sought. Likewise, Aristotle, although rejecting the existence of two worlds, would first explain how all things depend on four causes (material, efficient, formal & final), and only then explain how the passive & active parts of the intellect functioned.

α.2 In Greek concept-realism, the theory on being (ontology) acted as an archaeology for the theory on knowledge (epistemology). One seeks a place ("epi") on which a subject might stand ("histâmi"). Being came before knowing.

β. In the Middle Ages, the apory between exaggerated realists ("reales") and nominalists ("nominales") implied a logico-linguistic transposition of the ontological apory between Plato and Aristotle. Indeed, the so-called "battle of universals" transposed Greek concept-realism, nurturing the division between "ante rem" and "in re". Universals are either before or in the realities of which they are abstractions. The extraordinary contribution of Abelard (1079 - 1142) to epistemology is his avoidance of the apory by introducing a third option :
1. universale ante rem : the universals exist before the realities they subsume : Platonism ;
2. universale in re : the universals only exist in the realities ("quidditas rei") of which they are abstractions : Aristotelism ;
3. universale post rem : universals are words, abstract universal concepts with a meaning, given to them by human convention, in which real similarities between particulars are expressed. The latter are not "essentia" and not "nihil", but "quasi res".
Abelard's solution calls for a crucial distinction : universals are not real, but they are nevertheless words (real sounds) with a significance referring to real similarities between real particulars. Because of their meaning, they are therefore more than "nothing". The foundation of his particular nominalism is "the real" as evidenced by similarities between objects, whereas the "reales" supposed an ante-rational symbiosis or a symbolical adualism between "verbum" & "res", between Platonic ideas and material objects ("methexis"). With his solution, Abelard paved the way for Hume (1711 - 1776), for this radical empirism accepted -without being able to explain them- similarities between sense-data.

∫ Too much empirism betrays the necessity of an active mind.
Too much mentalism hampers the sincerity with which we hold things to be true.

γ. With William of Ockham (1290 - 1350), concept-realism is finally relinquished. The foundational approach is also left behind. The nominal representations arrived at in real science are only terministic, i.e. probable. They concern individuals, never extra-mental "universals". Real science deals with true or false propositions referring to individual things. These empirical data are the primordial and exclusive means of establishing the existence of a thing. The concept ("terminus conceptus" or "intentio animæ") is a natural sign, the natural reaction to the stimuli of a direct empirical apprehension. Rational science is possible, but it only concerns terms, not universal substances. With Ockham, the first inkling of what would become the Copernican Revolution is felt : one first needs to study the possibilities of knowledge before making statements about being. Our cognitive apparatus (the tool) is to be thoroughly known before launching ontology. Knowing is before being.

∫ Franciscan logic is simple : less is more.

In an effort to lessen their feelings of insecurity and to explain how to control the multiple, non-linear, chaotic world (of becoming), Egyptian & Greek sages alike sought a "hypokeimenon", in other words, a singular super-thing underlying every possible other thing. Their minds favoured an isolated, self-dependent & unchanging absolute self-sufficient ground : solid, permanent & separate. They could not conceive the absolute as dynamical, interdependent & other-dependent. These philosophers placed being before anything else. These "saa", "sophoi" or sages considered it their privilege to make statements about this final self-sufficient ground. Different "schools" arose. In Egypt these remained contextualized (Memphis, Heliopolis, Hermopolis, Abydos, Thebes) and so dependent on the "Great House", the rule of the Solar king to guarantee unity (in plurality). In Greece, while the tenets of each school were reasonable, bringing them together merely generated contradictions, inviting the scorn of the sceptic and the sophist. This in turn motivated system builders like Plato, Aristotle & Plotinus. Although the ontological intent may be laudable, especially as a quest for a totalizing, comprehensive world view, metaphysics cannot but fail if one does not first consider the instrument with which this captivating pretence of total overview is made, namely the mind. Indeed, all statements about the absolute nature of phenomena always happen as part of the field of consciousness of those who make the claim. One cannot step outside the mind to witness how things are without it. The trick of Baron von Münchhausen, lifting himself up by pulling at his own hair, may delude those ill-prepared, but never fools attentive thinkers. Imputing being before knowing is the way of pre-critical philosophy. First studying the mind and then making generalizing statements about the common features of all possible phenomena is what is at hand.

§ 5 Precritical Metaphysics : Being before Knowing.

α. Remigius of Auxerre (ca. 841 - 908) taught every species to be a "partitio substantialis" of the genus. The species is also the substantial unity of many individuals. Thus, individuals only differ accidentally from one another. All beings are modifications of one Being. A new child is not a new substance, but a new property of the already existing substance called "humanity" (a flavour of monopsychism is felt).

β.
When being is posited before knowing, an implicate symbolical adualism between the name (or word) and its reality or "res" must be at hand. Words are not merely "flatus vocis", but refer to an extra-mental reality outside them, either as an idea or universal existing in another world or as a universal realized in individuals in this world. This semantic adualism baked into the fabric of reality backs the ontological "proof" of the existence of God (cf. Anselm of Canterbury - Criticosynthesis, 2008, chapter 7), but can also be found in Heidegger after "die Kehre". In strict empirism (cf. David Hume), this natural, pre-epistemic bond between words and what they represent is eliminated, but then it becomes unclear how one is able to identify any common ground between sense-data on the basis of sense-data alone, triggering scepticism. ∫ To think the transition between words and their reality as seamless is to accept the unchecked psychomorphic activity of ante-rationality. γ. Besides the dangers of dogmatism (identifying a common ground between words and reality ad hoc) and scepticism (denying any common ground, plunging epistemology in absolute relativism ad hoc), promoting being before knowing, and so positing entities before analyzing the possibilities of the cognitive tool attending them, a multiplication of self-sufficient grounds ensues. This absurdity, already apparent in classical Greek thought (namely the divide between Plato & Aristotle), returns in Scholasticism as the schism between "reales" and "nominales" and can also be found in the Modern Age as the conflict between empirism and rationalism. This was the scandal keeping Kant awake at night ... How to erect a stable foundation for philosophy ? One as solid and universal as Newton's law of gravitation ? This cannot be only a matter of choice (this-or-that conjectured self-sufficient ground), but must be based on a transcendental logic necessitating the principles of conceptual rationality itself. ∫ First we learn how to use a tool, then we use it. But we learn to use it by using it and so when using it we merely perfect our use of it. Not only does essentialist concept-realism conjure a world of static models tainted by apory, but it displays the naiveté of believing anything true can be acquired by stepping outside the limitations imposed, in the first place by cognition itself, but also by conceptual reason and its empirico-formal propositions and their paradigmatic synthesis. The conviction of having found an Archimedean stronghold blinds reason, no longer able to argue its over-the-top imputations, except ad hoc. Two extreme positions are therefore to be avoided : "being" is not to be identified with a world of ideas "in here", nor with the real world "out there". What being is in an absolute sense, as transcendent metaphysics clarifies, is no longer an object of conceptual reason. Relative being only affirms the existence of a set of features of actual occasions. Non-existence is the absence of such. Full-emptiness affirms every phenomenon, although other-dependent, lacks substantial existence of any kind. Empty of self, it is full of the others. § 6 Critical Metaphysics : Knowing before Being. α. With his "Copernican Revolution", Kant (1724 - 1804) completed the self-reflective movement initiated by Descartes, focusing on the subject of experience. 
Integrating the best of rationalism and empirism, he avoided the battle-field of the endless (metaphysical and ontological) controversies by (a) finding and (b) applying the conditions of all possible conceptual knowledge. β. An armed truce between object and subject is realized. Inspired by Newton (1642 - 1727) and turning against Hume, Kant deems synthetic propositions a priori possible (Hume only accepted direct synthetic propositions a posteriori). Contemporary criticism no longer goes as far as Kant. Empirico-formal statements are fallible and relative. γ. There is a categorial system producing scientific statements of fact. These are always valid and necessary (for Hume, scientific knowledge is not always valid and necessary). This system stipulates the conditions of valid knowledge and is therefore the transcendental foundation of all possible knowledge. δ. Unlike concept-realism (Platonic or Peripatetic) and nominalism (of Ockham or Hume), critical thought, inspired by Descartes, is rooted in the "I think", the transcendental condition of empirical self-consciousness without which nothing can be properly called "experience". This "I", the apex of the system of transcendental concepts, is "for all times" the idea of the connected of experiences. It is not a Cartesian substantial ego cogitans, nor a mere empirical datum, but the empty, formal condition accompanying every experience of the empirical ego. Kant calls it the transcendental (conditional) unity of all possible experience (or apperception) a priori. Like the transcendental system of which it is the formal head, it is, by necessity, shared by all those who cognize. ε. "What can I know ?" is the first question asked. Which conditions make knowledge possible ? To denote this special reflective activity a new word was coined, namely "transcendental". This meta-knowledge is not occupied with outer objects, but with our manner of knowing these objects, so far as this is meant to be possible a priori, i.e. always, everywhere and this necessarily so. Kant's aim is to prepare for a true, immanent metaphysics, different from the transcendent, dogmatic ontologisms of the past, turning thoughts into things. ζ. The transcendental system of the conditions of possible knowledge (or transcendental logic) is a hierarchy of concepts defining the objective & subjective ground of all possible knowledge, both in terms of the synthetic propositions a priori of object-knowledge (transcendental analytic covering understanding), as well as regarding the greatest possible expansion under the unity of reason. These transcendental concepts are not empirical, but are the product of the transcendental method, bringing to consciousness principles which cannot be denied because they are part of every denial. They are "pure" because they are empty of empirical data and stand on their own, while rooted in (or suspended on) the transcendental "I think" and its Factum Rationis. η. In classical (Kantian) criticism, reason, the higher faculty of knowledge, is only occupied with understanding, while the latter only processes the input from the senses. Reason is deemed not to have an intellect to inform it ! No faculty higher than reason ! In hypermodern criticism, meta-rationality, intuition or "intellectual perception" (in the form of nondual cognition) are not denied a priori. The creative objects of creative thought, as well as the ineffable dual-unions of nondual cognition are accepted and explained. 
This links epistemology with aesthetics and art as well as with mysticism, as clarified by transcendent metaphysics.

∫ Classical criticism still accepts substances. Hypermodern criticism banishes the archaeology of truth, beauty & goodness. Nowhere does it find self-powered entities ...

Criticism seeks a hierarchy of concepts defining the objective ground of all possible thought, knowledge, cogitation, apprehension, imputation, attribution & mental grasping ... This object is not found in a self-sufficient extra-mental ground, but in the conditions & determinations of the mind itself. Transcendental logic deals with the general dualistic set of principles ruling the possibility of cognition in all its modes. Epistemology explains how (valid) conceptual knowledge is possible and produced. The issue is reduced to conceptuality, present in only four out of the seven modes (cf. the proto-concept -or concrete concept-, formal concept, transcendental concept & creative concept). In the first two modes (mythical & pre-rational) the concept is not yet formed, while in the last (nonduality) it is radically transcended (left behind). Criticism integrates some of the findings of genetic epistemology and tries to bring out the full scale of stages & modes featuring knower, knowing & known. The development of this faculty of cognition runs in three fundamental stages, called "ante-rational", "rational" & "meta-rational". Seven modes of cognitive functioning ensue : mythical, pre-rational & proto-rational cognition (for ante-rationality), formal & transcendental cognition (for rationality), creative & nondual cognition (for meta-rationality). Only by thoroughly understanding the instrument, while it performs all possible cognitive activities, is it possible to assess the capacity of our tool, the mind. Both ante-rationality & meta-rationality are interesting stages. They are necessary in an extensive view. But classical criticism focused on the rational stage. Ante-rationality shows how pre-formal concepts operate. It makes us appreciate that these concrete concepts may offer a strong sense of closure and thus endure for millennia. Meta-rationality invites us to push the limits of reason, allowing it to access higher possibilities with increasing degrees of freedom. Investigating the extremes makes the Middle Way of reason a suitable path. Not eclipsing the poles allows reason to spread out its wings as far as possible.

D. Valid Science & Critical Metaphysics.

Together but apart, valid science and critical metaphysics complement each other. Without valid science, speculative efforts may wander away from conventional truth. The totalized views thus arrived at will not easily connect with the mainstream. How can they be helpful, assist, inspire or accommodate care for others ? Without critical metaphysics, science no longer strives to seek beyond its furthest horizon. It turns all of its attention to further analysis and lacks a general, synthetic view inviting new vistas & possibilities. Speculating while assuming radical nominalism purifies metaphysics from making absolute statements about phenomena. Making the case for universal interdependence and absence of substance, critical metaphysics invites the mind to purify concepts by means of concepts. This ultimate analysis is not the cause of nondual cognition, but merely eliminates the reifying tendency of the mind, positing substance or x. Once this tendency is completely eradicated (as in ¬ x) the mind is totally healed from any delusion.
It no longer sees a darkened rope as a snake, but things as they are. This suchness/thatness of phenomena is a datum of nondual cognition, although not in the sense of conceptual knowledge. The direct experience of this absolute reality is ineffable, but its impact on the mind is decisive and so highly relevant. A mind impressed by this will comprehend interconnectedness more clearly, with more width and depth. This indirect role of transcendent metaphysics on immanent speculations cannot be overestimated.

Because metaphysics is always present in the background of testing & argumentation, and so cannot be eliminated, a critical positioning is necessary. Metaphysics is not foundational. It does not act as an archaeology for correct logic, truth, beauty & goodness. Nor is its ontology more than a current & conventional picture of the world lasting as long as its constituting elements remain valid. Metaphysics is not testable. It is therefore not a science, but a heuristic instrument of science, a "speculum" reflecting a totalizing, comprehensive worldview or apprehension of the whole and an "ars inveniendi". Metaphysics is not irrational. Only two criteria for validity remain : correct logic and argumentation. Scrambled speculation and/or unarguable positions define invalid metaphysics. Which logic is invoked and how the principles and their developments are argued determines the weight of any metaphysics.

∫ As phenomena are complex, so is metaphysics. Mistrust easy answers even if sometimes they do exist !

§ 1 Transcendental Logic of Cognition.

α. There is no act of cognition without, on the one hand, a transcendental object, appearing as an object of knowledge (what ?), and, on the other hand, a transcendental subject or subject of knowledge (who ?), a member of a community of intersubjective sign-interpreters making use of language. Transcendental logic, ruling all possible cognition, captures the fact of reason as the necessary product of two irreducible & entangled sides :
• the transcendental subject : the thinker, the one thinking, a knower as it were possessing its object ;
• the transcendental object : what is thought and so placed before the subject as the known.
The transcendental subject is not a closed, Cartesian substance or ontological "ego cogitans". It is more than a mere Kantian unity of apperception accompanying all cogitations. Intersubjectivity, language-games, the use of signals, icons and symbols by persons and groups, enlarge the scope of the transcendental subject, appearing as a community of language users, both in terms of personal membership(s), and the actual discourses, as well as their historical tradition (the magister of past, successful games). Concrete discourses are regulated by absolute ideality (the Ideal). The transcendental object is not a construct of mind, a shadow or a reflection of merely ideal realities. Although the direct evidence of the senses is co-determined by the observer, objective knowledge is possible and backed, so must we think, by the extra-mental or absolute reality (the Real).

∫ Without a known, one cannot posit its knower. Without a knower, one cannot possess a known.

β. In conceptual cognition, the Factum Rationis must be a concordia discors, for both sides ought to be kept together but apart. They engage in communication to achieve a common goal : correct (conventional) thinking & knowing, i.e. the production of valid or justified empirico-formal propositions.

γ.
In mythical & nondual cognition, the duality identified by transcendental logic is present but special. While emphasizing the object, mythical cognition confounds object & subject. It is not reflexive, without a trace of self-reflection and usually focused on some grand object. At the other end of the spectrum of cognition, nondual thought is the pinnacle of reflectivity and reflexivity ! Being non-conceptual, it merely escapes the reification of the duality of the fact of reason, but not the duality itself. Suppose duality were superseded, i.e. turned into a higher unity. Then nondual cognition could not be an act of cognition, for nonduality would be monadic. Although dualistic, nonduality implies a dual-union.

∫ Duality does not pose problems, but its reification does. The absolute experience of duality is the experience of nonduality.

δ. Critical thought raises the reflective to the reflexive. Pre-rational concepts anticipate stabilization and become concrete concepts offering mental closure. The pre-concept, because of its semiotical entrenchment, introduces the first inkling of reflectivity. Pre-concepts & pre-relations are dependent on the variations existing between the relational characteristics of objects and cannot be reversed, making them rather impermanent and difficult to maintain. They stand between action-schema and concrete concept. With proto-rationality, the ante-rational phase of the genesis of the cognitive faculty finds closure, harmonizing mythical traditions, original concepts and their concrete realization in cultural objects. Formal thought liberates the self-reflective nature of cognition from the confines of contexts, introducing abstraction, theory and free dialogue. This reflective process is carried through and refined by transcendental cognitive activity, laying bare the principles, norms & maxims of conceptual reason. Producing (hyper)concepts, creative cognition brings the mind to its largest possible extension. It does not, however, observe its own natural state, but the own-self and its complex creative hyper-thoughts. Emptied by ultimate logic, the former creative mind may directly experience its own nature. The nature of mind is ultimate reflectivity & reflexivity. In other words, the absolute mind fully knowing the absolute object. The nature of mind is (a) self-clarity, (b) primordial absence of conceptualization, (c) spontaneous self-liberation of mental flux, (d) unobscured self-reflexion and (e) impartiality.

The transcendental system -laid bare by a reflection on the conditions of all possible cognition- is before the facts or a priori. It makes clear the intra-mental mechanism of the knowing mind, existing on the side of the transcendental subject only. Its principle is not monadic but dualistic. All cognitive acts involve a subject (the object-possessor) and an object (the subject-possessed). The role of the subject is crucial : it alone possesses the object, not vice versa. In mythical cognition and nondual cognition, non-conceptuality prevails, either by innate confusion or by thorough elimination (purification) respectively. In nondual cognition, object and subject form a dual-union, a special condition allowing a direct experience of full-emptiness, the unity of the absolute nature of all phenomena (emptiness) with the universal interdependence between all phenomena (fullness). The transcendental system works with principles. In all acts of cognition, the Who ? and the What ? are present.
The subject refers to a mental "prise de conscience" of an object leading up to opinion, idea, hypothesis and theory. Without a subject, how can anything be known ? The object is an extra-mental reality. It has a decisive role to play : to tell us which possibility eventuates. It informs us about the transition from mere potentiality (or possibility) to actual occasion (or concreteness). Is it this or that ? Without an object, the subject cannot be posited either.

§ 2 The Correct Logic of Scientific Discovery.

α. The propositions of science are (a) empirical, (b) formal and (c) in that order. They are empirical because without sensate objects the extra-mental cannot be established. They are formal because without mental objects nothing can be labelled. Empirico-formal statements are foremost empirical because science is fundamentally preoccupied with the theory-independent side of facts, i.e. thinking about Nature without thinking about thought. All possible scientific knowledge is in the form of empirico-formal propositions. These are terministic (probable) but in all cases fallible and thus relative conditions & determinations.

∫ Science is about knowledge merely working for a while.

β. Epistemology is a normative discipline, bringing out the principles, norms and maxims of valid conceptual knowledge. This empirico-formal information is true in the eyes of all involved sign-interpreters. The rules of valid conceptual cognition must be used in every correct cogitation producing valid conceptual knowledge. This is conventional knowledge, concealing the nature of phenomena, namely their lack of existence in and of themselves. Indeed, this worldly knowledge displays sensate objects as independent of and separated from the consciousness apprehending them.

γ. The principles of cognition in general are given by transcendental logic, the norms of conceptual cognition are defined by the theory of knowledge (and truth), hand in hand with the maxims given by the knowledge-factory of applied epistemology. This edifice is not a description arrived at by observing the faculty of cognition from a vantage point outside it. It is a normative set of rules found to apply when cognition cognizes the possibilities of cognition itself, i.e. tries to find the objective and subjective conditions accommodating conceptual reason in general and formal reason in particular.

∫ Epistemology is always about both object and subject. To eliminate either one is to plunge the theory of knowledge into ontological illusion, solidifying the conditions of knowledge in a pre-epistemological ground outside knowledge.

δ. Science deals with propositions arrived at by the joint efforts of experimentation & argumentation. The former is foremost an activity involving objects, the latter is foremost intersubjective. The discordant concord of both object and subject of conceptual knowledge is necessary. Each must defend its own interest while maintaining the discordant truce. This is essential to produce conceptual knowledge that works.

ε. Both object and subject constitute conceptual knowledge, and each -driven by opposing interests- aims differently. On the one hand, testing requires the monologue of Nature. Only extra-mental data are sought. Nature is given the opportunity to answer questions in a clear-cut way. Neither theory nor intersubjective cognitive activity acts as a source for this monologue. The issue is to know how Nature can be kicked and how Nature kicks back. On the other hand, argumentation is dialogal and so intersubjective.
The monologue with Nature is silenced and replaced by discursive activities, involving theory-formation, discussion, dissensus, argumentation, consensus and theory-transformation.

ζ. Testing and argumentation always imply a "ceteris paribus" clause and operate against the implicit or explicit background of untestable metaphysical speculations. Moreover, what science understands by "testing" is also undergoing change. Proposing hypotheses, conceiving tests to validate or refute these and carrying out controlled tests repeatedly is the simplistic approach to experimentation of physics-like science. In biology-like science this is not possible, for no two living things are exactly identical in the way two elementary particles are. Medical science cannot function without case studies, anecdotal reports, case histories etc. Insofar as science becomes biology-based, one may expect the emergence of consciousness-like science.

The principles of the transcendental system give rise to a theoretical inquiry into the conditions of conventional knowledge. The mere possibility of a subject of cognition (the transcendental subject) becomes a concrete subject of knowledge. Likewise, the transcendental object turns into an actual object of knowledge. Theoretical epistemology studies the possibility & validity of scientific knowledge. It restricts epistemology to the formal and transcendental modes of cognition, trying to organize the possibility & expansion of scientific knowledge in terms of principles and norms a priori. Its critical format avoids both a dogmatic ad hoc and a sceptic principiis obstat. Empirico-formal propositions are possible because facts possess, so are we obliged to think, extra-mental "stuff" informing us about absolute reality. Unfortunately, we only "catch" this with the "net" of our own theories, so lots of it slips through and is lost to us. Subject and object represent different interests but have to work together. Argumentation and testing are the tools with which scientific progress is made. Indeed, both intersubjective consensus and monologous correspondence offer the necessary criteria to validate empirico-formal propositions.

§ 3 The Validity of Scientific Knowledge.

α. By shaping the unconditionality of the object of knowledge, the idea "absolute reality" or "reality-as-such" (the Real) guarantees the unity & the expansion of the monologous and object-oriented side of conceptual knowledge. This monologue intends correspondence (with facts). By shaping the unconditionality of the intersubjectivity of knowledge, the idea "absolute ideality" or "ideality-as-such" (the Ideal) guarantees the unity & the expansion of the dialogal subject-oriented side of conceptual knowledge. This dialogue intends consensus (between all involved sign-interpreters). These ideas do not constitute conceptual knowledge, they regulate it to bring about its highest unity & expansion.

α.1 In every observation of fact, both regulations are simultaneously at work. The idea of the Real pushes the mind to pursue sensate adventures, whereas the idea of the Ideal brings its constructions into the larger arena of the community of interpreters of signals, icons & symbols, seeking consensus and approval. Experimentation concentrates on the real. Discourse, dissensus, argumentation and consensus on the ideal. Both intend to articulate empirico-formal propositions or statements of fact, in casu valid scientific knowledge.
α.2 Experimentation, regulated by the idea of the Real, involves a one-to-one relationship with the object of knowledge, to the maximal exclusion of intersubjective dialogue and discussion. It is always instrumental. This is the image of "objective" science as the monologue of Nature with herself. The highest art of dialogue, regulated by the idea of the Ideal, involves the constant dialogue with & between other subjects of knowledge about ideas, concepts, theoretical connotations, conjectures or theories. Here we have the image of a community of people seeking the truth about something and communicating to find out what it is (as in the more contemporary forms of idealism and social theory).

∫ Valid scientific knowledge is the set of well-formed propositions validated by argument & experiment.

β. The ideas of the Real and the Ideal converge towards an imaginal point, Real-Ideal or "focus imaginarius", which, as a postponed horizon, is a complete, universal consensus on the adequate correspondence between our theories and reality-as-such. The "adaequatio intellectus ad rem" or "veritas est adaequatio rei et intellectus" of the realist goes hand in hand with the "leges cogitandi sunt leges essendi" of the idealist. Both ideas are pushed beyond any possible limit (beyond "Diesseits"). Thus unconditional, they represent what transcends conceptuality ; a perfect unity between thought and fact, as it were the dwindling away of the theory-dependent facet of facts, a fiction brought about by the faculty of imagination. This heuristic fiction suggests a position "beyond the mirror surface", a "world behind" ("Jenseits") regulating the possibility of knowledge without grounding the latter or serving as its foundation. These two ideas voice the fundamental property of scientific thinking, namely the discordant truce expressed in the continuous & permanent confrontations between "testing" (object of knowledge) and "language" (subjects of knowledge).

∫ Not science, but transcendental philosophy unearths, posits & clarifies the rules of the game of true knowing.

γ. Depending on correspondence & consensus, the empirico-formal propositions of science are valid or invalid. Valid propositions always call for both correspondence (between theory and fact) and consensus (between all involved sign-interpreters). The actual paradigm of science consists of all valid empirico-formal propositions.

∫ After millennia of invalid science posing as absolute truth, the question of validity is crucial. We don't need another dogma or anti-dogma, but a critical demarcation between what works and what does not.

δ. On the side of the object of knowledge, we must think "reality-as-such" as knowable, but this without being conceptually equipped to know whether this is the case or not. Absolute reality, apprehended by nondual cognition as absolute truth, is ineffable. Facts are intra-linguistic and so co-determined by the notions, opinions, ideas, theoretical connotations, hypotheses & theories formulated by the subject of knowledge. But facts are also -so must we think- extra-linguistic, i.e. the messengers of this absolute Real. Given this ambiguity, facts do not a priori represent absolute reality, nor reality-for-me, but merely reality-for-us.

∫ The letters of confidence presented by facts may be fakes, and in an ultimate sense they are. Insofar as they conceal process, they merely appear as substances.

ε.
On the side of the subject of knowledge, we have to think the "consensus omnium" as possible (without us ever reaching it in an actual discourse). In this way ensues the distinction between (a) "my" consensus (with myself), (b) "our" consensus here & now (i.e. the agreement between the users of the same language) and (c) the "consensus omnium", the regulative idea on the side of the subject of knowledge. The theory-dependent facet of facts is intra-linguistic. It belongs to a theory to form a pattern of expectation. But this pattern, although always rooted in my subjectivity, is in truth always inter-subjective, belonging to a community of communicators using signs (signals, icons & symbols).

∫ The power of conviction portrayed by an actual consensus may be fallible, and in truth it is. Concealing change, conviction merely appears as solid, lasting & trustworthy.

ζ. In the present critical theory of truth, merely seeking to find reasons to accept a theory as if true or conventionally true, the following categories emerge :
• the subject of knowledge / the one thinking / intersubjective discourse or dialogue (discussion, dissensus, argumentation, consensus, etc.) / consensus omnium / the idea of the Ideal ;
• the object of knowledge / what is thought / monologous testing (experimental setup, tests, observations) / adaequatio intellectus ad rem / the idea of the Real.

It depends on transcendental philosophy to unearth the conditions of this capacity of the mind to apprehend the truth of the matter. This discipline does not belong to science, but exclusively to normative philosophy. A theory of truth explains how to validate empirico-formal propositions. Testing statements of fact, but observing no correspondence with the facts, means invalidating them. To discuss these propositions, but finding no consensus regarding them, invalidates them. Being insignificant (in the statistical sense), they cannot enter the current paradigm of science. The ability to validate propositions is crucial to science. In a realist account of knowledge, one grounding the possibility of knowledge in a pre-epistemological self-sufficient ground, in casu, the Real, validation is induction. Accumulating data is supposed to lead to generalizing statements of fact. Logically incorrect, induction fails to deliver. A finite set of observations cannot back a general statement. Dogmatic falsificationism avoids the problem of induction by turning things upside down. Instead of starting with a number of individual propositions from which to derive a general law, it begins with a universal statement and tries to find exceptions. If one is found, then the general statement is refuted or falsified. This variant of empirical justificationism accepts that a theory can never be completely justified. Hence, the more it is corroborated, i.e. withstands attempts at falsification, the more trustworthy the theory becomes. But the naturalistic, onto-epistemological presence of a given empirical ground is not yet left behind. A pre-epistemological moment is retained. Refined falsificationism no longer accepts any "ontological" confrontation between theory and fact. Coherence replaces correspondence. Only theories clash. This answers the question of how to translate sense-data into propositions. Only propositions clash. Critical theory adds the hybrid nature of facts.
Janus-faced, they are two-faceted : one, turned towards the subject of knowledge, is theory-dependent and intra-mental and the other, turned -so must we think- toward the reality of the object of knowledge, is theory-independent and extra-mental. We recognize something as "a fact" because our theories allow us to do so and because this fact acquired, so we believe, the guarantees of absolute reality (the Real). In an idealist account, an ideal self-sufficient ground is designated. Conforming facts to mentality, idealism is generated whereby the object is constituted by the subject, by the Ideal. But a general consensus does not deliver either, for facts must refer to extra-mental phenomena, and so in some way have to escape language. But both positions do contain a nugget of gold. Realism makes us understand that knowledge implies a known and that the latter cannot be exclusively mental. Idealism points to the intersubjective use of language, and the theory-dependence of observation. So in terms of validation, a reconciliation or coherence between a correspondence theory of truth and a consensus theory of truth accommodates the critical understanding of how knowledge is validated. This happens in a transcendental coherency theory of truth. On the side of correspondence, test & experiment stand out. They are deemed a monologue with Nature. Here is decided which possibility (out of an infinite set of possibilities) will actualize to become concrete. On the side of consensus, intersubjective dialogue is at hand. This dialogue involves all possible speech-acts done in the pursuit of knowledge and its advancement, but may be restricted to conjecture, disputation & (dis)agreement. The interaction between both interests assists their entanglement : disagreement invites new experiments and new experimental results bring about conceptual changes calling for a new discussion, etc. The ongoing nature of this process of communication intends to harmonize correspondence & consensus. Because neither direct, one-to-one observations of the Real, nor the realization of the Ideal by a concrete community of sign-interpreters, are accepted, criticism opts for a transcendental coherency theory of truth.

§ 4 Casus-Law : the Maxims of Knowledge Production.

α. What scientists have been doing (diachronic) and what they do today (synchronic) are not identical with the principles and norms of knowledge they are always using (and abusing).

β. Theoretical and applied epistemology are both necessary. The former may be compared to "statute-law", universal, imperative and normative, the latter to "casus-law", local, adaptive and descriptive. Contextualism and decontextualization are both necessary, and so an exclusive emphasis on either "what must" or "what is" is lacking. A pluralistic system of authority between them is needed.

γ. In applied epistemology, the context of knowledge-production is studied, and so the principles & norms of knowledge are not made explicit. In every concrete situation they are at work and are addressed. Theoretical epistemology is general & necessary (a priori), applied epistemology is contextual & situational (a posteriori). The latter affirms the laws of discovery to be context-specific and complex, far beyond the capacities of a simple formal logic.

∫ Good scientific research depends on many important factors outside the conditions of epistemology, like for example enough orgiastic sex.

δ. To ask : Quid juris ? is to foster the normative approach prevailing in theoretical epistemology.
As such, validity and justification of knowledge rule over how it is produced. In applied epistemology, the logic of discovery answers the question : Quid facti ? This is the difference between the idea of a stable and universal method and the constant revision of standards, procedures and criteria as one moves along and enters new research areas. Take note of the distinctions between the principles of transcendental logic, the norms of theoretical epistemology and the maxims of applied epistemology. These rules of transcendental philosophy aim at different objects, namely the general structure of cognition, the conditions of conceptual knowledge & its validation and the production of valid empirico-formal propositions. ε. The general structure of applied epistemology is derived from theoretical insights, for (a) the subject of knowledge and its norms becomes the subject of experience and (b) the object of knowledge and its norms, the object of experience. In physical science, the latter is given form as the rules of experimentation, whereas in the human sciences, the rules of participant observation are applied. Both make use of this-or-that actual discourse, with its non-strategic communication (dialogue, dissensus, argumentation, consensus). The maxims ruling an actual research-cell are not like binding norms. Deviation from them is possible, but not advisable. Violating a maxim does not entail the end of the possibility, unity & expansion of knowledge, but slows down its actual manufacture. The process of production is not halted (and replaced by an illusion), but efficiency drops. Hence, the research-cell at hand will suffer and become a less attractive competitor in the market of available facts. To produce knowledge, there are no absolute rules. Once its actual process of manufacture is set afoot, merely valid theories & rules-of-thumb prevail. The latter cover argumentation & experimentation. Nevertheless, these relative constructs are important and do result in scientific advancement. The opportunism and contextuality of some of these procedures underlines the conventional nature of scientific knowledge. Although science is the pinnacle of conventional knowledge (in the mode of formal reason), it ever remains a relative, fallible and incomplete attempt to understand Nature. To consider it as solid, unchangeable and secure is merely a waking dream. Conceptual reason is simply not equipped to grasp the absolute Real-Ideal. Science is terministic, probable, conventional. Only a humble & kind science is a true science. Conventional knowledge, whether valid (as in the case of science) or invalid, misrepresents the world. The maxims of knowledge-production call to methodologically accept realistic correspondence & idealistic consensus as if. The way of science must confirm the substantial nature of its objects, and in epistemology, at least as a method to expand knowledge. Physical objects must be independent & separate. Because of this reification baked into the methods of science, conventional knowledge is valid but mistaken. It is valid because (a) this knowledge truly functions in terms of material, informational & sentient features and (b) its objects exist in a relative, impermanent, interdependent way. It is mistaken because it reifies its objects into static entities, concealing their fundamental process-based nature. § 5 Metaphysical Background Information. α. The proto-rational, formal, transcendental (critical) & creative modes of cognition are conceptual. 
Together, they form the set of all possible conventional knowledge. Through proto-rationality, the ante-rational remains linked with rationality. In these early stages of the development of the mind and its cognitive apparatus, we call forth our unconscious metaphysical beliefs, dreams and expectations. ∫ Refusing pain (denial) and seeking pleasure (identification) are the earliest ego-building operations the mind becomes familiar with. β. The integrated presence of the ante-rational mind in the higher modes of conceptual cognition can be traced as generalizing beliefs and unarguable "feel right" frameworks. By countless ante-rational coordinations of movements, their introjection & stabilization as mythical, pre-rational and proto-rational mental operators, continuity, tenacity, substantiality, solidity, independence, separateness etc. are given form. ∫ To know what to refute is to be able to identify the truth more clearly. γ. The problem situations encountered in science are due to three factors, namely (a) inconsistency within a ruling theory, (b) discrepancy between theory and experiment and (c) the relation between theory and metaphysical background information. The latter not only determines what explanations we choose to attack, but also what kind of answers are fitting, deemed improvements of or advances on earlier answers. This background results from general views of the structure of the world. Themselves untestable, they are speculative anticipations of testable theories. ∫ How many times do we (dis)like something without good reason ? δ. Let us consider a few historical metaphysical backgrounds :
• Parmenides : the universe is deemed full, there is no void or empty space. Hence, motion is impossible. A genuine worldview must be rational and so devoid of contradictions ;
• Democritus : all change is nothing but movement of atoms in the void. The world is "full" and "empty" at the same time. There is no qualitative change possible, for only rearrangement pertains ;
• Pythagoras & Plato : for Pythagoras, the cosmos was arithmetized, a view abandoned with the discovery of irrational numbers. For Plato matter is formed space, geometry explains the universe ;
• Aristotle : space is matter and the dualism of matter and form (hylemorphism) takes over : the essence of a thing inheres in it and contains its potentialities ;
• Descartes : the essence or form of matter is its spatial extension. All physical theory is geometrical. Causation is push or action at vanishing distance. Qualities are quantities ;
• Newton : causation is by push and central attractive forces (gravity). Every change functionally depends on another change (cf. differentials). Action-at-a-distance seems the only way to explain the central forces ;
• Maxwell : not all forces are central, for changing fields of vectorial forces exist whose local changes are dependent upon local changes at vanishing distances. Matter may be explained as fields of forces or disturbances of these fields ;
• Einstein : matter is destructible and inter-convertible with radiation, i.e. field energy and thus with the geometrical properties of space. Geometrization of fields is at hand ;
• Bohr : before observation, the quantum phenomena exist in a paradoxical state of superposition ruled by quantum logic, and turn only into this particle or that wave after being observed.
Most of these vast generalizations are based upon "intuitive" ideas, some striking us now as outdated and mistaken. They presented a unifying picture of the world.
More of the nature of myths, they helped science to find its purposes & inspiration. ∫ Stylish caprice, sharp opportunism & clear improvisation instead of strict lawfulness are the ornaments of the rule of inventiveness. Identifying a substantial, self-sufficient ground or "hypokeimenon" may well be called the fundamental metaphysical dream of the West. Dreaming such a primary reality, existing alone without need of anything from outside, as it were "standing under" phenomena and determining "what they are", means allowing something uncaused or self-caused to possess attributes inhering in it without it inhering in anything else. Insofar as this self-sufficient ground is deemed primary, it is an ultimate substance and so indestructible. The failure of this metaphysical background is evident. Has a single primary substance been identified ? If so, where is it ? ∫ Looking for substance instead of process is our ground addiction. Like fish in water, we are blind to it. Critical epistemology accepts the task of critical metaphysics to inspire scientific research. It brings the implicit metaphysical background to the surface and identifies its frailties. Substance-metaphysics has to make way for process-metaphysics. The fundamental sufficient ground of all possible phenomena is not an independent, separate, uncaused or self-caused primary substance, featuring properties inhering in it, as it were in and for itself, from its own side, self-powered. ζ.1 In the categories of Aristotle, substance, quantity, quality & relation do exist inherently. Likewise, space, time, matter & momentum are deemed absolute. In essentialism or substance philosophy, discrete individuality & separateness are therefore linked. A fixity within a uniform nature defined unity of being. This allows for descriptive & classificatoric stability & passivity. ζ.2 The new metaphysical dream features interactive relatedness, wholeness, novelty, agency, productive drive, fluidity and evanescence. Instead of a unity of being defined under individualized specificity, there is unity of law under functional typology. Science and pre-critical metaphysics cannot be reconciled. Metaphysics no longer acts as a pre-epistemological archaeology & ontology, defining the self-sufficient ground and erecting an architecture upon it. This is precisely because it is untestable and so has no sensate objects to offer. Only the language-game of true knowing provides the rules of engagement, setting in motion the process of the manufacture of knowledge. This is conventional knowledge, valid insofar as theory & experiment dictate. Relative and fallible, it cannot be considered permanent or absolute. Moreover, it cherishes a substantialist streak, albeit methodologically. Metaphysics becomes "critical" when the demarcation with science is maintained : science is arguable & testable, metaphysics only arguable. Critical metaphysics is the heuristic of science, its "ars inveniendi". It stays close to science and its development, in particular to the fields of cosmology, physics, biology & anthropology. Moreover, despite the demarcation, it is impossible to eliminate generalizing ideas from the background of scientific research. Metaphysics is a "vis a tergo". Argumentation & experimentation are always conducted with the help of such metaphysical dreams. Insofar as they are implicit, they cannot be manipulated to help current research and so may eventually hinder it. This has to be amended.
Bringing these to the surface is understanding the metaphysics internally driving scientific work, the beliefs carrying the work of reason. Changing the background to accommodate research is therefore primordial. In view of the long essentialist tradition, one cannot stress enough the importance of process, change, transformation and creative advance. Logically, this is the transition from substantialism, which affirms the persistent existence of x (the existentializing quantor "there is an x", ∃x, here taken to confirm the permanent existence of x), to process, and this by negating that affirmation. The dream of finding this indestructible, unchanging substance is over. The hypnotic spell of Plato's dreamwork is broken. Socrates did not refute enough ! The thinkers of Antiquity, the Scholastics & the Modernists posit substance. Ultimate analysis awakens one to the realization all phenomena are impermanent and devoid of own-power. They are other-powered. If postmodernism was the unavoidable deconstruction of the Modernist dream, then hypermodernism is the affirmation of process and its architectures. May this be the beginning of the final movement in the long march of emancipation of humanity, the emergence of a global consciousness and its subsequent cultural objects. This New Renaissance is not a return to late Antiquity and its Platonism, but an advancement reconciling process and change with interdependence, and the need for a global organization of the affairs of Earth in all crucial issues. E. Thinking Metaphysical Advancement. Because polemics are not the issue here, this paragraph is kept to the bare minimum. Suppose we think metaphysics or philosophy in general is still in the business of discovering a self-sufficient, substantial ground. Given modern science, in particular physics, has taken over such fundamental preoccupations, one may decide metaphysics no longer has any role to play and so simply oust it. Philosophy itself, i.e. this irresistible & definitive longing for wisdom, may be crippled and turned into another ivory tower of academic pursuit, merely offering the logistics. One may wonder in what measure such an instrumental, uncritical and non-innovative approach dims the original beginner's attitude called for in a serious, prolonged and free engagement in this science & art of the love of wisdom. For those denying the very need of metaphysics, any argument backing the notion of advancement in metaphysics must involve a contradictio in terminis. But here ontology is not the aim. A comprehensive, coherent & scientific worldview is. Critical metaphysics is aware of its initial border with science. It only leaves this behind for the final border of transcendent metaphysics, but never without identifying the transcendent signifiers of un-saying, especially within the immanent order of actual occasions. In the present exercise, mindful of Ockham's Razor, the principle of parsimony, these must be kept to the bare minimum : (a) emptiness inseparable from (b) the Clear Light* of the mind, the seed of awakening ("bodhi"), the potential of enlightenment, forming together full-emptiness. To identify metaphysical advancement, one has to know what metaphysics is all about. Inspired by science and on the basis of a theory of existence (ontology), immanent metaphysics argues a totalizing, comprehensive framework speculating about being, the origin of the universe, life and the human. Its focus is on actual occasions and their concrete form.
Transcendent metaphysics probes into absolute, infinite existence, into pure formless possibilities, the "pure ground" of lacking ground. This is a sufficient ground, but not a self-sufficient ground. Insofar as critical metaphysics goes, speculative advancements are possible. But finding the proper conditions or rules of comparison is crucial. The criteria of instrumental action or experimentation should not be applied here. For in this case, there is no increase in "factuality", but in "mentality". Those who confuse both assume (or force) philosophy to be the copy-cat of experimental science or mathematics. Instead, our criteriology identifies advance by using the logic of communication, a hermeneutics of logical & semantic moments of progress. ∫ Establishing a right view or vantage point is the beginning of thought. § 1 The Mistake of Absolute Relativism. α. In brief, the present metaphysics of process does not endorse absolute relativism. While the intelligently organized interdependence between all possible phenomena is accepted, some special & exceptional items are found and kept absolute. Ergo, the absolute is not rejected, banned, ousted or negated, but given its most efficient role, whatever that is. Therefore, theology, theophany and theonomy are possible, but -given the conceptual limitations of transcendent metaphysics- bound by the rules of non-contradiction and inviting ongoing remodelling. β. The rules of normative (transcendental) philosophy are found to be "of all times". Indeed, they are always in the process of being used by correct conceptual thinking. One cannot even deny their use without using them ! Process ontology argues absolute abstract forms or formative abstracts, mere potentialities like primordial matter, creativity & God*. Likewise, in science, constant values are also found. Very small changes in the highly intelligently chosen natural constants would make the physical world devoid of life & sentience. γ. Transcendent philosophy, using the benefits of ultimate analysis, establishes the non-separability between, on the one hand, the absolute and emptiness and on the other hand, emptiness and the original mind of enlightenment. This is the absolute united with the nature of mind, the Clear Light* ; the nondual realized by the absolute experience of duality. So also here absolutes pertain. δ. Consider "everything is relative" and "no absolute exists". If the view expressed in these statements is relative, then an absolute might exist after all. Ergo, they are ineffective. But if this view is absolute, then it refutes its own claim ; a contradictio in actu exercito. In both cases the statement is undermined. Saying philosophy knows no advance because all statements are relative is denying historical process and the unfolding architectures of thought. ∫ Some things change while other things are kept constant. Some things are always the same and some things change all the time. The relative and the absolute walk hand in hand. In a general sense, universal relativism is rejected while evolutive, negentropic change (in dissipative, highly intelligent, chaotic living systems) is accepted. This not only involves efficient determining factors, but also state-transformative ones, entering the efficient causation of other actual occasions. A universal continuous creative advance is thus at hand. All objects of immanent metaphysics are constantly changing. But this change is not random, amorph or without outstanding features. 
The change has an architecture involving constants, i.e. principles uninfluenced by the momentum of universal creativity. § 2 Logical Advance. α. Well-defined logical operators increase the quality of communication. But way before this is established, the importance of a priori structures needs to dawn. Then necessity enters the picture and absolute truth becomes singular, for there cannot be two absolute truths, only one. All this was realized by the Eleatics. Before them concepts remained confused because of an attachment to context enforced by the rules of ante-rationality. Formal reason required abstraction, necessity and the ideas of "everywhere" and "always". The Sophists, using logic but arguing absolute relativity, did inspire the concept-realism of the classical systems of Plato & Aristotle, both retaining the concept of the absolute and desperately trying, to justify the objects of knowledge, to find an absolute self-sufficient ground outside knowledge. β. In Late Hellenism, and particularly for the Stoa, language became an independent area of study. Logic was no longer embedded in metaphysics, but part of the new science of language (linguistics). The technical apparatus developed by the Platonic and Peripatetic schools, as well as the mechanics of classical formal logic, had been fully mastered. An overview of knowledge was sought, and concept-realism still prevailed. Concepts were either rooted in universal ideas or in immanent forms. Physics studies things ("pragmata" or "res"), whereas "dialectica" and "grammatica" study words ("phonai" or "voces"). The term "universalia" (the Latin of the Greek "ta katholou") denotes the logical concepts of "genus" and "species". The apory between Plato's world of ideas and Aristotle's immanent forms is no longer part of the Stoic context. A simplification took place bringing logic and linguistics to the fore. γ. In the Middle Ages, the apory between exaggerated realists ("reales") and nominalists ("nominales") saw the light. It was a logico-linguistic transposition of the ontological apory between Plato and Aristotle. This advancement was considerable and led to William of Ockham, who finally relinquished concept-realism and formulated radical nominalism. The foundational approach was left behind. In all cases, the nominal representations arrived at are terministic, i.e. probabilistic, stochastic. They concern individuals, never extra-mental "universals". Science deals with true or false empirico-formal propositions referring to individual things called "facts". These empirical data & conceptual constructions are the primordial and exclusive means to establish the existence of a thing. With Ockham, conventional knowledge acknowledged its frailty. δ. While in the course of history logic became an independent discipline within philosophy, transcendental logic had a direct impact on our grasp of the possibility of knowledge and its production, in particular of science or established conventional knowledge. This after millennia of extreme views, both from the side of the object (as in empiricism) and the subject (as in rationalism). Arising in Western philosophy, but absent in pre-Kantian thought, this logic and its articulation point to another crucial step forward in the process of the ongoing advance of the longing for wisdom. Although its early mistakes spurred the ontology of the idealists and the irrationalism of the protest philosophers, criticism has radically & irreversibly ended the long reign of metaphysics over epistemology.
To constructively engage critical metaphysics in the vicinity of paradigmatic science, is to be aware logic is unable to radically ban speculative, totalizing views from science. Working together, two extremes bring forth the Middle Way. The idea philosophy does not advance and has no paradigm-shifts is wrong. Creative advance affects all phenomena, and philosophy is not an exception. The "death of philosophy" league has tried its best shots but failed. The old roaches are not gone. In fact, their moves are so perfect, they are bound to stay. Logical & semantical advance stares one in the face. While these improvements touch areas later becoming specialized fields of learning of their own (like logic), they also affect the core business of philosophy : to propose a reasonable worldview or total view involving all (known) actual occasions. Meaning-shifts redefine both object and subject of this quest. In the West, the pivotal paradigm-shift was announced by Kant. Although he wanted to secure the necessary and universal status of "rational" knowledge modelled on Newton, his transcendental method proved to be the beginning of the end of substantialism (essentialism) in epistemology and philosophy of science. Moreover, his analysis would eventually raise the important question of the interpreted nature of the sensate & mental objects grasped by the knower. If all conventional, rational, conceptual knowledge is an interpretation, not the "real thing", then all conceptual knowledge is "for us". How can we truly know such relative knowledge is about reality/ideality "as it is", i.e. about the absolute ? Conceptually, there is no way to answer this. We must accept facts are also extra-mental, but we could be fooling ourselves. A subtle epistemology is aware of the possibility of this universal illusion. A study of knowledge stressing the production (praxis) of knowledge would probably miss it. But in the field of theoretical epistemology it acts as a very powerful reminder of the relativity of all possible conventional knowledge. § 3 Semantic Advance. α. To establish a clear-cut difference between object and subject is the logical prerequisite for semantic stability. This calls for a semantic field of denotations & connotations part of an architecture and a dynamic flow or "stream" of sensate & mental referrers. The history of these semantic fields is remarkable, giving rise to a multitude of views concerning objective and subjective phenomena and their states. ∫ A clarification of views results from integrating many different vantage points. β. Take the "psyche", evolving from a gaseous entity (Homer), to a meaning-giver, a sign-user of symbols, icons & signals, in short userware. Take matter, from a solid, self-contained ground (Ionian thinkers), to a stochastic process involving particle-fields or matter-waves (hardware) and an intelligent code ("logos" or software). Both semantic fields result from previous articulations and the process is ongoing. But a slow integration & clarification is present. This points to semantic advance. It is impossible to include an evolution of the philosophical vocabulary of the West since Ptahhotep (ca. 2300 BCE). But such a project would present the case of (a) countless redefinitions of a series of basic terms referring to certain recurring sensate & mental objects and (b) a number of drastic meaning-shifts in the denotations & connotations present in the semantic field of these terms, leading to a very slow but definite creative advance. 
Four dazzling moments : (a) Greek civilization realizing the decontextualized mode of thinking according to formal logical rules, (b) Kant initiating his Copernican Revolution, (c) Wittgenstein defining the meaning of words as their use, (d) Derrida deconstructing the transcendent signifiers. 1.2 Immanent Metaphysics. Since when did humankind's curiosity start to extend beyond the satisfaction of mere instrumental & strategic needs ? When did total observation dawn ? First as a view to totalize the experience of the world and then as questions about what lies further than the horizon, about the beginning & the end of oneself and the world. This supposes communication, the process of conveying information and connecting with other sign-users of signals, icons & symbols. Striking evidence of this cycle of communication, stamping temporary glyphs upon physical states, is found in the French cave of Pech Merle, around 16.000 BCE. It is the representation of a human hand ! Iconically & symbolically, the Upper Palaeolithic is rich. The Cro-Magnon worshipped the Great Mother Goddess and manipulated a variety of symbol-sets. These superior hominids were able to symbolize their experiences. They invented initiatory rites and a variety of tools. Moreover, before them Homo sapiens neanderthalensis was religiously active (cf. their cult of the dead - ca. 30.000 BCE). The Neolithic (ca. 10.000 BCE) brought a fixed horizon of observation and the agricultural cycle. If earlier glyphs were mostly Lunar, diffused and fertility-based, they soon became Solar, centered and organizational. Experience moved from a variable local horizon to a fixed one, empowering economic & political stability. The advent of Pharaonic Egypt is an enduring example. These prehistorical, ante-rational & bi-polar symbols are a treasure-house of images & metaphors. They are contextual pre-concepts & concrete "operational" mental procedures. In a less coarse mentality, they work in the background of future metaphysics, underlining the bi-polar experience of the world.
• immanent symbols : "phusis", accidental existence, world of becoming, Demiurge, Generator, Conserver, She, pantheism - the Lunar symbols ;
• transcendent symbols : "arché", substantial existence, essence, world of being, God, Creator, He, theism - the Solar symbols.
The Latin roots of the words "immanent" or "in" + "manere", to remain, and "transcendent" or "trans" + "scandere", to climb over, point to the ideas of the proper part or character of something and the absence of such. Every x is immanent to y if, and only if, x is a proper part of y or a character (proper or inherent property) of y. This belongingness and interrelatedness (interdependence) is reflected in fertility-symbols and the mystery of life & childbirth. Every x is transcendent to y if, and only if, x is not immanent to y and there is a z immanent to y serving as an indicator of x. The notion of x being superior, more exalted or ontologically higher may be added, but this is more a kind of theological compliment. This otherness and sacred separation-from is found in all forms of paternalism, conservatism, authoritarianism, centralism & royalism. This is the mystery of the hunt & the kill. Immanent metaphysics strives to realize a comprehensive view of the whole spectrum of actual occasions displayed by the two outstanding ideas of reason : reality & ideality, both rooted in transcendental logic.
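As a minimal formal sketch of the two definitions just given (the predicate letters P for "is a proper part of", C for "is a character of" and I for "serves as an indicator of" are shorthand introduced here, not terms of the text), one may write :

\mathrm{Imm}(x,y) \;\Leftrightarrow\; P(x,y) \,\vee\, C(x,y)

\mathrm{Trans}(x,y) \;\Leftrightarrow\; \neg\,\mathrm{Imm}(x,y) \,\wedge\, \exists z \, \big( \mathrm{Imm}(z,y) \wedge I(z,x) \big)

Read this way, transcendence is never posited directly, but only by way of some immanent indicator z ; both relations remain relative to the world of actual occasions studied by immanent metaphysics.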
It dares to speculate and seek out the periphery of the objective world, as well as the frontiers of the mind and its cognitive possibilities, including the realization of the absolute & relative minds of enlightenment for all sentient beings. Attentive of critical thought, immanent metaphysics, remaining close to science, merely assists in the introduction of transcendence. Although still conceptual, it cultivates the creative mode of cognitive functioning. This mode invents speculative conventional knowledge inspiring the advance of science and inviting the final frontier. It serves the conventional. Pre-critical, it affirms the inherent existence of the world and its actual occasions. Doing so, it superimposes the mere illusion of inherent existence upon the world. To strip ontology from this will be the task of ultimate analysis, the conceptual device ending the reifying tendencies of conceptuality, stopping its substantial instantiations. As the muse of science, immanent metaphysics does not accept determinations like First Causes, to operate from outside the world. In fact, the world is not determined as finite or infinite. The world is merely that what is, the set of all actual occasions or actualizations of potentialities. The highest creative hyper-concepts are limit-concepts, always referring back to conditions remaining part of the world. To define the latter, the results of experiments and the outcome of argumentation prevail. Given the condition of immanence, this situation of being within and not going beyond a given domain, is left and -inviting an infinite regress- a First Cause is posited ad hoc. Then a grounding explanatory principle outside the world ensues. There are no valid arguments to back this and therefore transcendent metaphysics cannot be conceptually elaborated without obfuscating reason. We cannot move beyond the view of an explanatory principle lacking any self-sustaining properties, empty of itself and full of the manifold of architectures of interdependence and interconnectedness, of actual occasions entering the creative life of other actual occasions. This is the impact of ultimate analysis on conventional knowledge. This use of the word "immanent" reminds of the distinction mentioned by Aristotle (Metaphysics IX, viii 13), namely between an actuality residing in a thing and one not abiding there. Is the realization of the end of an action part of the action or does it transcend it ? The intent of this realization is always immanent to the action, but is the realization of its end ? For Kant, the use of an idea can either range beyond all possible experience or find employment within its limits (Kritik der reinen Vernunft, A643/B671). For Husserl (Logische Untersuchungen, 1900), the act of consciousness is deemed intentional, i.e. directed to an object. This directedness, intentionality or "prise de conscience" is immanent to the act of consciousness, the object intended is not. Immanent metaphysics must be able to argue a comprehensive rational picture of the metaphysical horizon, integrating a wide variety of scientific data. Insofar as transcendent metaphysics, being nondual, cannot be verbalized, all efforts to stretch beyond immanence must be deemed futile and, at best, of sublime exemplary poetic value only. Can validation have meaning in nondual terms ? As authenticity perhaps ? Then only in what one does and in what one does not may traces of it be found ... 
In a "Diesseits" metaphysics staying within the limitations of possible experience, the world is all there is and the existence of something is only the instantiation of its non-inhering properties. Science observes and argues a series of predicates ascribed to objects, and pours these transient connections in non-eternal, probable, approximate synthetic propositions a posteriori. Using this information alone, no necessary Being can be inferred. Cognition is empty of substantial self. The highest being to be inferred a posteriori remains proportionate to the world. Only an immanent natural theology is possible. As nonduality is cognitive but non-conceptual, it merely leads to a theognosis, not to a theology proper. In a classical, Platonizing transcendent "Jenseits" metaphysics, there is more than the world of experience, for the latter, in phenomenological terms, i.e. as revealed by the things themselves, is merely the theophanic contraction of absolute Being. Hence, each fact reveals more than the series of property-predicates ascribed to it, for each fact is (also) an epiphany or substantial self. To supersede the world, is to stand in one's own essential Being or being-there ("Dasein"), self-sustained with inhering properties existing from their own side, self-powered. The a priori arguments of Anselm of Canterbury, backing the ontological proof of God, aim to posit this transcendent Being as an existing Being analytically, thus including the finite world in infinite Being. They fail to deliver this (cf. Criticosynthesis, 2008, chapter 7) and, in order to book any success, need to axiomatise (a) substantial existence and (b) a semantic adualism between the subjective mind and the extra-mental, called "outer" world. In the radical nominalism of critical thought, such a substantialist, essentialist axiom is not retained. In a first movement, metaphysics is immanent and a heuristic, speculative, suggestive, innovative and spiritualizing system of arguable & totalizing statements about the world. In particular how the cosmos came about, how life emerged and what the nature of sentience is ? In a second and final movement, metaphysics moves beyond the world. If so by positing a "higher" ontological self-sufficient ground of any kind, i.e. a positive concept, then the apex of cognition has been reified and one enters the domain of nonduality as a substantialist, leading to the extremes of radical non-affirmation (of anything) and radical affirmation (of an eternity of sorts). This is a return to the tragedy of pre-critical metaphysics. However, the "essence" or "substance of substances" aimed at in such a traditional transcendent approach cannot be found. What can be experienced is not a substance, but a process and it is ineffable. It may be shown as an object of art or possibly given as the sacred or the holy in direct mystical experience and its religious superstructures. Never conceptual object-knowledge, it is born from the light of activity, i.e. performed, acted, done. If transcendent metaphysics avoids positing a self-sufficient ground outside the world -accepting there is but the world and that is it- and merely points to the set of "all possibilities", it may introduce the transcendent, absolute, ultimate nature of all phenomena as (a) the absence of substantial ground and (b) the set of all potentialities, virtualities, open possibilities manifesting as actual occasions. And these non-temporal formative elements or abstracts are themselves not actual occasions. 
So in the meta-nominal, meta-rational stage of cognition, two modes are distinguished : • the immanent : the contemplative, creative activity of the arguable, non-factual ideas (hyper-concepts) of the (higher) self, perceived by the intellect (cf. immanent metaphysics) and • the transcendent : the nondual activity suggested by the direct discovery of the unconditional core of all what is. Immanent metaphysics looks at objective reality & subjective ideality. Its only merit is being comprehensive in an intelligent way. Both reality & ideality, sensate & mental objects are actual events, or a set of moments defined by differentials, i.e. immeasurably small droplets part of the ongoingness of the worldstream. To divide this stream into these two sides or banks reflects the conditions of cognition as they exist since the onset of semiotical functions. In the mythical phase of cognition, only differentiations in the coordination of movements prevailed. Pre-rationality sees the birth of duality as a mental construct. Duality is reified in concept-realism, affirming the substantial existence of things. From material coordinations of movements, the material operator or functional signature of physical actual occasions complexified, allowing a creative advance introducing logical & efficient coordinations and with them the informational operator. Both sets of actual occasions worked together, producing the product of differences characterizing energy and with it life. These highly complex, dissipative & chaotic living systems became sentient the moment they consciously began to coordinate their activities and use signs to modify themselves & their environments. In this short universal ontogenesis, a complexification & differentiation happens. Duality is at the heart of this. At the level of sentient organisms using conceptual thinking, the distinctness between object & subject is so prominent, it easily gives rise to the wrong view of their difference. Duality is not the problem, but its reification is. Things are not different, they are distinct. Given the dualistic structure of conceptual cognition, immanent metaphysics formulates an onto-categorial scheme featuring objective & subjective aspects. The scheme describes the basic operators of the existents, i.e. that what exist or is. In the immanent scheme, the ongoing world-process is considered given and not questioned. Access to this process is by the senses and the mind. The senses provide us with sensate objects, the mind with mental objects. Both objects are possessed by an object-possessor, the mind. When, thanks to observation (testing) and communication (arguing), facts are cast in empirico-formal propositions, and valid conventional knowledge or rational object-knowledge is acquired, the conventional condition of all possible direct experience is satisfied. Both vectors producing factual knowledge have done their job. Then, backed by the propositions of science, a broader, more speculative horizon may be argued. This is the exercise of a critical metaphysics never stepping outside the limitations of possible experience mediated by concepts. To format the objective side of our proposed immanent metaphysics, we devise a framework directly derived from the structure of the sphere of observation. This structure is universal and so holds for all possible observers. It is also a necessary empirico-linguistic framework without which no observation would be possible ! Take away a condition, and the possibility of observation itself vanishes. 
All empirico-formal statements of fact made by an observer about the observed are always & everywhere necessarily framed by the local rotating sphere of observation of the observer, universally & globally defined by a horizontal plane with four cardinal points of reference (East, South, West, North) and a vertical plane with two points of reference (Nadir, Zenith), i.e. by six directions in space. Counting the intermediate directions yields 10 directions and one direction in time. This sphere is not merely a static spatial reality but a continuous, ongoing process in time. Frozen, it represents only a single moment or instance of the mundus. the mundus : the sphere of all possible observers • horizon of observation = circular field representing the consciousness of the observer, defined by divergence, namely of four quarters rooted in O, the neutral origin of the sphere (0,0,0) and of the interconnectedness evidenced by all objects possessed by the observer ; • prime vertical = evolutionary field of an observer moving upward and doing so enlarging the local horizon from origin or nadir, to final aim or zenith, reflecting the convergent evolution of each single observer ; • actual orientations P1, P2, ... = actual positions of observation taken by the observer within the boundaries of the sphere at any given moment in space-time ; • diurnal hemisphere = the realm of consciousness awareness ; • nocturnal hemisphere = the realm of unconscious awareness ; • the sphere as a whole = the totality of all immanent realities and idealities or possible actual occasions happening to any observer - the object of immanent metaphysics ; • the periphery of the sphere = limit-concepts defining the boundaries of the sphere positively and its transcendence negatively ; • the beyond the sphere = the ineffable transcendent, knowable but non-conceptual, non-separation of potentiality & nature of mind. Although each observation is as unique as its local sphere, its geographic analogies are universal as in a global sphere. If the local sphere of a single observer provides the semantic architecture for a particular "reality-for-me", then the conventional sphere of a multitude of observers reflects "reality-for-us". The horizontal plane being associated with (a) the diversity of beings and the way they interconnect despite their divergence and (b) their respective "horizon" or limitations. The vertical plane involves the evolutionary process of each, moving from nadir to zenith, calling for the dynamical convergence and the ongoing creative resolution of both Epimethean and Promethean interests. On the subjective side of our proposed immanent metaphysics, an open, bimodal & dynamic subjectivity is designated. Albeit one more extended than what the empirical ego has to offer, even in its non-substantial, intersubjective format. Although no longer substantial and solitary (Cartesian), epistemology confirms the empirical ego to need the transcendental I to maintain the unity of the sensate & mental manifold constantly arising in the consciousness of any observer. For Kant, and rightly so, this was an empty self "for all times". The self argued by immanent metaphysics is a dynamic continuum of higher states of consciousness grasped by a higher ego, wrongly designated as an inherently existing self or soul. This yields a bimodal structure with (a) an empirical ego at the centre of a circle or field of consciousness grasping sensate & mental objects and (b) a higher self grasping at hyper-concepts. 
In this scheme, the proposed higher self, acting as a kind of bridge to the nondual, is not a way to gain direct access to "reality-as-such" or absolute reality, nor does it cause the latter. In the past, this self was endowed with an "intellectual perception" or "intuition" giving it access to the absolute nature of any phenomenon viewed in terms of substance. Although such access is not denied, it is not projected on this higher self, but found in the intimacy of the emptiness of all concepts and the direct experience of the original and very subtle level of selfless awakening which is the mind's deepest potential or generative capacity, the mind of Clear Light*. Neither is the higher self rejected, but found to be a less complex mode of cognitive functioning. When nondual cognition dawns, this self loses its "ontic" grip and transforms into a truly transparent higher self acting as a bridge between the non-conceptual and the conceptual, between the formless and form. Access to this pinnacle of cognition cannot be given by conceptuality, not even by creative hyper-concepts. The latter only lead to the idea of an Author of the world, not to a transcendent Creator-God. The higher self merely produces a series of totalizing creative concepts enabling the integration of a vast set of views concerning the objective & subjective sides of the world of actual occasions. It is the centre of awareness apprehending the limit-ideas & hyper-concepts of the creative mode of cognition. It is constantly invited to step beyond certain thresholds and, as long as its gigantic reifications endure, it usually contaminates its "natural" context. This reification is the source of its tragi-comedy. Displaying a whole range of meaningful (semantic) presences like signals, icons & symbols, the interdependent consciousness of sensate, cognitive, affective & actional experiences synthesizes an inner, panoramic perspective. This is grasped by the "I am" or the higher self of creative thought, transforming, through its inner vision, the dual tension of formal & critical conceptuality into the hyper-conceptual experience of life as a single meaningful conscious event hic et nunc, for me. Creative thought is the optimization of :
• self-reflection, or the inner dimension of the higher self ;
• free thought, acting on the human right to exhaust its potential as an autarchic individual ;
• encompassing finitude and a panoramic, overlooking view (completing immanent metaphysics).
Although the higher self is untestable yet arguable, its presence is undeniable in an existential sense. Most human beings need to invoke a sense of "I am" to be able to exist. Thanks to this creative operator, a series of totalizing, unconditional thoughts or hyper-concepts is designated. These are apprehended by the self and are part of its ongoing "making of the mandala". They are sublime, imaginal, artistic constructions of the mind, and seem limitless, substantial & permanent. They occupy the end of finitude, and define the borders of the ontic subjectivity at work in immanent metaphysics. They are the illusions necessary to keep conventional reality going. Immanent metaphysics retains the division between object and subject. It may reify this on purpose, "as if". The former is a totalized picture of the outer world and the latter an inner mandala having the higher self in its centre, or an elliptical consciousness with two foci of I-ness : one empirical ego and another trans-empirical higher self.
Ultimate logic is the reason why any claim of inherent existence is void. In their practice of knowledge-production, scientists, for reasons of methodology, adopt a realist or idealist stance. So to grasp the total picture and view the world, pre-critical immanent metaphysics posits a real world "out there" and a supermind "in here" or "up there". While both are illusions, they merely help to totalize all possible phenomena. When, under ultimate analysis, these are finally unmasked, all reified concepts are burned in a single "prise de conscience". Thus liberated from afflictions, the mind is ready to awaken to its original state (with the higher self being a true process self). The own-self is realized in five stages :
• building : on the basis of the super-ego, the "summum bonum" invented by the empirical ego, a total & totalizing icon or "Gestalt" is generated. It is composed of sense data, consciousness, cognition, affection and action. This is a vibrant grand picture, a sublime summary or "mandala" of what the empirical ego is able to perceive as its own ultimate constructive self-representation. This stage is purely empirical and does not escape the confines of the formal & the critical modes of cognition ;
• concentrating : once the mandala is made, prolonged concentration on it decenters the ego, and "purifies" all which does not belong to the mandala, allowing the ego to take on the form of its own ideal, and distinguish itself clearly from its negative, the Shadow (cf. Jung). This form is not yet the higher self, but a ladder to the plane of creative thought ;
• becoming : insofar as the mandala indeed represents the best the empirical ego is capable of, this vast representation is internalized and perceived "from within". Instead of visualizing the mandala "before" the ego, it is observed with "the eye of the mind" and realized as an inner object of consciousness. When this happens, the mandala, or visualized correct self-knowledge, is seen from within, with the direct experience of I-ness, of "my" soul or self placed at the center ;
• actualizing : self-realization initiates the production of self-ideas, more than a projection of the super-ego, but the living experience of an individual, historical being experiencing itself directly as an inherent self witnessing (integrating) all empirical & mental states of consciousness in its being-there ("Dasein"). The higher self is still an ontic (own) self ;
• annihilating : the last stage of the higher self is the end of its reification, namely when its own root is directly discovered as the nondual light of consciousness, the natural state of the mind, the mind of Clear Light*. When the subtle illusion involved is pierced, the ontic self is destroyed insofar as it was an ontological illusion. No longer a someone-on-its-own, the individual becomes a fully participating, dependent & awakened being. The higher self transforms into the transparent selflessness of awakening.
When the reality & ideality of the world have thus been totalized, the present immanent metaphysics poetically posits the "optimum" of limit-concepts : the Anima Mundi or "soul of the world". This is the "form" of the world, its entelechy, forming one being with it. As a "feminine", receptive principle (linked with the double movement of inspiration & expiration from the world-ground), She is wholly "of the world", not a transcendent Creator outside the totality of actual occasions. Her immanence mirrors the pataphysical, the hidden Divine of the world.
But as She only brings into actuality what is potential, She is the entelechy of the universe itself and does not transgress its boundaries. In all points of the universe, She encompasses everything all the time. In process thought, She is the immanent way God* deals with the world. Immanent metaphysics, arguing the existence of this Great Soul and focusing on its conservative and designing nature, cannot explain Her, except if reference is made to the world as a whole, and nothing more.
• God* as the immanent Divine, present with all actualities ;
• God* as the transcendent "Lord of Possibilities".
Finally, after all these speculative efforts, immanent metaphysics prepares the end of reifying conceptuality and, by way of an ultimate analysis, undermines the affirmation of the substantiality of all phenomena. This is the purification of the conceptual mind. A. The Limit-Concepts of Reason. § 1 Finite Series and the Infinite. α. In mathematics, a limit L is the value V a function or a sequence "approaches" as the input or index I approaches some value. I and V may be finite or infinite. When V tends towards infinity (because I approaches zero or infinity), a point at infinity is approached. An endless, asymptotic increase is given, but an actual value is never defined. This is a point at infinity, not an actual infinity (i.e. an infinite value actually part of the set of real numbers). It merely acts as an indicator of a point transcending every possible sequence of numbers or functions between quantities and their momentum. α.1 Likewise, in transfinite calculus, infinite numbers like Aleph-0 and Aleph-1 are not the Absolutely Infinite (or "Omega"). These numbers are the rungs of the ladder of infinity. They are transfinite numbers belonging to the transfinite set of actual infinities. α.2 The ultimate, absolute nature of phenomena, the absolutely infinite, is the ineffable object of transcendent metaphysics. All other relative infinities are contingent and so limit-concepts, returning to the world and thus constituting its periphery. β. For Kant, the category of the ultimate called "God" is derived from the category of relation. The interconnectedness of the manifold cannot be denied. It evidences architectonic unity & scope. This leads to the limit-concept of the Architect of the World or an Anima Mundi, not to a transcendent Caesar-God. Nothing conceptual warrants such a move. Stepping outside possible experience, we transgress the conditions of conceptual knowledge. - Kant, I. : CPR, B350. γ. Those who devised apologies for their version of the singular theist God all made the same mistake : they objectified beyond all possible experience the unconditional unity of all possible predicates, filled the gap "per revelatio", passed beyond the conditioned, and inevitably ended their legitimate rational quest for the most perfect being ("ens perfectissimum") by affirming a hypostatized "ens realissimum". While the former is a possible concept, the latter reification is a transgression. Conceptual reason is not equipped to cross the borderline of the world. Conventionality is all it has. It must settle for that. It cannot move outside the world and experience it like any other object. Transcendence is not outside the world, but the same world observed without conceptual elaborations. The old Platonic topological view must be abandoned and replaced by an ontology based on the notion of a universal dynamical flow (of matter, information & consciousness). δ.
Totality of immanence and infinity of transcendence are the two major leading ideas of metaphysics. Totality, as a limit-concept, aims at all possible actual occasions, the complete & full apprehension of the world. Infinity, as a transcendent signifier, does not border totality, as in the topological view, but penetrates totality. In fact, in every single moment, totality and infinity happen simultaneously. When duality turns absolute, nonduality ensues. Immanent metaphysics encompasses all possible actual occasions, i.e. all spatiotemporal building-blocks of the known. An ongoing series X (for example x = 1, 2, 3 ...) is not stopped ad hoc and only the limit of this series -with x going towards ∞- is accepted as a point at infinity, suggestive of the "periphery" of the immanent sphere of observation. This is not an actual infinity, nor an infinite number part of the sphere of observation. Aggregates of actual occasions are finite series, but entering the togetherness of other aggregates, they eventually merge in the quasi-infinite series of the huge sea of process, the vast ongoing architectures (with their differential formulas) of momenta of all kinds hic et nunc ! Attributing qualities to this point at infinity, we violate the principle of immanence. The sum of this quasi-infinite sequence of expressions can never be made, for the series continues to accumulate endlessly. This is what is meant by the "totality" of the world as a system. The moment we end the ongoing accumulation of the quasi-infinite set of mundane happenings, and posit an actual infinite, then a transgression has taken place. Concepts cannot enter the non-conceptual. The fire of the highest mode of cognition cannot be stolen. Promethean zeal only ends in eternalism (positing substances) or ontological negationism (positing the absence of order in the ongoing processes of the world). Despite objective & subjective transgressions, transcendent metaphysics accepts infinite objects. Emptied by ultimate logic and inspired by transfinite calculus, these objects are the absolute sufficient ground of the totality. They are the infinite embracing totality, the infinite piercing the finite, moving along with the finite, and thereby bringing paradox, bewilderment & wonder. This is the perplexity of the rational mind before the original light of the mind as directly experienced in the nondual mode of cognition. α. Before Kant, substantialism was axiomatic. The existence of a self- sufficient, fundamental ground is discussed but not truly questioned. Metaphysics is substance-based. What makes the beings be ? "To ti ên einai", literally, the "what it was to be" is primordial substance ("ousia"), an "hypostasis" or "hypokeimenon", an underlying thing.  Process, movement, change, motion & transformation are accidental and supposed not to affect the essence ("eidos") of this self-powered ultimate being ("causa sui"), this "substance of substances" sustaining the being of the beings, offering them their permanency. All things merely participate in this ground. Find this permanent,  eternal, unchanging thing-of-things and all the rest is supposed to follow ... β. 
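To keep the mathematical side of this contrast explicit, here is a minimal sketch in standard notation (the choice of the harmonic series is ours, purely for illustration) of the difference between a point at infinity and the actual infinities of transfinite arithmetic :

\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k} = \infty \qquad \text{(a point at infinity : the partial sums grow without bound, yet no infinite value is ever a member of the sequence)}

\aleph_0 = |\mathbb{N}|, \qquad \aleph_0 < \aleph_1 < \aleph_2 < \ldots \qquad \text{(actual, transfinite infinities : well-defined cardinal numbers, the rungs of Cantor's ladder)}

The first expression only indicates an endless accumulation ; the second names actual infinities, which in their turn remain limit-concepts relative to "Omega", the Absolutely Infinite, the ineffable object of transcendent metaphysics.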
To eliminate the root-cause of the ontological illusion, Kant attacked the three main substances thematized before him : (a) the soul : interacting with extension, the "res cogitans" is rightly given a distinct main role to play, not one of merely being an auxiliary of "res extensa" or God, but is defined as a substance with inherent properties ; (b) the world : the extended is filled with matter and movement, the basic ingredients of physical objects, but these are defined as independent and separate from each other, operating in absolute space and absolute time as a gigantic clockwork, self-powered & "out there" ; (c) God : the transcendent absolute as defined by fundamental theology fails in logic. Affirming the substantial God of theism is stepping outside the boundaries of conventional experience, and what lies beyond these boundaries is ineffable, for it is non-conceptual. While "Credo quia absurdum" may inspire believers, it cannot satisfy philosophers. An understanding of God* beyond intellectual embarrassment is called for. This begins by grasping the conservation of the world, its design and the possible existence of its Author, the Grand Architect or "ens perfectissimum", a most perfect being. The latter is merely the immanent aspect of God*. A right view being the first step, one would expect the age of mere faith to soon be over ! γ. These transgressions, typical of essentialism, lead to antinomies & paralogisms. A variety of ontologies ensue. In each, the finite series did not remain continuous. Its continuation was aborted ad hoc and made static by the axiomatic affirmation of an autarchic, inherently existing substantial self-sufficient ground or underlying (absolute) eternal thing. A logically unacceptable jump from the finite to the infinite order was made. The argument fails. Criticism understands the Ideal, the Real and the absolute from another vantage point. No longer seeking the static, eternal core, one focuses on the dynamic stream of interconnectedness between all things. The study of and meditation on this stream reveals the ultimate nature of all phenomena not to be outside or beyond this stream, but precisely this stream experienced as empty of an own-self or substantial core. Knowing something is an illusion should prompt us not to be fooled again. Understanding the soul, the world and God, and in that order, reflects the totalizing intention of metaphysics. But since the Greeks, it has never been bridled by our understanding of the limitations of conventional knowledge in general and of the conceptual modes of cognition in particular. Pre-Kantian philosophy embraced substantialism and concept-realism. Immanent metaphysics studies the objective (with as limit the Real) and the subjective (with as limit the Ideal) aspects or modes of all actual occasions. The latter are fundamental, for they are shared by both object & subject. With "soul" is meant the conscious observer, meaning-giver or sign-user, making choices and changing things. With "world" is meant all actual occasions happening at moment "t". With "world-system" is meant the totality of this world and the world-ground. With God* is meant the only abstract actual occasion bridging all possible potentiality and all spatiotemporal actuality. These are ontological objects, but process-based. Not a single objective or subjective own-thing can be found, can it ? Find a substance with inhering properties and erect a substance-based ontology to see this fine structure demolished by ultimate logic. § 3 The Copernican Revolution. α.
α. In the first predictive mathematical model of Heliocentrism, Copernicus understood why the Earth had to turn around the Sun and not the other way round as his learned contemporaries believed. Neither did they grasp why the Earth would spin at all ! Heliocentrism ended the elect role of humanity in the worldview advocated by the "religions of the book". Before Copernicus, most scientists, loyal to the Hellenism of Late Antiquity, adopted a geocentric view of the world. The complex geocentric model of Ptolemy worked, so why abandon it ? Heliocentrism had been proposed by Aristarchus of Samos in the 3rd century BCE, but this valid view had been put aside. Why ? Because he had retained circular orbits. Copernicus too was unable to let go of this.

β. What was a solid geo-ontological ground, namely the Earth as the objective centre of the cosmos of the Caesar-God, became a mere point of view among many others. Decentration of objectivity invited a turning around to counter the crisis it caused. This led to grounding the importance of subjectivity in nothing else but subjectivity (the Cartesian ego had indeed remained essentialist). It also invited intersubjectivity and the revolutions intended to achieve social justice (cf. 1789 & 1917). Indeed, the Solar kings -assumed to have received their crown from the God of revelation- were dethroned.

β.1 We need to realize each observer (the Earth) knows reality (the Sun) from a unique & singular point of observation. The fact that our relative point of view cannot be escaped points to the importance of subjectivity, of the knower in the act of cognition. Along with the known and knowledge, the knower is a necessary, co-relative but independent element of the process of producing knowledge. The knower is at the centre, not the absolute ground. This is a paradigm shift away from autarchic objects and figures of authority to a reflection upon the conditions & possibilities of selfhood, no longer viewed as a substantial, eternal (immortal) soul.

β.2 The knower is no longer a passive gatherer but a creative participator. Shaping its own world, humanity can do nothing less than take up full personal responsibility for what happens on planet Earth. Only a global political system is able to solve problems, for nationalism will fail. The time of independent nation-states is over. The Copernican Revolution is the realization that, at every moment, each observer occupies a unique vantage point.

γ. While we observe the Sun rising and setting, we know this is due to the rotation of the Earth. Our understanding of the phenomenon does not change our observation. Likewise, we may see the disk of the Moon change, while astronomy tells us otherwise. So also in epistemology.

γ.1 The transcendental apparatus is not a property of the known, but a functional characteristic of the knower. The subject seems passive while it is actively designating & attributing labels to objects. We do not grasp perceptions, but sensations, and the latter are already interpretations of perceptions.

γ.2 In ultimate analysis, the "Via Regia" to the end of reifying conceptual elaboration, the same reversal is found. While we observe conventional objects to exist independently & separately, both logic & physics teach they are, at the most fundamental level, dependent and non-local. They appear at the explicate level as solid & permanent, but are in fact vacuous & constantly changing.

δ. Illusion is precisely this : appearing otherwise than what truly is the case. All conventional knowledge is such an illusion.
Valid conventional knowledge (science & immanent metaphysics) are merely sophisticated formats of the valid but mistaken appearance of disconnectedness. Like galaxies, Solar systems, planets, mountains & large monuments, valid conventional knowledge may last for a very long time, but cease it eventually will. As creative advance is ongoing, what we know constantly changes. Any kind of institutionalization fails to follow the tide. A lost battle against cultural lag animates the ranks of the academia. Eventually, even the Himalayas will crumble to dust. Conventional knowledge is valid but mistaken. Absolute knowledge is beyond validation and unmistaken. The former is outspoken, the latter silent. The observer is not merely a "passive intellect" taking in sense-data and organizing them post factum by way of an "active intellect". Observation takes place in an already established framework of names, labels & identifications (negations). The latter is the outcome of the slow complexification of our cognitive texture, expressing itself in various modes and, in due course, establishing various mental operators. What was initiated as coordinations of movements, becomes internalized functional processes and various "intelligences". The object of knowledge is not "naked" as naive realism wants it, but the end result of interpretations made by the cognitive apparatus. While Kant supposed these interpretative structures were universal, and part of them are, the idiosyncrasies of observation are noteworthy. The Copernican Revolution is a decentration. § 4 The Linguistic Turn. α. Having accepted the importance of the knower, we focus on our human capacity for language, i.e. the meaningful manipulation of signs like signals, icons & symbols, to impute, designate, label or attribute. In the process of this differentiation undergone by our cognitive texture, generating the conceptual mind calls for semiotical functions. They are crucial in placing labels and in identifying sensate & mental objects. They are communicated to the intersubjective milieu of sign-interpreters. β. The actual use of languages defines a set called "userware", operating in immediate, mediate and general contexts. This is a kaleidoscope of choices, but also a calendar fixing an itinerary & the rites of passage undertaken by humanity in its sentient evolution. In the most general way, this set brings together all possible sentient activity at work in the world as a whole hic et nunc, including the infinitesimal possibility of sentience potential in every actual occasion, the building-block of ontology. γ. Insofar as the object of immanent metaphysics is concerned, the world is the totality of all actual occasions taking place in a single moment of the existence of the universe, encompassing all actual occasions ; material, informational and sentient. The world-system may generate countless consecutive worlds. The "breath of the worlds" being the flux of the ongoing arising, abiding, ceasing and re-emerging of the world out of its ground. δ. The subject of knowledge claims the object by naming. Designating labels, imputing fixed characteristics, properties & relations, this conceptual, conventional knowledge -if valid- is the right tool to solve functional problems dealing with activities involving instruments, strategy or communication. But this relative knowledge is not absolute and so mistaken. 
δ.1 The impure, reifying conceptual mind posits an independent, separate & substantial object, superimposes a category-mistake upon perception & reason.

δ.2 If a substance-based object is found, can it be anything less than massive ? It must therefore be easy to ostentatiously identify such a substance, self-sufficient ground or essence, must it not ? If we check all the rooms of the house for the presence of this hippopotamus and after thorough investigation none is found, then may one not safely assume there is no hippopotamus in the house ? Perhaps not with full certainty, but surely with a likelihood high enough. Only in our imagination do hippopotami become invisible ...

ε. The Linguistic Turn is a deepening of the Copernican Revolution. The latter argues the necessity of the observer, the former creativity & awareness. Indeed, now sentience itself, the consciousness of the observer, becomes the crucial symbolizing part of the process of acquiring knowledge, as it were emancipating the self-reflective activity of the "ego cogitans" begun by Descartes. In this self-reflection, conscious critical awareness and the production of symbols (leading up to Artificial Intelligence) are integrated. The meaning of language depends on how its signs, the units of meaning, are consciously manipulated. Meaning-shifts happen constantly and only by repetitive use can certain glyphs (or well-formed, meaningful states of matter) endure over longer periods of time. Thus turned into cultural objects, they face the rise, fall & rebirth of civilizations. Creativity (novelty) and symbol-production move with conscious intention, choice, meaning, sense, sentience and functional activities involving sensation, volition, emotion and thought.

Understanding the importance of signals (waymarks), icons (meaningful images) and symbols (denotative & connotative referents) results from placing the subject of knowledge at the centre. The knower grasps or possesses the object, and signs are the outer manifestations of this mental apprehension. They allow this to be communicated to the milieu and add objects to the domain of information. The latter is comprised of natural and artificial data. Insofar as these conditions & determinations reflect the architecture of the cosmos and life, "natural" software is at hand. Thanks to the sentient activity of humanity, cultural objects are added, and these are merely artificial designs put in by the creativity of Homo sapiens sapiens.

α. The two regulative ideas of transcendental reason established by the critical mode of cognitive activity are derived from the two sides of the Factum Rationis pointed out by transcendental logic ; the condition of objectivity, implying thinking must imply the extra-mental, and the condition of subjectivity, implying one cannot eliminate the thinker and intersubjective communication. These ideas, called "the Real" and "the Ideal" respectively, do not constitute the objects known, but merely regulate the cognitive activities associated with the pursuit of objectivity and with mental clarity, acuity, focus and sense of truth respectively.

β. "Extra-mental" means the object of knowledge must be considered as a separate, independent entity on its own, i.e. some thing "out there". If a reasonable account of the possibility & production of knowledge is to be made, conventional knowledge must imply this.

β.1 Science must -a priori and methodologically- consider the reality of the object of knowledge as if representing absolute reality. Suppose this is not the case.
Then scientific knowledge is never about some thing, but merely represents the objects of intersubjective consensus.

β.2 So even the "statute-law" of theoretical epistemology provides that one must accept facts as possessing a theory-transcendent facet. This necessity shows how valid conventional knowledge cannot operate without the possibility of substantial instantiation or reification.

β.3 The purification of the conceptual mind, or the end of reifying concepts, is a special state of mind. Science perfectly functions without it, but metaphysics -if it wants to delve deeper than the world- cannot. The temple of transcendence can only be trod by this purified mind. Theoretical epistemology stops the reification of the conditions of knowledge (the reification of the ideas of the Real & the Ideal), but it must accept that facts carry the weight of the absolute, even if this would not be the case ! Practical epistemology introduces the "as if" mentality, substantializing idealism and realism for methodological reasons.

γ. Thinking the thinker implies the subject of knowledge must be grasped as a transcendental "I think" for all times, the capstone of a cognitive system in three stages and seven modes. In order to guarantee the unity of the manifold of objects apprehended by the knower, this formal focus necessarily accompanies every cogitation of the empirical ego. It is independent & separate from it and is a formal principle posited by necessity. Hence, even transcendental thinking, purged from essentialism, must accept this subject of subjects, a desubstantialized absolute ideality. Creative thought turns this formal self into an ontic own-self.

1 Mythical libidinal ego
2 Pre-rational tribal ego
3 Proto-rational imitative ego
(barrier between instinct and reason)
4 Rational formal ego
5 Critical formal self
(barrier between reason and intuition)
nondual selfless (transparent) self

δ. Because at this critical level of thought unsolved tensions and delusions remain, the creativity of the higher self is necessary. The appearance of the Clear Light* is the outcome of the final purging of reification, leading to the selflessness-in-prehension, the end of self-cherishing and self-grasping in its coarse & subtle forms. Annihilating this ontic self brings about the transparent selflessness of awakening.

The operations of the conceptualizing mind are regulated by the ideas of reason. These limit-concepts tend towards the optimalization of valid conventional knowledge. The idea of the Real regulates by presenting the correspondence of valid conceptual knowledge with absolute reality, the idea of the Ideal by bringing the "consensus omnium" of all sign-interpreters to the fore. They merge and form a point at infinity, a "focus imaginarius" never itself conceptually known. These ideas are not transcendent, but the transcendental conditions of objectivity & subjectivity respectively.

α. The objective & subjective sides of the Factum Rationis -ruling all possible cognitive activity- are self-evident, and by necessity regulated by the ideas of the Real and the Ideal respectively. Likewise, the object of immanent metaphysics, namely the world, also evidences objective and subjective limit-concepts. Only the object of transcendent metaphysics, namely the world-ground, is beyond limit-concepts. As conceptualization stops, signs attempting to grasp this a fortiori imply paradox and inconsistency. Transfinite calculus, advancing actual infinities, although indicative, cannot bridge this and so remains inconclusive.
Can one therefore speculate on the transcendent ?

β. The world is a sea of actual occasions acting out matter, information and consciousness, the three fundamental aspects of every single momentary actual occasion. The question arises : how is this order possible ? Ignoring the extraordinary radiant brilliance of this dynamical architecture, even over very large periods of time, is inept. Moreover, mere stochastic views run against the high unlikelihood of the parameters of this cosmos, with its life & sentience ...

β.1 To call for a transcendent cause to explain the world is going too far. Logic forbids the direct, uncritical use of an absolute self-sufficient hypostasis, signals the use of a transcendent signifier, and deconstructs it. Transcendence posits ad hoc an end to the endless progression deemed possible by the immanent view. Indeed, the actual finitude of the world cannot be demonstrated (while its quasi-finitude may be accepted). Neither should the possibility of an infinite series be rejected beforehand.

β.2 A transcendent & infinite absolute towering above a finite world, a Pharaonic "substance of substances", cannot be posited without logical problems. Even non-substantial, process-based speculations about infinity are not without paraconsistency. The non-conceptual cannot be grasped by any concept. Indirectly, poetry may translate this direct awakened experience of the world in the nondual mode of cognition. If this is the case, then a hermeneutics of the signs used by mystics is possible.

γ. To be rationally established, the order of the world does not need a transcendent cause. This was proven by Ockham.

γ.1 To avoid any problem with the infinite regress in time of the horizontal series of interacting and interdependent efficient causes, jump to the actual, vertical order of events hic et nunc. So not as they are happening in the horizontal, temporal, functional, physical order, but as they are happening in every succeeding moment. By doing so, one always avoids an infinite regress. Is it not a solid axiom to affirm the world is not infinite in each actual moment ? If not, how to avoid blatant absurdities ?

γ.2 The revised a posteriori argument from efficient causes :

Case to be proven : "A first conserving cause exists."
• Major Premise : in the contingent order of the world, nothing can be the cause of itself or it would exist before itself ;
• Minor Premise 1 : an infinite series is conceivable in the case of efficient causes (existing horizontally one after the other), but very unlikely in the actual (vertical) order of conservation "hic et nunc" ;
• Minor Premise 2 : an infinite regress in the actual, empirical world hic et nunc would give an actual infinity, leading to absurdities like being born before one's own mother ;
• Minor Premise 3 : a contingent thing coming into being is conserved in being as long as it exists or abides - being contingent and so impermanent, it eventually ceases ;
• Minor Premise 4 : as only necessary beings conserve themselves and the world contains contingent things only, every conserver depends on another conserver, etc. ;
• Conclusion 1 : ergo, as there is no infinite number of actual conservers, there is a first conserver ;
• Lemma : if we suppose an infinite regress in the actual, empirical world hic et nunc, then an actual infinity would exist, leading to absurdity ;
• Ergo, at least a first conserving cause exists.
QED

γ.3 The (supposed) finite order of the world of contingent actual occasions cannot be conserved without a first conserver. Thinking an actual infinity may and often does lead to rationally unacceptable inconsistencies.

δ. The argument from design runs as follows :

Case to be proven : "The world has an intelligent, proximate cause."
• Major Premise 1 : the world is an organized, contingent whole, evidencing variety, order, fitness & beauty ;
• Major Premise 2 : it is impossible for this arrangement to be inherent in the things existing in the world, for the different entities could never spontaneously co-operate towards definite aims, not even over very long periods of time ;
• Minor Premise : definite aims need a selecting and arranging purposeful rational disposing principle ;
• Conclusion 1 : ergo, there exists a sublime and intelligent cause (or many) which is the cause of the world, not only in terms of natural necessity (blind and all-powerful), but as an intelligence, by freedom ;
• Conclusion 2 : the unity of this cause (or these causes) may be inferred with certainty from the unity of the reciprocal relation of the parts of the world as portions of a skilful edifice so far as our experience reaches. Ergo, the intelligent cause or causes of the world forms or form a unity of design ;
• Lemma : if this cause is projected outside the world to explain its activity, then the domain of reason is left and the argument from design becomes the refuted argument from necessity (cf. the cosmological argument). Ergo, the argument from design does not prove an ultimate, but a proximate cause.
• Ergo, the world has an intelligent, proximate cause.
QED

δ.1 For Kant, the argument from design led to the "stage of admiration" of the greatness, the intelligence and the power of the Architect of the World, who is indeed very much restricted by the creativity of the stuff with which to work. And this unlike the Creator-God of monotheism who, as an Author both self-sufficient, necessary and transcendent, can do whatever He likes to change things immediately !

δ.2 This Architect of the World, "God of the philosophers" or God*, is neither omnipotent nor powerless. Omniscient of what happened and what is happening now, not of what will happen in the future, this Anima Mundi or entelechy of the world is receptive and generative of order ... But perhaps also of orders inimical to life & sentience itself, pre-crystalline architectures close to the seminal state of the world.

δ.3 Understand that the order and beauty of the world point to a final end, namely to actualize all its possibilities, itself an ongoing, endless process regulated by limit-concepts. The conserving "soul of the world", or intelligent proximate cause of the world, does not transgress the boundaries of the world.

δ.4 In all points of the world (both momentarily and temporally), this Architect, Great Soul or Great Mother encompasses everything all the time, keeping all actual occasions in her fold, passing by each single one of them.

∫ Seek to affirm conservation and (intelligent) design in harmony with the Big Bang, relativity, quantum, chaos & natural selection.

ε. On the subjective side, the world displays subtler (deeper/higher) levels of consciousness. The empirical ego observes the display of sensate and mental objects it possesses on the surface of the "mirror of the mind", in other words, as part of the circular field of consciousness with this ego at the centre. This is the coarse, empirical mind.
ε.1 This coarse mind receives five sensate objects and identifies them by imputing conceptual labels & names on them. The five sense-consciousnesses associated with them can be established by this conceptualizing mind as long as (a) the sensitive surfaces of the healthy sense organs receive stimuli, (b) these inputs are properly decoded and transferred to the thalamus and (c) the thalamus projects this afferent information on a well-functioning neo-cortex.

ε.2 The coarse mind also possesses mental objects. These are used to communicate information with other minds and label sensate objects. The ontic ego has a strong sense of inherent identity, with feelings of autarchy and an innate freedom of choice. It seems to exist separately and independently. It is a special mental object, namely a sentient one, a consciousness displaying emotional states, intentions, thought and self-consciousness.

ε.3 Given the empirical ego is the root of the direct experience of sensate & mental objects and also the origin of conceptualization, naming & labelling, the realization of its impermanence is crucial to make it pliant enough to establish the subtle mind. Because the magnificent, sublime & blissful character of the subtle mind leads to the subtle delusion of identifying it as a higher, eternal self (a new ontic, own-self), unsolved tensions remain. This subtle mind, established by observing the insubstantiality of the coarse mind, also needs to be totally desubstantialized, leading to the higher self and then to the selfless transparency of the mind of Clear Light*.

ζ. The subtle mind no longer establishes the inherent, substantial ego based on sensate and mental objects. To observe the lack of inherent properties in the subtle mind and the three root-causes of all conceptual activity properly prepares -so transcendent metaphysics claims- the awakening of the mind of Clear Light*. This is the original, natural state of the mind, the very subtle mind or fundamental stratum or layer of mind. But insofar as immanent metaphysics is concerned, this ultimate mind*, based on an ineffable but actual nondual experience, can be nothing more than a limit-concept. Only full-emptiness, the union of bliss and wisdom, endures.

Immanent metaphysics should not posit an absolute entity, Deity or Supreme Being outside or behind the world. Theology should abandon Platonic topology to convey transcendence. Outside the world, this "Urgrund" or Unmoved Mover is a fortiori something radically different from creation. It is hard to imagine how such a Being would communicate with the world. Insofar as the Architect of the World remains part of the world, immanence prevails. Immanent metaphysics (backed by valid argumentation) can go no further. Sublime poetry, though it falls outside philosophy, may inspire a hermeneutics of salvific poetic signs. Positing a transcendent Being feeds the illusion of a self-sufficient ground. The Architect of the World, the immanent approach of the world by God*, is not a creator "thinking" the world before its incipience, fashioning it as it were "ex nihilo". The Architect of the World is not beyond the world but with every possible actual occasion. Transcendent metaphysics merely affirms a realm of sheer potentiality, but this is not to be confounded with a theo-ontological, self-sufficient Absolute Being or Creator-God. Such a "God-as-Caesar" is not found to exist. This makes one ask what kind of God* process metaphysics envisages ? Subjectively, another limit-concept is introduced.
The unity of conscious experience cannot be explained by the coarse mind. Formally, as critical thought explains, this necessitates a formal self "for all times", one merely accompanying every cognitive act of the conceptual mind. A deeper stratum is reached as soon as the coarse mind is emptied of itself, i.e. of its own identification as a substantial, independent and separate entity. This identitylessness of persons leads to the formation of a new, higher focus of conscious awareness. At first, this focus grasps at itself and generates an ontic self (an eternal soul or "âtman"). While offering a panoramic perspective producing creative concepts and a cosmic awareness, the ontic self does not exist from its own side. Once this is thoroughly realized, the subtle mind is no longer caught in its subtle delusions and, in the poetical language of the mystics, the Clear Light* of the original mind or very subtle awakened mind shines through. B. Diversity & Convergence in the World. α. Considering the mundus, the horizon represents the ongoing complexification of all actual occasions, events & entities part of the world and distributed over the four cardinal directions. These are not only constantly interconnected, but also enter each other's history and therefore shape the fabric of an organic togetherness based on creative advance. The manifold, or the world disjunctively, is a sea of process. β. The horizontal plane displays diversity, variety, multiplicity and differentiation. On an explicate level, this manifests as the vastness of physical space and the nearly endless temporal flow of events taking place somewhere. On the implicate level, this is the universal quantum plasma connecting all momentary actual occasions. β.1 The ultimate or primordial ground of the world or world-ground is not a substantial Real-Ideal underlying all actual occasions, but a realm of pure possibility, of formative abstracts covering what is needed for the next moment of the world to happen. The world-ground is the sufficient ground of the world, but not a substantial self-sufficient one. β.2 World and world-ground constitute the world-system. The ground of the world is the potential out of which all possible actual occasions constantly emerge, eventually return to and reemerge from. γ. The temporal, sequential & efficient togetherness of actual occasions and their aggregates also happens horizontally. Efficient determination is the direct physical impact of actual occasion A on actual occasion B. If this temporal "flow" would be the sole determining factor of the togetherness, materialism ensues. But then no creative advance would be possible. Adding architecture & sentience makes diversity possible. The world is a set of actual occasions. These feature a temporal stream of interconnected moments. All possible interconnections fall into different categories of determination or lawful contact between actual occasions like causality, interaction, statistical correlation, etc. These determination & conditions contribute to the diversity of the world and are called "horizontal" because they all invite a succession of states or moments of existence. All these are instances of efficient determination, or the determination between actual occasions on the basis of their functions & temporality. If only efficient determination would rule the world, no creative advance would be possible, for actual occasions would by themselves add nothing to the succession of happenings. 
The universe would be "dead bones", nothing but a "nature morte" of elements. This is clearly not the case. Science teaches the well-formed nature of the choice of natural constants and lawful activity in the physical universe. The laws of Nature suggest an immanent "logos" thinking these architectures.

α. Again, in the mundus, the prime vertical represents the continuous complexification towards unity, from hidden & simple ("nadir") to overt & complex ("zenith"). This coming out into the light of unity-out-of-diversity heralds the return of the world to its original singularity, to its last expiration (or evaporation) at the end. Because of final determination, the manifold becomes the one actual occasion, the world conjunctively, an organic sea of process. This results from convergence between societies of actual occasions, an attunement of their participations in each other and the establishment of a cosmic participation throughout the members of the world.

β. The organicity of the world is the case not only thanks to the (material) temporality of efficient, physical connectivity and interdependence, but also because of the ongoing informational and sentient activities of conservation, design & Clear Light*.

β.1 The material aspect, defining the horizontal plane, is -at every moment- indeed crossed by a non-material aspect at its vertical, an intelligent focus or "vis a tergo" reorganizing the probabilities of materiality and thus indirectly co-directing the material manifestation of particles & fields. The total available information provides the "mandala" of choices manipulated by sentient decisions. The latter, ex hypothesi, alter the structure of the probability-fields ruling material manifestation (cf. the collapse of the wave-function of Schrödinger).

β.2 Of course, this vertical co-direction is hampered by the free choices of all other actual occasions.

γ. Information & consciousness define intelligent focus, or the combined activities of totalization, generalization, overview & sentience characterizing final determination. The teleology of the mundus fosters unity & the largest possible harmony. The vertical adjustment, balancing or finalization by any actual occasion enters and influences the efficient stream of the next. Thus efficient & final determination cooperate in every single instance of the mundus.

δ. In direct nondual experience, the very subtle mind of Clear Light* finds itself inseparable from the world-ground, the absolute ground of pure potentiality. The unity of all possible minds of Clear Light* or the prehension by a single supermind of the world-system as a whole is called "the primordial mind", "Âdi-Buddha" or "the mind of God*", the ultimate, omniscient, total & infinite prehension of the momentary.

∫ Of what cannot be conceptualized, only melody can speak.

Besides efficient determination, the mundus also features finality. This means the unity, creative advance and harmony between the various efficient characteristics point to a singularity, namely the world as a unity, a whole, a "mandala" of actual occasions. This is not merely a compound of disparate elements, but an organic unity consisting of all possible actual occasions. This engages the most comprehensive form of participation ; the unity & harmony of the manifold as apprehended by intelligent focus. This is a unity conscious of itself, i.e. the unique society of societies of actual occasions.
Thus the world displays material efficiency hand in hand with informational organization (architecture) and the results of sentient, conscious choice. The latter two define its final determination, adjusting the horizontal flow of functional efficiencies by altering the structure of the propensities involved in the process of material manifestation. Finality, involving unity & harmony, emerges together with the conservation and the design of the world. This calls for God*, a supermind imputing its superobject and apprehending the world-system as a whole, i.e. the potential world-ground and the actual world. Immanent metaphysics cannot move further and -in the context of a process metaphysics- merely points to the transcendent signifier as a category of potentiality, virtuality, possibility (emptiness) and its simultaneous manifestation as a vast network of interconnected actual occasions (fullness). But such a possible Grand Architect is never an Author, not a Caesar, nor a Creator. C. The Alliance between Science & Immanent Metaphysics. § 1 The Alliance of Form. α. Science produces valid empirico-formal propositions. These are necessarily statements referring to facts. Facts are valid but mistaken. Simultaneously, they are extra-mental and determined by mental objects. Because science works with propositions, it obeys formal logic. The latter defines the form of science. Of all logical operators, the negation is the most basic. Of all axioms, non-contradiction is the most elegant. β. Metaphysics argues a comprehensive view of the world. It does so in metaphysical systems integrating scientific knowledge and the history of speculative thought, if possible world-wide. Because it is argumentative, it presents an organized, architectonic mental object. Having formal outlines, logic is implied. This is also the case for the procedure to settle arguments (the rules of argumentation). If metaphysics is contradictory and makes no efficient use of contradiction, it cannot be valid. The correctness or well-formedness of the argument is as crucial in science as it is in critical metaphysics. Logic is the corner-stone of both science and critical (immanent) metaphysics. By adopting certain rules conveying order and abstraction, an architecture ensues. Both disciplines focus on the world, science in detail, metaphysics in general terms. Accepting logic is to confirm that if arguments fail, the conditions of well-formedness have not been met. An incorrect form is being applied. Of course, logic also assumes a series of axioms, logical operators and rules of argumentation. One cannot change these at random, but decide beforehand what is going to be used. Organizing the field of logic, distinguish between formal, semantic and pragmatic logics. The first deal with the form of statements, and derives their truth-value on the basis of this alone, i.e. without taking contents into account. The second type is contents-based, using natural symbols (like cosmological or biological cycles and processes). The third type is used in certain practical contexts, like dialogue or argumentation. It is quite useless to apply formal rules to contents-based reasonings, or define the latter in terms of practical applications. Each type has its own domain and applies its own kind of rules. A variety of logics have ensued (non-formal, non-linear, quantum, etc.). § 2 The Alliance of Contents. α. Science solves problems and understand Nature in its diversity. 
Critical metaphysics totalizes Nature, understands the world insofar as the world goes and points to the transcendent world-ground understood as a process-based sheer potentiality. Sensate and mental objects are "natural", i.e. belong to Nature. Their horizontal aspect is their tendency to disperse their momentum, while their prime vertical triggers a balancing-out of extremes by altering the propensities ruling efficient states of matter, manipulating the virtual totality or set of "all possibilities" speculated to be present before any kind of actual manifestation, i.e. before the actual collapse of an infinite number of possibilities -the primordial sense of matter, information & consciousness- to a single actual occasion hic et nunc. β. Science and immanent metaphysics are natural allies. Their aim is to understand Nature, the world. But this alliance is conditional. On the one hand, immanent metaphysics must acquire sufficient information before starting to speculate about a "mandala" or totality. In terms of the current scientific paradigm, it must accept three fundamental facts : (a) the origin of the observable universe in the Big Bang some 13.7 billion years ago, (b) a 4.6-billion-year-old Earth and (c) the evolution of life-forms by means of (neo-)Darwinian natural selection. On the other hand, science must keep out of metaphysics and leave speculative activity to philosophers. γ. Clearly science and transcendent metaphysics are not allies. A critical transcendent metaphysics posits a process-based, ultimate world-ground as inseparable from or in unity with the mind of Clear Light*. While this cannot be argued definitively (by valid conclusion or affirmative negation) and this direct experience of such a primordial unity or wholeness is non-conceptual and nondual, it is nevertheless a known, a datum of knowledge, part of a cognitive act. γ.1 This special experience & knowledge ("gnosis" or "prajñâ") or living mystical awareness & insight ("Da'at"), arising in the awakened ("bodhi") or ultimate, very subtle mind of Clear Light*, may be prepared by any pliant mind realizing the fruits of ultimate logic and hence purified from conceptual reification. As a direct experience and a cognitive act, it is nevertheless beyond validation and unmistaken. Beyond validation because it involves a profound, undeniable, more certain truth than any other truth or prior belief ; the ultimate Eureka ! or "Aha !"-experience ; but it is nameless. Unmistaken because it apprehends what is as it is, nothing more and nothing less, without any conceptual elaboration. γ.2 In this awakened mind, selflessness merely prehends its objects, conceptual & non-conceptual alike. If concepts arise, they are merely logical & functional entities, nothing more. The suchness of all phenomena is the thatness of their arising, abiding, ceasing and reemerging. The absolute mind only entertains the existential instantiation, attending the non-separability of fullness of togetherness and emptiness of own-nature, of compassion and wisdom, bliss and absence of inherent existence. Here the absolute nature of duality is directly experienced. Science and immanent metaphysics both focus on the world. The former seeks empirico-formal propositions about the manifold, while the latter articulates its speculative statements, aiming at a general perspective and the unity of the selfsame manifold. This is not a God's-eye viewpoint from outside the world, but a tangential appreciation of the whole. 
Both disciplines, when working together and not against each other, will enhance the production of knowledge and lead to a better appreciation of both the manifold and the unity of the world. The latter points to the activity of a higher intelligence, a Grand Architect of the World, designing & conserving the world-order. Either this, or a mathematical miracle, explains what is at hand. This is not a Creator, for such a transcendent Being, posited as radically different from its creation, cannot be conceived without mystification, paradox and contradiction. Transcendence can however be conceived, not in terms of an ontological difference, but as (a) a continuous process and (b) a sheer potentiality that just was, is and will be. The relation between the actual quasi-finite world and the pure, infinite possibility is not a causal one (for spacetime as physically conceived starts with the arrival of the cosmos with the Big Bang), but a holistic determination (the greater whole encompassing the lesser).

§ 3 Empirical Significance & Heuristic Relevance.

α. To arrive at any scientific truth, i.e. a valid empirico-formal proposition in the realm of conventional, conceptual knowledge, significance is needed, implying the facts, results or data referred to by this truth are unlikely to have occurred by chance. Randomness is the non-order in a sequence of symbols or steps, a process lacking intelligible pattern(s) and their combinations. High, medium and low significance prevail. In this sense, on the scale of scientific truths, Schrödinger's wave-equation is the most significant.

β. Significance covers the objective realm, but significant facts may have no relevance, i.e. subjective importance. Relevance is the relation of something to the matter at hand as viewed by subjective & intersubjective intent. Insignificant statements may be highly relevant. The concept of "intelligent design" as proposed by monotheist creationists is unscientific and insignificant. But to many communities of fundamentalists this idea or mental object is highly relevant. In the context of process metaphysics, intelligent design harmonizes with cosmology & evolution. Relevance cannot be "tested" but only argued. The most sophisticated system of answers wins the day.

             significant                    insignificant
relevant     science serving                metaphysics of hope
irrelevant   science serving randomness     chaos

γ. Because metaphysics is not testable but only arguable, it cannot produce significance. Scientific validity calls for both experimentation and argumentation leading up to theory-formation. The phrase "metaphysical experiment" involves a contradictio in terminis. So it follows all speculative inquiries done by theoretical philosophy are simultaneously insignificant and highly relevant. Metaphysics holds a very special place. As a heuristic of science, valid & critical theoretical philosophy is crucial in providing totalizing frameworks and in letting the scientists do their jobs, i.e. produce facts using tests & theories. Its insignificance is not factual, but the consequence of metaphysics being untestable. As soon as the philosopher becomes a scientist, inspiration vanishes. As soon as the scientist becomes a philosopher, subtlety is out.

δ. Metaphysics articulates a totality. Critical process metaphysics grasps this as impermanent (dynamical) and interconnected. There is much hope in both.

δ.1 Absence of permanence means all things can enter all things, for the absolute isolation given with the permanent thing is not present.
This fluidity of the impermanent stream of actual occasions optimalizes the possibilities of change & transformation. The low can turn into the high and vice versa. Optimalizing duality, this extreme heralds the coming of that extreme. We are never stuck.

δ.2 As all actual occasions are interconnected and produce novel togetherness, the singular ego has "a place to move to", namely to all those countless suffering others.

∫ A metaphysics of hope fosters unity & harmony. Non-substantial, unity is a perfect style of movement, whereas harmony is the cosmic law, "Maat" or "Dharma" ruling interconnectivity between all possible actual occasions, shaping negentropy, non-redundancy & reduced randomness.

Scientific propositions are significant because they reflect the objective findings of the community of sign-interpreters. They may be relevant or not, i.e. appeal and be of (inter)subjective use. Metaphysical statements are not significant but not necessarily pre-Baconian, i.e. picturing the world we would like instead of the way science thinks it is. Immanent metaphysics stays near (or next to) the findings of science and tries to place these in a general picture. But valid metaphysics is highly relevant, allowing us to grasp the possible unity and harmony of the world.

D. Limitations of a Possible Speculative Discourse.

§ 1 Logical Limitations.

α. Because metaphysics cannot be tested, it must present strong arguments. But these are based on logic, involving certain choices like logical operators and rules of argumentation. These must be accepted beforehand. Formal and informal logics prevail. Although identity, non-contradiction and excluded third figure in most, this is not always the case (cf. paraconsistent logics and intuitive logics with included third).

β. Any kind of arbitrariness forms a limitation. The validity of metaphysics cannot be absolute. Not only because new facts constantly emerge, but also because the axiomatic choices demanded by logic are (inter)subjective. Unlike science, metaphysics can never actually test its hypotheses. This is the unavoidable logical limitation of metaphysics.

All conceptual elaborations are based on logic. Down the centuries Aristotelian logic (not unlike Euclidean geometry) has been considered as the only possible way to establish the truth-value of statements. But just as Riemannian geometry showed Euclid's parallel postulate need not hold (on a sphere, any two "straight" lines, i.e. great circles, intersect), non-formal logic and alternative formal logical theories provide evidence of the importance of establishing the logical rules to be applied beforehand. Certain phenomena investigated by science, like the particle/wave paradox or the superposition state of the wavefunction, defy the principle of non-contradiction deemed the cornerstone of correct thinking. Indeed, quantum logic calls for a different set of first principles and so cannot be approached with classical formal logic. These limitations apply to any kind of conceptual system and so in that respect, both science and metaphysics share the same limitation.

§ 2 Semantic Limitations.

α. The contents of scientific knowledge are based on sensate & mental objects. The contents of metaphysics on mental objects only. There is no way to test speculative statements. Their relevance is heuristic, inspirational & inventive. The semantics of science leads to a better understanding of the manifold and so to technology.
The semantics of metaphysics leads to an understanding of the whole based on speculative statements derived from the best of science and so able to inspire the latter. β. Creative concepts throw a vast number of meanings together, shaping powerful symbols. These ingredients of the grand story of the world-system are pertinent mental objects. The need of a critical metaphysics is most pressing here. No sufficient ground can be invoked. Mental objects are not inherently existing substances, possessing their properties from their own side, they are other-powered. This means their properties derive from the process of interdependence & wholeness, not from absolute isolation and autarchy. Past metaphysical system were substance-based, not process-based. They included the ontic ego and/or ontic (higher) self existing independently and separately. γ. A valid critical metaphysics works with the absence of sensate objects and the unwanted tendency to reify mental objects. Not a science, metaphysics is not bound by scientific (experimental) methodology. Theoretical philosophy is not to copy the ways of science. Remaining irreversibly interlinked, both are distinct domains of conventional knowledge, the one aiming at particularities, the other at generalities. The semantic limitations of science and metaphysics differ. The former are primarily defined by sensate objects. If all swans are deemed white, the discovery of a black swan indeed introduces a considerable shift in meaning regarding the word "swan". Metaphysical statements are limited by the discoveries of science and the ability of the speculative system to grasp the whole in a comprehensive, non-reductive and arguable way. Of course, an advance in these only calls for better mental objects, and does not entail the discovery of any novel sensate object. § 3 Cognitive Limitations. α. The activity of science is conceptual in a formal sense. Valid scientific knowledge stands between the knower and the known. Thanks to theory & testing propositions of fact come into existence. This production leads to a complex hierarchical network of scientific propositions with a central core ; the current scientific paradigm. β. Immanent metaphysics cannot be eliminated from the background of argumentation and experimentation. But its mode of cognitive activity is creative, not formal or critical. Immanent metaphysics (using hyperconcepts) brings science to greater unity, inspires it to pursue the production of valid (significant) scientific knowledge and invents a possible panoramic view of the world. γ. Transcendent metaphysics is altogether a different matter. Here an ultimate mind is posited, one able to directly know the absolute in its absoluteness. This unveils the world-ground of the world-system as apprehended by an ultimate mind of Clear Light*, namely the mind of God*. Science and metaphysics do operate in another mode of cognition. Formal and critical thought apprehend their objects as possessed by an empirical ego. The latter is not a substantial entity, nor are the objects of science in any way substantial (although they do tend towards essentialism). The propositions of science merely reflect a truth-for-the-time-being, and so cannot have any definitive pretence whatsoever. Being conventional knowledge, they aim to solve problems to enhance the functional efficiency whilst dealing with objects. The ultimate nature of these objects is not under investigation. 
In that sense, science should always entertain a high dose of humility, not stepping outside the domain of appearances. Contrary to this, creative thought apprehends an ontic self trying the grasp the totality substantially. Here thought seeks a self-sufficient ground and cannot find any ! The tendency of conventional knowledge to reify is actualized, leading to the apprehension of an underlying reality behind the mental & sensate objects of formal & critical thought. Lastly, while selfless nondual cognition does away with this substantializing approach, discovering the impermanence of all possible objects of thought, it does lead to a direct experience of the ultimate truth of all possible phenomena, namely their impermanence and interconnectedness. This ineffable experience, which cannot be conceptualized, is nevertheless very definitive in a non-conceptual way, leading up to the mind of Clear Light* apprehending the absolute nature of all phenomena. 1.3 Transcendent Metaphysics. While immanent metaphysics, by positing a series of limit-concepts to define the so-called "periphery" of the world, stays within its confines, critical transcendent metaphysics identifies this endeavour as rather artificial. How can the world have a periphery ? If the world is all there is, then there is no "outside" of the world. The Platonic division, so cherished by classical transcendent metaphysics, between a finite, derived world of becoming and an infinite, primordial world of being is devoid of sense. Is this not more based on cognitive limitations than on ontological divisions ? The world, insofar as conceptual rationality is concerned, is indeed quasi-finite (i.e. limited). So how can an actual infinity exist as part of the world ? But in terms of nondual cognition, the world-ground is infinite. So the distinction is epistemic, i.e. rooted in the way the subject of experience cognizes the objects it possesses. Moreover, conventional knowledge posits a world of seemingly independent objects, and only in this context has "periphery" any actual meaning. Realizing, by way of ultimate logic, no inherently separate entities exist does immediately away with any fixed notion of "outer" and "inner", for both are interdependent and so arising simultaneously. Viewing objects conventionally, they are limited (quasi-finite). Viewing the same objects ultimately, they are unlimited (infinite) ... Substantalizing the distinction brings about the apory between an inherently existing finite world and an inherently existing infinite transcendent self-sufficient ground "outside" the world. To ask how the world looks like when nobody is apprehending it cannot possibly be known, for object and subject also arise or coexist together. Conventional knowledge and its conceptual rationality cannot move further than designating a limited world and a series of limit-concepts like designer, conserver and the mind of Clear Light*. Suppose it imputes an Author or Creator, then it moves beyond the possibilities of conceptual reason. Non-conceptual nondual cognition directly experiences the world-ground as infinite and inseparable from the mind of Clear Light*. It also prehends the ultimate mind of God*. So from the point of view of conceptuality and its immanent approach, the world-ground is transcendent and infinite and so is its (ultimate) apprehension or prehension of it. Insofar as nondual cognition and its transcendence is concerned, conceptuality is immanent and finite and so is its (conventional) designation of the world. 
In terms of nondual cognition, the ground of the world is infinite, but the exceptional direct experience on which this is based is ineffable. If we limit ourselves to conventional and conceptual knowledge -shared by most-, considering this to be the norm, then we say the world is finite, for the common experience on which this is based can be articulated both by science and immanent metaphysics. But the latter are, although valid, mistaken, for the ultimate nature of the world, its ground, is infinite and beginningless. Indeed, conceptuality conceals the ultimate nature of phenomena, and if it tries to grasp this absolute without the benefits of ultimate logic, this ultimate will be defined as inherently existing, i.e. as independent and separate (self-powered from its own side). Then the world-ground has been reified. Traditional transcendent metaphysics, defined by Platonic or Peripatetic ontologies, posits a supreme substance "outside" the world-order. Pre-existing this unchanging, permanent, static supersubstance is the Creator-God fashioning the world "ex nihilo". Critical transcendent metaphysics introduces the transcendent, absolute, ultimate nature of all phenomena as (a) the absence of substantiality, (b) an infinite number of material & informational possibilities, virtualities & potentialities manifesting as finite actual occasions prehended by (c) the absolute or ultimate mind (of God*). And these non-temporal formative elements are themselves not concrete actual occasions. The world-system is then both potentiality (the world-ground of pure possibilities empty of substantiality) and actuality (the world as interdependent phenomena), both mere possibility and actual occasion, both world-as-potentiality and world-as-actuality. Of course, this difference is merely epistemic, i.e. depending on the mode of cognition with which the world-system is apprehended. Valid conventional knowledge apprehends phenomena as interdependent but -given scientific methodology- reifies them. Invalid conventional knowledge posits objects which cannot be validated by science. These too are grasped as existing from their own side, possessing their properties inherently. Here the degree of delusion of truth-concealment is optimal. To simultaneously grasp the world-system as, on the one hand, conventional, limited (quasi-finite) and interdependent and, on the other hand, as ultimate, infinite and empty of inherent existence, is apprehending it as it is, i.e. in its suchness/thatness. This is a bewildering paradox for reason and an enlightened Divine phenomenon designated by the mind of Clear Light*. The direct experience of this can only be prehended by power of nondual cognition ... and remains ineffable. A. Jumping Beyond Limit-Concepts. Conventional knowledge is always conceptual. It cannot move beyond. But concepts are deceptive. While valid conventional knowledge correctly identifies efficient operations, it nevertheless tends to grasp the properties of mental and sensate objects as subsisting in its objects. They are then deemed independent & separate from other objects. The universal interdependence of all phenomena is not clearly seen, if at all. So conventionality, devoid of the fruits of ultimate analysis (uncovering the non-substantiality or process-base of all possible phenomena), leads to the illusion concealing their ultimate truth, namely the absence of inherent existence. This illusion is the result of mental obscuration or ignorance. This ignorance is the root-cause of suffering. 
Ultimate knowledge is always non-conceptual and so ineffable. Although a datum of direct experience, it cannot be cast into the mould of conceptual object/subject relationships. It cannot undo the un-saying of its prehensions. Ultimate knowledge no longer grasps at objects as autarchic, but simultaneously observes their interdependence and lack of substantiality. This is called the "prehension" of the ultimate truth, the union of bliss & emptiness, of compassion & wisdom, of dependent-arising and the lack of self-power. Mental obscurations and epistemological transgressions always walk hand in hand. These lead to ontological transgressions, the mistaken identification of entities as possessing their characteristics from their own side, i.e. without being other-powered. These wrong views on entities build transgressive metaphysics. By identifying the correct object of negation, namely inherent existence, one deconstructs the objects of the mind and remains aware of the margin to be drawn next to the ongoing stream of conventionalities. In this margin, the false exits are identified as reifications, annihilating the disruptive influence on the mindstream. Then one may accept the functional ongoingness of conventional reality as apprehended by conceptuality while simultaneously prehend their fundamental lack of inherent existence, i.e. directly experience or "see" their being empty of own-self or own-nature in the light of them being full of otherness. § 1 Epistemological Transgressions. α. To grasp at sensate & mental objects in terms of valid empirico-formal propositions and valid speculative statements always implies a certain amount of reification. α.1 Epistemology (together with ethics & aesthetics), decrees rules one cannot deny without using them. These are transcendental and so critical concepts. This critical system of knowledge production is not grounded in anything. It is pre-ontological and pre-scientific (but not pre-logical). Transgressions happen when the objective & subjective conditions of the game of true knowing are rooted in a reified, self-sufficient, substantial (essential) ground before knowledge, in a "being" preceding "knowing". There is no epistemology without object (idealism) or without subject (realism). Both ideas of reason regulate and operate two interests in truth, one focused on correspondence and the other on consensus. α.2 Coarse, subtle and very subtle obscurations endure as long as, using substantial instantiation, self-power or essence is attributed to objects. Even the conceptual structure in which conceptuality unfolds (like space, time & the categorial schemes of normative philosophy) should also be viewed as not existing on its own. Lastly, lack of inherent existence or emptiness is merely a property of objects, and so not an object on its own. This emptiness of emptiness is only realized with great difficulty. Hence, as long as there is reified conceptuality, there is mental obscuration and so suffering due to the ensuing supposed isolation of objects and/or subjects. Positing emptiness as a substance is indeed destroying the antidote to ignorance. β. To reify the object of knowledge is to consider any sensate thing as existing from its own side, independent & separate. The identification or imputation of any sensate object is always dependent of a cognitive act from the side of the conceptual mind. 
This happens because of a failed attempt by this mind to stabilize properties as inhering, which, after in-depth ultimate analysis, are merely found to be changing or impermanent (although interconnected). β.1 Given object A, one may ask : is this a compound or not, can this be further subdivided or not ? As all objects of the conventional mind are compounds, the same question may be posed regarding the various subdivisions etc. In this way, nothing final is found. A regression ensues. β.2 If the regression is stopped ad hoc, then a hardly convincing ontological (reified) self-sufficient ground is designated. It cannot pass the test of ultimate analysis and so this hippopotamus cannot be found. γ. To reify the subject of knowledge is to understand the mind and its empirical ego as existing from its own side. But if we ask where the mind or the ego is, nothing is found except sensate objects, volitions, emotions, thoughts and moments of consciousness. These are found to be impermanent and hence no inhering, self-sufficient stability can be traced. Again the reification fails and the empirical ego (with its sense of permanent identity) as well as the ontic self (designating itself as a mental substance) cannot -under analysis- be found. δ. Due to the power of these mental obscurations, scientific propositions or even some speculative statements seem to be correct. Validity or truth-for-the-time-being is confused with absolute truth. Because of reification, sensate & mental objects merely appear as independent and permanent. Believing our own imputations, we create a reality/ideality of our own making and then blame the illusion not to remain ! Thinking things are independent, we temporarily make them so. But because they are ultimately impermanent, we are bound to suffer from our own mistakes. ε. Even the formative elements of the world-system (the world-ground composed of primordial matter, primordial information and the transcendent aspect of God*) are not permanent. God*'s impermanence does however not preclude His continuity as symmetry-transformation. ε.1 Empty of themselves, they are full of an impermanent material & informational pure possibilities and an ongoing process of Divine evaluation and adjusting. These properties do not act as pre-existing substances inhering in the "primordial", but pre-exist as possessed by the virtual togetherness of the propensities of the world-ground. ε.2 The emptiness of emptiness is precisely this : the lack of inherent existence is not a superobject, nor an underlying self-sufficient ground. The world does have a ground or fundamental stratum, but this too is empty of itself and so in no way substantial. It is sufficient, but not self-sufficient. The first step is a wrong view. Start with that, and end in confusion, ignorance, obscuration & distraction. Reification is the great culprit. This is the ultimate epistemological mistake. Once identified, one needs to return and return to the ultimate logic of its undoing, for the mind entertains a strong habit of grasping at inhering properties. § 2 Ontological Transgressions. α. Reifying the object of knowledge at the level of ontology, i.e. considering the absolute touchstone of that what is as existing on its own and separate from the subject of knowledge, makes it easy to argue realism, the ontological view accepting objects exist from their own side as part of a real world "out there". 
The most fashionable of these ontologies, materialism or physicalism, adds that all objects are fundamentally nothing more than physical things, i.e. compounded material aggregates composed of particles, waves, fields & forces and their relationships. Although non-material stuff like information or consciousness may be accepted (as in emergentism), reductionism brings them back to matter. This is the case of epistemologies articulating how the object constitutes the subject. Classical examples : Aristotelianism, empiricism, materialism, (logical) positivism & physicalism.

α.1 Consider any macroscopic material object. Composed of an enormous number of molecules made out of atoms, its sheer size and its constant interaction with its environment wash out the effects of quantum uncertainty (cf. Heisenberg's principle of indeterminateness, operating at the atomic & subatomic levels). On this macrolevel, position and momentum behave in a conventional, common-sense, "classical" or Newtonian way. The object is not between everywhere and nowhere. But this continuity & definiteness are illusory. Dividing the object into smaller and smaller pieces removes this averaging and eventually, at the atomic and subatomic levels, the constituent parts are only probability-waves yielding definite quantities when observed. At this point, the conventional, physical object/subject relationship breaks up, and the separateness, definiteness & locality of objectivity are gone.

α.2 Only when a subject of experience interacts with the probability-wave does it collapse, turning an infinite number of possibilities into a single one. As all macroscopic objects are erected upon their atomic foundation, conventional realism is merely apparent and the difference between classical mechanics and quantum mechanics depends on temporal & spatial scaling. On the fundamental level, object and subject cannot be defined as independent, separate and local. The deep-structure of matter calls for the intimate, continuous interaction between the observer and the observed, between the knower and the known.

α.3 Lacking objective mooring, i.e. without a definiteness independent of and separate from a subject of experience, the conceptual mind has no way to grasp, impute or possess its object. Like waves on water, mental elaborations cease. This is the beginning of the purification of the conceptual mind, ending in the exhaustive, thorough arrest of all substantial instantiations ; the annihilation of reification.

α.4 Considering the apparent solidity of macroscopic objects, realize that atoms consist mostly of empty space. The atomic core (of neutrons and protons) accounts for more than 99.9% of the atomic mass, but it occupies as much space as a grain of rice hanging in the centre of a football stadium (an order-of-magnitude sketch of this follows below). The reason why macroscopic objects appear as continuous (as solids, liquids or gases) is the electromagnetic bonds between the constituent atoms, not the presence of "solid" mass. To build relationships is like bonding togetherness.

α.5 Consider the apparent immortality of electrons, photons & neutrinos seemingly left undisturbed. As all particles interact with other particles, this absence of disturbance is relative. Not a single material thing part of conventional reality subsists forever, for all phenomena arise, abide, cease & reemerge. Interconnected (organic) impermanence & absence of inherent existence are fundamental to all possible actual occasions.
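To give the rice-grain image in α.4 a rough quantitative footing, here is an order-of-magnitude sketch ; the radii used (a few femtometres for a nucleus, about an ångström for an atom) are standard textbook values introduced here for illustration, not figures taken from the text above :

\frac{r_{\text{nucleus}}}{r_{\text{atom}}} \sim \frac{5 \times 10^{-15}\,\text{m}}{1 \times 10^{-10}\,\text{m}} \approx 5 \times 10^{-5}, \qquad \frac{V_{\text{nucleus}}}{V_{\text{atom}}} \sim \left(5 \times 10^{-5}\right)^{3} \approx 10^{-13}

So the nucleus, carrying essentially all of the mass, fills only about one part in 10^{13} of the atomic volume : the same order of magnitude as a grain of rice a few millimetres across suspended in a stadium a few hundred metres across.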
Even the world-ground itself, although not nothing, lacks own-nature and is therefore without properties inherently existing in it. The primordial domains are the properties of this virtual world-ground. The virtual is the father of the concrete. The possible is the mother of the actual. β. Reifying the subject of knowledge at the level of ontology, i.e. considering the subject or community of sign-interpreters as existing on their own ontic (noetic) plane and separate from the object of knowledge, leads one to argue idealism, the ontological view the object is constituted by the subject, the community of subjects and/or their mental operations (like arguing and establishing a consensus). Although material objects are accepted, they are merely the reflection of non-material, mental activities. This is the case of the subject constituting the object (cf. Platonism, rationalism, psychologism, transcendental idealism, existentialism, etc.). γ. Realism reifies matter. Idealism reifies the mind. The former reification turns the conventional world into a subsisting materiality, the latter brings in a supermind transcending the world, originally creating it and sustaining it. Realism reduces the world to the order of the actual world. Idealism deems the latter to be the creative result of an original, primordial supermind eternally existing from its own side. The second step is a wrong intent. Once a wrong view is realized, either in terms of a reified object of knowledge or a reified subject of knowledge (in epistemology), the reification (or substantialization) needs to be reified itself (in ontology). Finally, substance is essentialized. The seal is sealed. This by letting the subject establish the object (based on an epistemology without object) or by inviting the object to establish the subject (based on an theory of knowledge without subject). The solution is to never grasp the object or the subject as permanent. § 3 Transgressive Metaphysics. Building complete worldviews on the basis of epistemological & ontological transgressions leads to static, uncompromising, unworkable, inefficient and unscientific approaches to the major questions of life : Why something ? What about the universe, life & consciousness ? β. A metaphysics of idealism fixates a supermind and attributes an inherent existence to it. It thus turns the activities of the mind into either a perfect, ideal "true" reflection of this supermind, or into an imperfect approximation of it. Non-physicality is pivotal. The distinction is between an absolute mind and a totally useless, imperfect and thus rejected physical state of affairs. Rather, one should make clear facts are not exhaustively intra-mental. The ultimate distinction is between, on the one hand, impermanent mental states and moments of consciousness and, on the other hand, the imposed (projected, imputed, attributed) inherently existing properties of the (super)mind. γ. A metaphysics of realism posits a real, objective, external & substantial world "out there". Physicality plays a crucial role. Despite possible emergent properties, the role of physical, molecular, atomic & subatomic events is emphasized, and complex phenomena are -if possible- reduced to their material parts. Realistic activities of the mind correspond with the Real. The distinction is between an absolute objectivity stimulating a receptive cognitive apparatus, and thus between what is Real and what is merely subjective or unreal. 
Rather, the difference between perception and sensation should be recalled, as well as the constituting activity of the subject. In the co-relative activity of producing conventionality described by the valid empirico-formal propositions (of science), the organizing & intending work of the Ideal is at least as important as the Real.

δ. Metaphysical idealism turning religious will invent an omnipresent, omniscient & omnipotent supermind. These qualities inhere in it and are absolute. Hence, this supermind must be a superbeing, a Creator-God. As the subject constitutes (imputes) the object, this God creates the world "out of nothing", i.e. as an act of His Free Will. This worldview fails to understand that such a supermind cannot be found and that, even if it could be, it could not create, produce, cause or effectuate anything.

ε. Metaphysical realism turning materialist will invent an objective, physical world producing all possible phenomena. The latter are physical. The non-physical is rejected. If accepted, as emergent properties, then the non-physical is caused by the physical (downward causation is deemed absent). But materialism cannot be articulated without a subject of knowledge. Moreover, perceptions are not sensations. Finally, the non-physical interacts with the physical, and matter, information & consciousness are all aspects of every single actual occasion.

The third step is a wrong object. Having reified the conditions of knowing and secured the justificators (the ideas of the Real and the Ideal), these two objects are totalized. This results in either a static, substantial, eternal mundus, or gives birth to the idea that nothing really exists (while all things are merely empty of themselves, not of something). Metaphysical transgression is not primarily the polarization of what exists in the vertical and horizontal vectors of the mundus, but follows from the need for reification. Finding a ground is not enough. Not even a sufficient ground suffices. Indeed, a self-sufficient ground is designated. In this view, the world has to be finite in an inhering sense ! But if the world-ground is neither a self-sufficient ground nor an actual occasion, it must be a process, a dependent-arising, a coherent symphony of abstract possibilities. Then world and world-ground are not different, but distinct entities ; the former actual, the latter abstract.

§ 4 Deconstruction & the Margin.

α. Deconstruction does not destroy its object, but merely its reification. Armed with ultimate logic, all possible inflexible, static, solid, eternal and substantial objects are investigated and found not to exist as they appear. Found to be impermanent, they are non-substantial. Eliminating their tendency not to move, pushing away their inertia, is to realize the absence of own-nature in each of them. They do not exist as separate and independent objects, but merely as interdependent happenings or displays of actual occasions.

β. Radical postmodernism (as the end of the "grand stories") remained dependent on modernism. As modernism lacked internationalism and multi-culturalism (being mostly Western), moderate postmodernism integrated the global perspective. Building a deconstructed worldview is the task of hypermodernism, multiplying a global perspective with ecological & social sustainability.

γ. The margin is an imagined space defined by a dividing-line drawn parallel to any text. This space is used to mark all reified concepts present. They are identified and marked.
These are the transcendent signifiers one cannot avoid but -to satisfy parsimony- must keep to the bare minimum. Two are identified : the mind of Clear Light* and God*. δ. Deconstruction is not a passive analysis post factum, but happens in the heat of the action. Like a swimmer or a singer, complex forms emerge in and by the action itself, not by anything "from the side". The moments constituting the stream are never identical and never return. All is constantly permanently lost. ∫ Finding not a single substance, the wise dine & wine on wisdom. Avoiding three wrong steps, namely wrong theory of knowledge, wrong ground and wrong totalization, deconstruction focuses on all possible reified objects. Both on the side of the subject of knowledge, as on the side of the object of knowledge, the solidification, isolation, fixation and substantialization of the Real or the Ideal are identified. At some point, when this has happened repeatedly, the mind stops to impute independent & separate existence and stops grasping at the supposed own-nature of things. The "seal of emptiness" is placed on all sensate & mental objects (cf. the "mahâmudrâ"). As a result, objects no longer appear as they do, but unveil their other-power, i.e. the fact they merely exist because of determinations and conditions outside themselves. They are something, i.e. not nothing, because they are functionally related. Without this efficient bonds, they do not exist, and if they appear to exist from their own side, the mind is necessarily deluded & obscured. B. Conceptuality & Non-Conceptuality. When the mind cognizes, it grasps at an object and possesses it. Nearly simultaneous with this, to further identify it, the mind conceptualizes and so imputes a concept, name or label. Between the act of cognizing and the moment of conceptualization, a small gap occurs. Between two moments of conceptualization, another isthmus, "bardo" or interval is at hand. Cognizing and conceptualizing are not simultaneous. Grasping the object and naming it are indeed two consecutive steps. This can clearly be felt in ante-rationality, in particular mythical and pre-rational thought. In these early modes of cognition, the concept is not stable. In mythical thinking it is psychomorphic, taking on the shape of subjective experiences. In pre-rational thought, it has a certain kind of stability, but still vanishes quickly due to a plastic semantic field. While proto-rationality works with mature, stable concepts, they are not abstract but concrete and so are defined by the context in which they appear. This gives them a semantic multiplicity, a fluidity prone to confusion. Ancient Egypt and pre-Classical Greece feature these kinds of opaque conceptualizations. Clear meaning can only be established by lengthy comparisons and minute studies of all available contexts. Even then, precise meaning can only be suggested, not inferred. The empirical subject knows the momentary field of consciousness as (a) the direct, experiential, phenomenological horizon with its central ego cogitans, (b) conscious contents and ongoing fluctuations, (c) together forming the mindstream consisting of consecutive moments of sentient activity, mental activities organized and ruled by the mental operators associated with the various modes of cognition. Now thanks to the abstract concept, all mental operations are boosted by the application of context-free relations between stable concepts, leading to conceptual elaborations and the correct & valid use of conventional knowledge. 
The noetic aspect of the "Greek miracle" is precisely this comprehensive use of abstraction, leading up to the concept-realism of Plato and Aristotle. The latter is an exaggeration unwarranted by critical reason. Kant did not accept the non-conceptual (cf. his rejection of "intellectual perception"). He considered it not to be given to everyone and so too exceptional to be part of a criticism of pure reason. The notion that nonduality is a mode of cognition calling for a cognitive act (featuring object & subject) is based on the direct experience born out of study, reflection and meditation on ultimate truth. With enough effort, this is the share of every human being wishing to end ignorance on the most fundamental level possible.

§ 1 Conceptual Thought.

α. When, from specific instances, a general idea is inferred or derived, this abstract is called "a concept". With the "Greek miracle", the ante-rational stage of cognition, with its strong pragmatic mental closure, was superseded. Formal rationality imposed both contents & form.

β. Ante-rational concepts are either a-conceptual, pre-conceptual or concrete. In myth, they are psychomorphic and make no distinction between inner & outer, obscuring the distinction between sensate & mental objects. In pre-rationality, concepts are unstable and therefore mere pre-concepts. In proto-rationality they are stable but concrete, defined by context only. In all cases, a confused type of cognition ensues. There is no stable mental form, except in the immediate coordination of movements. Symbols only persist for brief moments or as part of designated (and unstable) contexts. Signals & icons persist (especially in the earliest two modes of cognition). With the coming of rationality, ante-rationality is pushed into the unconscious. The more a culture is refined, the less instinct & emotion need to be subdued. The outstanding feature of Western rational culture is to dominate instinct & emotion for "a good reason". This is the origin of pettiness & silliness.

γ. Conceptuality overlays or superimposes a general name, label or symbol on sensate and/or mental identifications of spatiotemporal variations in a set of actual occasions (caused by a finite number of sensuous impulses and/or mental cogitations). This involves a logical mistake, for how does one justify the leap from a finite number of concrete sensuous and/or mental elements -leading up to a pre-conceptual identification- to an infinite number of such elements in the three times as defined by an abstract concept ? What is identified, the identifier and the process of identification are all impermanent and so prone to change.

δ. The distinction between the pre-conceptual apprehension of sensuous impulses projected on the neocortex and the moment of conceptual overlay is crucial to understanding how the name or label associated with what has been identified differs from the latter. These pre-conceptual sensate objects, indeed resulting from the earliest moments of interpretation, are nevertheless not yet concepts, i.e. abstractions, generalizations, static names or labels. And they are certainly not the reification of such concepts as in concept-realism, attributing own-nature or substantial sense to concepts.

ε. Note these distinctions.
The mechanism of the conceptual process involving sensate objects involves three phases : in the first, the sensate objects (projected by the thalamus on the neocortex) are pre-conceptual and identified by way of a variety of actual occasions present in the direct, phenomenological field of the observer during the act of (total) observation ; in the second, this concrete information is generalized and so named and labelled. Here the conceptual mental operation is at hand, one identifying a universal and its instances ! In the third, the subject of knowledge apprehends the general concept or name, superimposing it on all subsequent manifestations of a similar sensuous stream of actual occasions. In all forms of pre-critical rationality, the third step leads to reification, positing a substance existing from its own side, keeping its own inhering properties, separate and independent from others. Conceptual thought operates abstract concepts and brings these together in opinions, notions, hypothesis, theories & speculations. Thanks to generalization, the cognitive act is liberated from context. Eventually, the structure of conceptual thinking itself can be apprehended, leading to a logic devoid of contents, i.e. formal. Despite the fact classical formal logic is not the only possible logic, concept-realism is thoroughly dedicated to it. Of all the basic principles, non-contradiction rules supreme. The Newtonian world also ran in absolute, linear terms. But this proved to be a good approximation only. Indeed, the fundamental nature of physical objects involves quantum logic defying strict non-paradoxality. And most living systems, including the human brain, has an architecture, a software executing a chaotic logic. So although conceptual thought is crucial to escape context & content, it is not an absolute tool, but merely a relative waymark to keep track of the conventional, common-sense worldview. This sobriety gives the power to climb the mountain of meta-rationality, if such an undertaking is deemed necessary at all. Like ante-rationality, rationality has mental closure. Moreover, because of the limit-concepts of immanent metaphysics, the creative mode of cognition is not necessary to solve the problems of conceptualization (namely reification). So the leap enabling us to face absolute truth is an act of freedom. From the side of reason, it can be nothing else but a leap into the absurd ... So be it ! § 2 Ante-rational Regressions. α. The realization of rationality does not guarantee the absence of unwanted returns or regressions to the earlier stage of cognition. It is crucial to grasp ante-rationality, although made unconscious, is still prevalent in instinctual and emotional matters, i.e. those areas where context plays a important role. Signals and icons are defined by our ante-rational mentality, given shape by libidinal, tribal & imitative foci of consciousness, by an antique ego fed by the memories of the earliest experiences of conscious life as a human being. β. In the chaotic sea of ante-rational thought lurks the Leviathan of irrationality. The absence of its reemergence needs to be checked again and again. If this effort is unrelenting, regressions can be avoided. But due to habit, the mind settles down and breeds bad defences. γ. Ante-rationality, because it has mental closure, can fabricate a number of fantastic stories and implement the terror of concrete words. Without rationality, a single deity turns into billions ; each with its own silly walk or Moon dance. 
With rationality, formal and critical, the substantial God is unmasked and the God* of Process dawns. Aware of the presence of instincts and emotions, the integrated rational mind, formal & critical, no longer subdues nor renders unconscious the various processes stemming from an ante-rational approach to the world. Training these eventually leads to emotional intelligence as well as to a gut-feeling assisting the proper functioning of the mind. Of course, at the end of the day, in this mode of cognition, only reason judges. But because even the critical mind cannot eliminate the need to reify, such judgments may be mistaken. Only absolute truth brings to light the fundamental true nature of all possible phenomena. Because of this reifying tendency, reason cannot completely compensate for instinct & emotion. Only wisdom realizing emptiness can.

§ 3 Meta-rational Transgressions.

α. The complexification of cognition moves beyond rationality. Creative and nondual thought make way for cognitive horizons far beyond the capacities of the mind working in the rational stage of cognition. To limit the mind to what seems to be given to the majority is to make the infinite serve the finite ; an absurdity. Both define their own domain, the finite world finding its infinite potential in its own world-ground. The intellect crowns reason. Where reason apprehends, intellect prehends. Abstraction has to be paid for by a lack of inventiveness, creativity & novelty. Situated between ante-rationality and meta-rationality, rationality represents the Middle Way between instinct and intuition. Without the latter, rationality lacks the ability to create novelty. With too much of it, cognitive activity is either confused or lacking purity (i.e. a perspective on the end of reification).

γ. Creative thought prepares intellectual prehension by serving as a purgation for the subtle forms of reification. Totalizing and the reification of a totalizing object need to be distinguished. Creative thought first allows reification to explode. Positing the ontic self in its "mandala", it then annihilates the reified totality. This is like ending ignorance with one single blow.

δ. Insofar as creative thought posits an ontic self, its creativity is sullied, leading to brontosauric statements. The latter are not devoid of dramatic exaggeration and have no other use than to totalize the creative object of knowledge. They do reflect the power of novelty and inventiveness, the ornaments of consciousness. They evidence the establishment of a higher-order mental level, albeit one covered by the fixating imposition of an ontic self possessing itself and its properties from its own side, inherently, imputed as an eternal self-powered, self-identical & nondependent mental substance. It goes without saying that to the ante-rational layer of mind, such megalomaniacal display is very appealing, stimulating the re-emergence of instincts & emotions, signified by signals & icons. However, it merely serves -by way of paradoxical intention- the end of reification. The ontic self makes way for the transparent self, ending in selflessness.

ε. The higher ontic self is not a strong object of negation, but its emptiness is. This self needs to be thoroughly identified before it can be emptied of itself, thus leaving not naught, but the very subtle transparent self-reflection present in the cognitive act.
This "prise de conscience" is a totalizing awareness of consciousness as object and so if not reified, the portal to the selfless self-awareness of nonduality. The creative mode merely prepares the mind, refining it to the point it apprehends the totality of its sensate and coarse mental objects as empty of itself, i.e. as a process without own-nature ("svabhâva"), with "no self" ("anâtman"). This means they are not themselves, neither are they not something ! Avoid both extremes of eternalism and nihilism. ζ. The reification of the higher self, designating an ontic, substantial self or subjective own-nature, can also be a stepping-stone to the reification of the transcendent object itself. ζ.1 When emptiness is designated as a ground not to be emptied of itself, absolute truth is raised to become a different ontological entity, plane or level, giving birth to the idea of the absolute being high up (Heaven) versus the relative being down low (Earth). ζ.2 For emptiness to be empty of itself, the absolute must merely be a property of every possible actual occasion, existing conventionally in every possible apprehension of sensate & mental objects. When Two Truths become a single Truth, how can the shepherd mind his flock ? Ante-rationality needs reason to solve its problems, but reason cannot silence instinct & emotion. While for a rational human being reason has the final say, the decisions of reason lack the capacity to encompass the various semantic connotations invoked by instinct & emotion. Rationally, these signals & icons seem outlandish and irrelevant, but as far as these ante-rational mental imprints are concerned, reason speaks a foreign language and so imposes a misunderstood rule. Rational analysis cannot integrate ante-rational information. Another false path is to replace reason by meta-rationality. As if the latter is not imputed on the basis of conceptual stability ! To make the choice to totalize, ontologize & then desubstantialize is the prerogative of free study, in particular metaphysical studies. Meta-rationality does not yield a superobject nor a supersubject, but merely a panoramic perspective on the process of the mundus and a philosophical reflection on the transcendent object based on its (direct) prehension. As soon as a speculative discourse invoking the absolute becomes an eulogy of the "thing of things", possibly inventing a theo-ontology, it transgresses the "ring-pass-not" of critical thought. The transcendent object being empty cannot act as nondependent or ontologically different from the relative. § 4 Direct Experience & Cognitive Nonduality. α. To introduce nondual thought, reason & contemplative experience have to be distinguished. Ultimate logic only eliminates the reification of the concept. It does not end conceptuality, for the latter belongs to the valid processes of the conventional world ruled by relative truth ; valid but mistaken. Compassion and meditation on the emptiness of all possible concepts, involving a deep reconditioning of the mindstream, bring about the end of the reification of concepts. This is the purification of the conceptual mind. Concepts are not the problem, their reification is. Prehension no longer grasps, but finds objects as they are. β. A direct introduction to and discovery of the natural light or the mind of Clear Light*, does not cause something, but rather, as a perfect mirror, reflects, when secondary causes manifest, the movements of energy and the processes appearing in it. 
The natural light of the mind cannot be observed, for it is the very thing observing, perceiving only the suchness/thatness of the actual occasions without conceptual interpretation. This light is a potential, an open space of possibilities. It is the nature of the mind as it is by itself, its witnessing clarity. γ. Nondual thought is not discursive, nor conceptual. In other words, the apex of thought is non-verbal. Myth, the beginning of cognition, is also non-verbal, but opaque & non-reflective (and, mutatis mutandis, non-reflexive). Nondual thought, the end of cognition, on the contrary, is highly reflective (dynamical, differential, energetic) and sublimely reflexive, with the absolute subject prehending the absolute object. But this is no longer "inner" knowledge, nor even arguable (immanent) metaphysics, for it lacks all forms of conceptual duality and cannot be symbolized, except in sublime poetry. Perhaps as a direct, self-liberating, self-transforming, wordless, instantaneous awareness of the unlimited wholeness of which one's nature of mind is part. If this highest, nondual awareness is called "wisdom", then wisdom transcends the concept, be it concrete, formal, critical or creative. δ. Because the nature of mind is ultimate reflectivity & reflexivity (the absolute I knowing the absolute Other), the original mind of Clear Light* is thus (a) self-clarity, like a Sun allowing itself to be seen or as a lamp in a dark room lighting up the room but also itself, (b) primordial purity, or the absence of conceptual elaboration, (c) spontaneous perfection, self-liberating all reifying flux within consciousness, (d) unobscured self-reflexion, as in a polished mirror, transparency in variety, like a rainbow or as water taking on the colour of the glass and, as space accepting all objects in it, (e) impartiality. ε. Although without conceptual object, this subjectivity is "aware". It is the "awareness of awareness", self-settled, wordless, open and reached by a pathless path leading to a pathless land. It is clarity, but without differentiating anything. The fundamental nature of the mind is not part of consciousness. This nature is simply always present to and aware of the state of absolute absoluteness it finds itself constantly in. This is an absolute & blissful selflessness only aware of its absolute object, the lack of substance in all things, itself included. This is an absolute experience of duality, and therefore a nondual dual-union, non-conceptual and so paradoxical. Although not a consciousness, it is a mode of cognition and so definable in terms of the transcendental duality, but then in an absolute sense. But in the case of nondual cognition, a special dual-union pertains. Nondual awareness is not induced by any immediate prior condition. It has no cause. It cannot be determined by a previous moment of consciousness. It is a self-settled, wordless, non-conceptual, open awareness, without a place ("epi") on which a subject might stand ("histâmi") and so pre-epistemological. These ideas are not the result of any reasoning, but poetical elucidations. ζ. This original nature of mind is absolute. So it will, if not deconstructed, act as a transcendent signifier. Hence the distinction between immanent & transcendent metaphysics. Despite non-conceptuality, direct experience apprehends this open, clear awareness or very subtle mind of Clear Light* present in nondual cognition as a direct encounter with the something not found among sensate or mental objects, with i.e. 
absolute reality nakedly, purely & primordially united with absolute ideality.

η. The display of phenomena arising out of the empty all-ground or world-ground features (besides primordial matter and primordial information) a cognizing luminosity, presenting (a) an original nature of mind and (b) a primordial enlightenment-being ("Âdi-Buddha") or God* (not to be confused with the self-sufficient ground of classical ontology).

η.1 The non-separation between the absolute all-ground and the original nature of mind is the experiential fruit of directly experiencing this ultimate nature.

η.2 While this experience is ineffable, mystics never stop talking about it. When they do, they speak not as scientists, nor as philosophers, but merely as poets.

When pursuing absolute truth, conventionality is not considered a negative, like something imperfect or useless. Why ? Because there is nothing outside the world-system. The world-system is all there is. Its infinite, absolute ground is not a self-sufficient ground, but a dependent arising empty of itself, yet full of an abstract "something" shaping the possibility of all possible concrete actual occasions. Together, this primordial consciousness or mind of Clear Light* (of God* and of all other beginningless mindstreams), pristine information and virtual quantum plasma make up the set of formative abstracts. They represent the world insofar as it is merely potential, virtual, possible. Although an infinite truth transcending the relative, finite world, it is nevertheless not a different kind of being, not another "class" of actual occasions. Hence, unmistaken absolute truth is revealed in every cognitive act, and this simultaneously with its valid but mistaken relative appearance. The absolute exists conventionally. Not in a "higher" realm topologically distinct from the actual world, but precisely at the very momentary instance when the actual world is observed. The absolute is always-with-the-world.

§ 5 The Epistemological Status of Nonduality.

α. The experience of nonduality is a first-person prehension of the nature of mind, recognizing its Clear Light*. This hidden & ineffable observation is "mystical" and cannot be described. This prehension by the absolute subject (the mind of enlightenment) of the absolute object (the lack of inherent existence in all phenomena) is the observation of its suchness/thatness, or momentary presence with nothing more. This is unmistaken, without obscurations, veils or concealments.

β. The logic of the tetralemma ("catuskoti") offers the best conceptual approach to nonduality. This tool frees consciousness from all possible reifying conceptualizations, namely by negating all substantial views, introducing all phenomena as without inherent existence, eternal substance, absolute identity or immortal essence ; impermanent but not random. In logic, the particle "not" has no other function than to exclude a given affirmation. The tetralemma therefore excludes everything by exhaustively analyzing what emptiness is not :

1. it is not as it is (identity) : things are always connected with other things and if change by way of determinations & conditions is accepted, then all identity is impermanent and devoid of inherent existence, own-nature or substance ;

2. it is not as it is not (negation) : likewise, the negation of anything cannot be done without negating other things, making what is being negated interconnected and thus impermanent ;
3. it is not as it is and as it is not (mixture) : to say this clause has meaning is to utter a meaningless "flatus vocis", except if differences in time, space & persons are introduced. In the latter case, the mixture is a new identity, and (1) applies ;

4. it is not beyond as it is and as it is not (included middle) : only if (1) & (2) cannot be clearly defined may this clause apply, but it is rejected as invalid. Denying the included middle implies the excluded middle.

γ. Using the "reductio ad absurdum", the tetralemma negates the four options given by formal logic (a schematic rendering of the four clauses is sketched after this section). Accepting the first two is "nominal", and no valid path to liberation, for suffering is what is common to everything. Identity has to be renounced and its emptiness realized, i.e. conceptualizing the impermanence of everything results in the end of reifying conceptualization. Accepting the last two is "irrational", for in classical logic, non-contradiction & the principle of the excluded middle are necessary (although many-valued logics do not accept the principle of the excluded middle).

δ. By restriction ("nirodha"), each clause removes, dissolves, evacuates & calms the final obstructions of knowledge (cf. "jñeyâ-varana"). The result is a conceptual mind close to or approximating the nondual state. The tetralemma expresses the inapplicability of ordinary, nominal conceptual language to the absolute. The idea behind the tetralemma is to establish a view without concepts, i.e. to employ logic to reach beyond logic. This can only be prepared, leading to the purification of the conceptual mind. Indeed, the "wisdom" of meditative equipoise cognizing emptiness is not induced by an inferential consciousness segueing into emptiness. The conceptual "operation" of the tetralemma is not a process by which conceptual thought is spontaneously transformed into the highest possible wisdom.

ε. Conceptuality cannot be the cause of non-conceptuality. Ultimate logic proceeds to eliminate reification but does not and does not need to annihilate the concept. Hence, there is no conceptual "operation" establishing the nondual view, no path to the final step, the apex of cognition.

ε.1 One needs to completely use up the fuel of the "fire" of reifying conceptual elaboration (this is "nirvâna"). So negating what must be negated, namely inherent existence, is the supreme antidote to cancel the poison of ignorance.

ε.2 Only prolonged spiritual exercises (combining calm abiding or tranquility with insight or analysis) are able to properly prepare the mind to experience emptiness directly. This is not like propelling it into "seeing" emptiness, for non-conceptuality arises at the precise moment the highest, purest veil of the conceptual approximation of emptiness is pierced. The fabrication of suchness/thatness by applying the rules of ultimate logic is the ultimate preparation approximating "seeing" full-emptiness, the union of dependent-arising & emptiness. This preparation is however conceptual and so not yet nondual. No doubt advanced, it is not yet direct, seedless, without means, unfabricated. After having made the mind supple, conceptual preparations must be exhausted. A generic concept of emptiness is then realized. But this is not the same as unfabricated suchness/thatness, the direct, unmediated experience of the absolute nature of all possible phenomena.
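As a purely conceptual aid, the four clauses of the tetralemma can be rendered schematically. The predicate P below, read as "exists inherently (from its own side)", is an interpretive gloss introduced here for illustration ; it is not a symbolization given in the text itself :

(1)\ \neg\, P(x) \quad \text{(not as it is : identity)} \\
(2)\ \neg\, \neg P(x) \quad \text{(not as it is not : negation)} \\
(3)\ \neg\, \big( P(x) \wedge \neg P(x) \big) \quad \text{(not both : mixture)} \\
(4)\ \neg\, \neg \big( P(x) \vee \neg P(x) \big) \quad \text{(not neither : included middle)}

Read with classical negation, (1) and (2) cannot both hold ; on the reading sketched here, the fourfold negation does not assert a classical contradiction but signals that the reified predicate P is inapplicable, which is the inapplicability of nominal conceptual language to the absolute stated in δ.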
So epistemologically, the transcendent holds no conceptual truth-claim and has no conventional validity, but only ultimate validity (in terms of the act of prehension, itself beyond validation). It is not an object of science nor of immanent metaphysics. Neutral to both, it cannot enforce. There is no coercion in salvation. Nevertheless, by directly observing the ultimate nature of all things, thus entering the wisdom realizing emptiness, an unmistaken, non-conceptual experience is possible. In the teachings ("dharma") of the Buddha this experience is nothing less than awakening ("bodhi"), establishing the mind of enlightenment for the sake of all sentient being ("bodhicitta"), the unity of bliss (compassion, method) and emptiness (wisdom). Such an enlightened mind is omnipresent and omniscient (aware of past and present). Although superpowerful, it is not omnipotent. The mind of Clear Light* is valid because reality-as-such is prehended. Because it does not make things appear differently than they are, it is also unmistaken. C. Irrationality versus Poetic Sublimity. If nonduality cannot be conceptually appraised, it must be understood as a highly subjective experience. Relevant no doubt, it has no direct significance whatsoever. So is it irrational ? This would be the case if nonduality would eliminate the conceptual mind. But just as rationality does not eclipse ante-rationality, non-conceptuality does not preclude conceptuality. Awakening does not stop one from thinking in terms of conceptual relationships. Devoid of the reifying tendency so active in the rational mode of cognition, such a mind simultaneously prehends emptiness & fullness, absolute (ultimate) & relative (conventional). Precisely because nonduality is non-conceptual, it cannot argue and so through argument validate the experience of the ultimate. Therefore, as soon as one tries to argue nonduality, irrationality lurks. Apologetics are off. Only direct experience is at hand. This can be prepared, no doubt, but not a single correct preparation causes nonduality ! It can merely be pointed out, introduced or recognized. If not, nothing else can be done. Nondual experience impacts conceptual thinking and therefore proves its significance indirectly, namely in the behaviour of those in which such a profound state is fully realized. Indeed, great compassion or limitless charity is the activity of the mind of Clear Light*. Aware of the vastness of suffering, such a mind engages to alleviate the pervasive suffering present in conventional existence (or a life defined by the determinations & conditions of conventional knowledge). Hence, such a mind has a very powerful intent to end the suffering of all sentient beings and the unmistaken, realized & forceful potential to do so. § 1 Featuring Irrationality. α. Irrationality cognizes without the inclusion of rationality. Its spirit is not dampened by the diabolus in logica, non-contradiction. It either lacks universalia (as in ante-rationality) or does not appreciate the validity of concepts (as in invalid transcendent metaphysics) and so lacks the capacity to identify its mistakes. It has not yet arrived at the cognitive level introducing concepts (as in myth), is unable to establish a stable concept (as in pre-rationality), is bound to context (as in proto-rationality) or cherishes a dogmatic view held true for no good reason, as in blind faith and pre-critical forms of conventionality. β. The many forms of irrationalism all try to undermine reason, introducing absence of sense. 
In general, nonsense does not accept the power of logic to decide between valid and invalid, between true and false, between mistaken and unmistaken. Making use of logic to defend its dogma, as a form of apology, it mostly tries to seduce others into uncompromising salvific moves.

∫ The deceptions of irrationality may fool some for some time, but never succeed in bamboozling everybody all the time.

γ. Like myth, nonduality is non-verbal. But while myth is non-reflective and non-reflexive in an a priori way, the ultimate mind is highly reflective and sublimely reflexive. Precisely because of this, the indirect influence of this mind is very powerful. When turned towards others without enforcing anything, triggering spontaneous attunement & metanoia, it identifies ultimate truth in every moment of its awakened mindstream. This is not scientific nor metaphysical, but existential in a poignant, instantaneous way. Spontaneously liberating all ignorance in every moment of the mindstream, suchness is acknowledged as carrying its own index of truth, possessing the ultimate clarity. Very subtle reification needs to be avoided, for the absolute is empty of itself ! The awakened mindstream prehends the absolute object. This is like the son jumping into the lap of his mother.

δ. Irrationality always tries to limit & darken the rational mind. This disruptive activity is ongoing, for the imprints left by the ante-rational mind are powerful emotions & instincts. Come into its own, the mature rational mind cannot eliminate the latter. They provide the vital emotionality with which the desperate search for a self-sufficient ground is clothed. If coarse irrationality leads one to overt insanity, then subtle irrationality is the power of the grip clinging to substance. Very subtle irrationality is making the self-sufficient ground transcendent & eternal, the ultimate spiritual stabilization in self-contradiction.

ε. Due to its coarse irrationality, the ante-rational mind becomes confused and stays in permanent, unresolved conflict. The rational mind mediates the contextual problems with abstract concepts and defines the finite world by way of tangential limit-concepts. Here, irrationality feeds on the tendency of the rational mind to reify. Substance-thinking is the subtle form of irrationality. Lastly, when the mind of Clear Light* is reified in terms of an absolute mind-substance (eternal soul) or an absolute object-substance (God), very subtle irrationality is introduced.

∫ Do organized religions hold a monopoly on very subtle irrationality ?

Coarse irrationality is often associated with afflictive emotions and violent instincts. These can be identified with ease. Mental disorders like schizophrenia provide case-studies showing how such minds lack the ability even to take care of themselves in the most essential ways. In psychosis, visual, auditory & tactile hallucinations occur. Mental retardation or the uncontrolled activity of ante-rationality also displays irrational intentions, volitions, affects, thoughts and states of consciousness. Subtle irrationality, because of its pervasive activity, is more difficult to identify. Here the hallucination is mental, in particular the projection of the imago of the eternal substance. It always involves fixating some object, some subject or both. It can be conscious, as in metaphysical realism or metaphysical idealism, or unconscious, as in the uncritical, untrained conventional mind of "homo normalis".
But one cannot introduce an abstract without a logical leap from a finite set to an infinite set, without the "deus ex machina" or "trick" to save the corrupt plot. Very subtle irrationality hallucinates a hallucinating being. However, in critical philosophy, no reified concept of emptiness or reification of emptiness are possible, for the world is a sea of process. § 2 Transcendence & Art. The sublime is beyond excellence & exemplarity combined. As an intensity of meaningful presence, it captivates every moment of consciousness. Offering clarity, it puts interdependence to the fore. Empty of itself, it is the all-comprehensive prehension of otherness. β. Sublime works of art unfold a unique evolutionary process of spiritualizing states of matter and testify of the continuous process characterizing the natural, nondual, Clear Light* mind. They are our grand ancestral examples. They do not coerce, nor do they unfold in any hesitant way. They are more enduring cultural compounds, bringing the laws of beauty to their highest efficiency & finality. γ. If art, the making of beautiful objects, is a medium for the direct experience of emptiness, then the tale of un-saying can indeed be told, not only with symbols, but also with icons & signals as in a "Gesamtkunstwerk". In terms of the written text, the poetic style excels as a potential carrier for all possible mystical elucidations. Poetry, in addition to, or in lieu of, its apparent meaning, adds aesthetic features to any text. Sensate aesthetic features are denotations based on sensation. Evocative aesthetic features are affective, volitional, cognitive & conscious connotations based on denotations. Excellent poetry combines these features in an exquisite, functional whole. Aesthetic judgement of excellence is not based on the aesthetic features themselves, integrated as they are in an excellent organic whole, but on their total or partial aesthetic meaning. Turning free creativity into symbols, icons & signals, excellence points to qualities beyond the conditions imposed by sensation. A higher-order form is at work. All what matters, is the way these differential changes in exquisite aesthetic features are an expression of consciousness. One does not seek beauty (as in pleasure & satisfaction), but shows how beautiful beauty is (as in excellence). The exemplary moves further. ζ. Poetry moving beyond excellence is exemplary. The aesthetic judgement of example is based on a spectrum of possible abstract forms of harmony, ranging from the entirely subjective to the entirely objective. These abstract forms, rooted in transcendental aesthetics, are necessary and formal (cf. Criticosynthesis, 2008, chapter 5). The transcendental object is a sensate object, a text, the subject an expressive poet. All harmonisations necessarily involve this pair. Positing, comparing, denying, uniting & transcending are the five models of harmony. The sublime moves further. • Positioning : affirming the object without the subject or affirming the subject without the object ; • Comparing : considering the object more than the subject or considering the subject more than the object ; • Denying : rejecting the object or rejecting the subject ; • Uniting : identifying object with subject and subject with object ; • Transcending : zeroing out of all harmonization, without object or subject. η. Beyond excellence and exemplarity, poetry is sublime. When an artist displays his or her natural mind of Clear Light*, sublime realizations result. 
In these, everything is permeated with the open potentiality present in the mind of the sublime artist. Thought poetically, this Clear Light* is the object of a transcendent metaphysics, backed by an arguable philosophy of totality and inspired poetry. Clearly nothing truly valid or arguable can be said about the sublime. Because all sentient beings possess the potential for awakening, they can all respond to sublimity.

θ. Given that the sublime harmony of the mind of Clear Light* cannot be conceptualized, it stands to reason that only poetry and great compassion are left. The former is suggestive of its profoundness, while the latter brings about its most cherished intent : to awaken all possible sentient beings. At their best, the holy scriptures of the organized religions, and the "sûtras" of those trying to say something about what cannot be put into words, are examples of such sublime poetry. If not, like all forms of kataphatic transcendent metaphysics, they are merely dangerous deceptions. And the same goes for the present speculations ... The value of a poem is for the actual reader to decide.

Given that nondual cognition is non-conceptual, nothing can be said about the phenomenology of prehension, the cognitive capacity to think in a nondual way, fully entering the wisdom realizing the empty truth of all possible phenomena. Only direct experience remains possible. Breaking silence is merely for apologetic reasons ; as the history of the religions shows. Then the highest level of cognition is monopolized by a kataphatic soteriology. To assign the "highest name" to the "highest Being" was a way to conjure it, to allow rationalizations of what cannot be rationalized. Beyond being, non-being, both being & non-being, and neither being nor non-being, this level of cognition does not allow for any labelling or name-giving. Working at the level of direct perception, this prehension is beyond conceptual description. Though it can be felt and though it can direct action, no valid, in this case arguable, statement can be made concerning it.

Transcendent metaphysics is not rational but meta-rational. This means it must be poetical, for only poetry allows the sublime to be prehended in a written text. Like music, it has the capacity to evoke a "mandala" or "Gestalt" and its interdependences. Like mathematics, it is a fluid and sensitive structure born out of mental balance. But poetry has no truth-claim, no conceptual stability and no a priori logic. Swimming the free style, sublime poets merely point out, but do not instruct. This medium is excellent for all possible spiritual elaborations using conceptual reason (and so text). Never dogmatic but ever discreet, sublime poetry is only revelatory in the sporadic spur. It builds no Babel.

1.4 Ontology.

"Philosophers can never hope finally to formulate these metaphysical first principles. Weakness of insight and deficiencies of language stand in the way inexorably. Words and phrases must be stretched towards a generality foreign to their ordinary usage ; and however such elements of language be stabilized as technicalities, they remain metaphors mutely appealing for an imaginative leap." - Whitehead, A.N., PR, § 6.

Let us take heed of this warning. The speculative study of those features shared by all possible actual occasions is not a science. It does not advance any sensate object, but, when valid, merely brings greater order in and larger scope to our mentality or set of mental objects. This is provisional and dependent on the advancement of science.
Process philosophy devised a very specialized technical language to explain the phenomenology of actual occasions, making it, for example, suitable for metaphysical inquiries into quantum mechanics. This is possible because, despite technicalities, metaphysics in general and ontology in particular call for an imaginative leap. Grand stories are told because they inspire, not because they are eternally true.

In his Physics, Aristotle deals with material objects or entities. Metaphysics, "what comes after Physics", takes as its object the immaterial, non-physical entities (beyond or behind the physical world), with theology at its core. Moreover, Metaphysics also studies being in general or being as such, i.e. the study of what is shared in common by all possible entities. This "first philosophy", dealing with the most basic principles based on what all possible things share, is a study of being qua being, leading to the most general concepts or categories of being. What being makes beings be ? Christian philosophy (sic) forged an alliance between theology and this first philosophy. The God of scripture was deemed that Being. He sent His holy word for humans to follow and all the rest of it. In the XVIIth century, first philosophy divorced theology and became general metaphysics. In 1613, the term "ontology" was coined as another name for "metaphysica generalis". And so this became the task of ontology : what do all possible beings have in common ? Process ontology asks : what do all possible mental & sensate processes have in common ? And when this is established : What is there ? and What is truly there ? These questions inevitably lead one to ask : What is the absolute ? Theo-ontology is thus merely an instance of ontological inquiry.

A. Defining Ontology without the Nature of Being.

Before Kant, "General Metaphysics" or ontology was substantialist, essentialist and so seeking a self-sufficient ground, i.e. the self-sustaining & final substantial level of all that is. The presence of such an independent, autarchic "hypokeimenon" was not in doubt. To seek a "ground" goes with the territory, for ontology determines the common features of every possible thing. But to define these general concepts covering all possible phenomena as (a) existing from their own side and (b) forever remaining the same or permanent, is the cul-de-sac of pre-critical metaphysics. We need a sufficient ground, but not a self-sufficient one.

In an absolutist view, valid science & valid metaphysics are eternal. So the absolute nature of all possible phenomena must be eternal too. Hence, the ground of this understanding concerning the general features of all that exists must be something permanent, substantial, essential. Criticism unmasks this eternalization assumed by substantialist foundationalism as an illusion. The common ground argued by ontology is a speculative understanding of what all phenomena share. This metaphysical knowledge, even when valid, is not lasting, but, like all conventional knowledge, valid or invalid, provisional, relative and likely to change. Pre-critical metaphysics, unwilling to embrace radical nominalism, was unable to conceptualize a non-substantial ground. The origin or "arché" was eternal, unchanging, own-powered, with a nature existing on its own, with inhering properties. This own-nature is either an objective "substance of substances" or a subjective "self", both possessing their own-form or isolated, essential, unique & unchanging character.
Single, dual or triadic, the first principles of the ontology of old were substances. Thinking a non-substantial ground is affirming it is not self-powered but other-powered and "present" since "beginningless time". Given physical space & time came into existence with the Big Bang, the ground of the totality of the world, also called the ultimate ground of phenomena or world-ground is virtual or potential, i.e. nothing with the potential to become something. It is not a primordial or ultimate cause of the world, but its mere possibility. This virtual world-ground is the infinite set of propensities making the finite actual next moment of the world possible or likely. The world-ground is not another ontological order, "hidden variable" or a different substantial & deterministic world behind, beyond, before or within the world, for there is only one ontological order, namely the world of actual occasions. It is more like an abstract, virtual world preparing concrete actuality. If there is only one world, as naturalism extols, then the ground of Nature cannot be an ontological explainer, an ultimate self-sufficient cause abiding in a "Hintenwelt", for there is no Platonic "chorismos" or rift between two ontological worlds. Hence, there is no God creating the world "ex nihilo". Paying compliments to God* is one thing, but taking them serious is quite another ! Before the world physically existed, the primordial quantum plasma pre-existed as one of the three non-temporal, infinite formative elements characterizing the infinite world-ground (together with primordial architecture and primordial sentience). In process cosmology, these designate the limitless possibility, potential, likelihood or propensity of creative disturbance, deflection or "clinamen" of selected probabilities, making another Big Bang (after a Big Crunch or Big Evaporation) very likely. Both world and world-ground make out the world-system. The world is the sea of concrete actual occasions rising from infinite possibilities, featuring primordial matter, abstract forms of creativity (unity) & absolute sentience (the dual-union of the nondual mind of Clear Light* of the absolute mind of all possible enlightenment of all possible mindstreams). The world-ground is called "ground" because of these formative elements, covering potentialities pre-existing outside space and time. This non-temporal & non-spatial order of propensities is grasped in terms of the fundamentals of the possibility or probability of process, but then in absolute terms : absolute sentience, the creative laws of the world and the primordial quantum field. In this way, these three formative elements of the world tie in with the three ontological aspects of every actual occasion at work in the mundus ; matter, information & consciousness. The world of actual occasions hic et nunc is like the "music of the spheres", the actual ongoing cosmic symphony of togetherness of countless interdependent actual occasions. The world-ground is then like the "voice of the silence", the material, creative & sentient probabilities or potentialities making possible the next moment after this moment in the infinite histories of the worlds. § 1 Place of Ontology in Metaphysics. α. Critical process ontology asks  this : What do all possible mental & sensate processes have in common ? The answer to this question, aiming at what all objects share, directly influences the outcome of any metaphysical inquiry. It determines the fundamental concepts of the worldview in question. 
Any error at this level harms the precision of the arguments targeting specific objects. But given a well-argued ontology, the general argument dealing with the totality of the world cannot fail (for, if not derived from the ontology, it remains dependent on it). β. No theoretical philosophy features strong, coherent unity without a valid ontology. A general perspective cannot be derived from a finite set of specifics. It has to be solemnly inducted. This is an intuitive, creative moment. Eliminating ontology from philosophy is like painting without paint. The soundness of ontology reflects on the coherency of the worldview. Logic and argument are then all that remain. In both cases, the choice of logic is paramount. This brings in the question of style (cf. supra). γ. Ontology makes a fundamental choice. It designates the ultimate object, namely the one object or ontological principle shared by all possible phenomena. Reifying the object/subject relationship necessary in all possible cognition, classical ontology invented substantial objects and/or substantial subjects, acting as self-sufficient anchors to stabilize their foundationalist systems of being. This resulted in (a) the substantial, ideal (super)subject of subjectivism and its spiritualism, (b) the substantial, real (super)object of objectivism and its materialism (or physicalism) or (c) the substantial duality of rationalism, with matter interacting with the non-physical mind. δ. The fundamental choice is intuitive. Singling out a common feature calls for a creative act explained in the course of its well-formed elaboration, defining a hermeneutical circle. This cannot be avoided. The Eureka !-moment cannot be caused. Neither is it void of determinations and conditions. In the past, the extremes of spiritualism & materialism excelled in the drama of knowledge. Derived from reductionism & foundationalism, these metaphysical extremes have had their best time. Instead of identifying their first ontological principle with either the object (materialism) or the subject (spiritualism) of the concordia discors, ontologies of the extremes are avoided by asking : what do both object and subject have in common ? Hence, materialism & spiritualism are unmasked as incomplete answers derived from an unsuccessful reduction (of mind to matter or of matter to mind). ε. If something exists, it does not merely exist because it appears to an observer to exist. Absolute idealism is rejected. If something does not exist as a substance with inhering properties, it may exist as some thing in process. Relative (conventional) realism is retained. The Middle Way fares well between the extremes of absolute affirmation and absolute negation. Because all objects are deemed to share a finite set of first principles, ontology is "first philosophy". These first principles orient all further possible speculation. Process ontology seeks a series of concepts dealing with the fundamental properties of all possible phenomena. The latter are deemed processes, not natures (or substances). Given the two sides of the transcendental spectrum of conceptual rationality, classical ontology reduced either subject to object (eliminating mind, as in absolute objectivism) or object to subject (eliminating matter, as in absolute subjectivism). Subject-ontologies fail because they cannot explain the tenacity of some sensate objects. Object-ontologies fail because they cannot operate without a subject possessing its object. 
Process ontology wants to establish the common ground between subjectivity (mind) and objectivity (information, matter). It finds this in the concept of "actual occasion" or isthmus of actuality. This is a moment x of what exists hic et nunc, with differential extension x·dt. § 2 Objects of Ontology : What is There ? α. When the view, in casu process-based phenomena, has been established, ask : What is there ? The exactitude of objects, their quality of having high accuracy & consistency, refers to their ontological status, namely to what kind of object is at hand. Four categories of objects are distinguished : (1) absolutely nonexistent objects, (2) fictional objects, (3) relatively existent objects and (4) absolutely existent objects. β. Absolutely Nonexistent Objects : That Which Is Not. When an object does not exist, nothing can be identified corresponding to it and so nothing ostensibly refers to it. Absolutely nonexistent objects are always analytically nonexistent objects involving a contradictio in terminis. They are a fortiori nonexistent in an absolute sense. A square circle, a triangle with four angles, a curved flat space etc. cannot correspond to anything, although by themselves the words "square", "circle", "triangle", "angle", "four", "curved", "flat" and "space" do make sense. But when combined, a mental clash occurs eliminating any possibility of even imagining something associated with the combination. The void is not the empty set of potentialities, of nothing (infinite emptiness) becoming something (finite fullness). γ. Fictional Objects : That Which Deceives. Fictional objects like Hamlet are deemed not to exist, although in Shakespeare's play called "Hamlet", the Prince of Denmark is a leading character. Nobody versed in English literature agrees with the statement nothing is aimed at when the name "Hamlet" is mentioned, but when asked where Hamlet precisely lives, no answer can be provided ! He is not in Denmark, nor does he "exist" in the text of the play named after him. But when the play is actually performed, no member of the skilled audience will have any difficulty identifying Hamlet. γ.1 In the case of the unicorn, we assemble two existing objects (namely a white horse and a large waved horn) and this combination exists in our imagination. Sometimes these objects are merely a private fantasy, sometimes they can -through trickery- be made intersubjectively available. Indeed, before recent times, the horns of a rabbit, the hairs on a fish, the wings of a turtle, a unicorn or a pink flying elephant, etc. could not be pointed at as moving and/or three-dimensional objects. By rapidly projecting digitally manufactured pictures on a white screen, any fiction conjured by our imagination may be generated on it. Even depth can be holographically manufactured. In that way, what used to be merely private imagination can be made intersubjectively available "on screen" repeatedly. While nothing more than tricks with artificial light, these objects may move us, influence us and prompt us into action. γ.2 Fictional objects are either private or public. Dreams and personal fantasies, ranging from the fruits of a fertile imagination to psychotic hallucinations, are not available to others. They can only be identified by the subject to which they appear. Nobody else is available to grasp at them. They nevertheless exist as fictional objects. 
Intersubjective imaginal objects, like fictional characters, cinematographic objects, artistic objects, collective projections or objects appearing as the result of collective hypnosis, also exist because one can indeed aim at them, but this identification is intersubjective, very limited in time, unstable and, most importantly, based on a trick, i.e. an intended deception. γ.3 Fictional objects exist because a conscious agent intends to fool. To do so, elaborate trappings are introduced. These may be physical (mechanical devices or electronic systems), or psychological (as in suggestion, hypnosis and placebo). Without this intent to trick, i.e. to misrepresent reality, positing something which cannot possibly be there, fiction would not exist. Summarizing : fictional objects are relatively nonexistent objects. δ. Relatively Existing Objects : That Which Conceals. Relatively existing objects are those apprehended by the normal waking consciousness of most, if not all, human beings. These are sensate objects and non-fictional mental objects. Their "normality" is defined statistically (a majority apprehends them as they appear), normatively (given all necessary conditions, they must be apprehended as they appear) and existentially (their apprehension is co-relative with a particular observer). They are mostly intersubjective, relatively stable, nominal, conventional and independent of conditions put in place with the explicit intent to deceive. They can also be intimate & private, or reflective of automatic & unconscious activity. Except for non-fictional mental objects (like accurate memories, the activity of imagination, volitions, affects, thoughts and states of consciousness), they are always shared with other conscious agents. Although they change as a function of spatio-temporal conditions, these alterations may be slow, small and nearly imperceptible, as in the extreme case of a mountain, the life of a star or the existence of the universe. They may be quick, large and obvious, their existence deemed ephemeral, fleeting or transient, as is the case for climatic conditions or the position & momentum of observed atoms. These objects define what we understand by "normal" reality, one shared and delimited by others, and hence conventional. These objects are not fabricated or manufactured by any human intent to deceive others. They are what is nominally "given". δ.1 Among these conventional objects, some misrepresent physical reality without the artificial intention to deceive. They may be optical illusions one can eliminate, as when a stick immersed in water -merely appearing as very large- is removed from the water. Maybe they cannot be turned around, as the apparent daily movement of the Sun, actually the rotation of the Earth on its axis, or a Hunter's Moon. Maybe these objects are no longer validated by science, like the caloric fluid deemed to flow from hotter to colder bodies. Among conventional objects, some temporarily represent existence in a valid way. These are the objects of science. The validation of these objects is defined by the principles of logic, the norm of theoretical epistemology and the maxims of the process producing valid knowledge about relatively existing objects. δ.2 The objects of science constitute the valid paradigmatic knowledge of the historical era in which these conventional objects appear. 
They represent the common ground between experimentation and argumentation, between being regulated by, on the one hand, an idea of truth focusing on the supposed correspondence between theory and conventional objects, and, on the other, a theory of truth regulated by the idea of the consensus between all involved sign-interpreters. A sign-interpreter is a conscious, cognizing consciousness operating signals, icons and symbols in a well-ordered way, according to principles, norms & maxims producing meaning by way of meaningful glyphs, or states of matter infused with information. δ.3 Relatively existing objects or conventional objects appear as inherently existing outside the subject apprehending them, inviting the division between "inner" and "outer". In this valid but mistaken view, they seem independent, self-powered, and existing from their own side, by their own "inner" nature, essence ("eidos"), substance ("ousia"), or own-form ("svabhâva"). But as ultimate logic proves (cf. infra), this is merely an appearance concealing their suchness/thatness or what they truly are. These conventional objects do not appear as they truly are, and so conceal their ultimate, implicit process-nature lacking inherent own-form. This is the case for all fictional and conventional objects. Even when the stick is removed from the water, and thus appearing smaller than it did when immersed, its conventionality still conceals its suchness/thatness. While (a) a deception, (b) the subject of an optical illusion (immersed) and (c) a valid scientific object, the solid stick continues seemingly not to depend on conditions outside itself to appear as it does, independent & localized. It still manifests as an object "out there", cut off from its observer. But when prehended in the nondual mode of cognition, each object is simultaneously cognized as empty of substance and fully interdependent. This means the absolute nature of each object is nothing more than one of its properties. ∫ Again : the ultimate exists conventionally. ε. Absolutely Existing Objects : That Which Is What It Is. These objects are apprehended by the wisdom mind of Clear Light* no longer bewitched by the illusion posed by any objects. Such a mind directly sees the suchness/thatness or full-emptiness of all phenomena, i.e. simultaneously apprehends how all phenomena (a) are empty of themselves and (b) full of otherness. Classifying what exists brings about two broad sides ; the conventional and the ultimate. Conventional truth is conceptual and rational, based on experimentation and argumentation, on valid science and valid metaphysics. Ultimate truth is non-conceptual and intuitional, based on direct nondual prehension and sublime poetry, not on argumentation. Transcendental metaphysics does not argue, but merely points at the Moon. In philosophy, both truths are in fact epistemic isolates (of the conventional and the ultimate aspect of every object, of its full and empty properties). In mysticism, they are the datum of a direct and unitary experience, prehending them simultaneously and this ongoingly (swimmingly). § 3 Monist, Dualist & Pluralist Ontologies. α. The fundamental ontological choice is either monist, dualist or pluralist. Only one, only two or more than two fundamental ontological principles prevail. Mindful of Ockham, the monad is preferable. By adhering to parsimony, the number of ontologically different entities is limited. β. The monist posits a single fundamental ontological principle. This is the most clear-cut and economical choice. 
If such a principle can be found and argued, a well-formed ontology ensues. With a single principle, all possible entities share the same fundamental ground and so can fully participate in each other ; their differences are nothing more than a measure of their distinctness. No ontological differences exist. γ. With more than a singularity, difference and distinctness are no longer the same. Ontological differences divide the world up into as many fundamental principles as designated. Dualists, like Plato & Descartes, settle for two fundamental ontological principles. Leaving the monad, their ontology mirrors the epistemological dyad of knower & known characterizing knowledge. From this point on, a dangerous confusion creeps in : How can two ontologically different principles explain the unity of the world ? If two things radically differ (grounded by separate principles), how can they exist together or form any relationships ? How can they ever interact ? γ.1 In neurophilosophy, this question is rephrased. How can the non-physical mind interact with the brain without breaching the energy-conservation law of thermodynamics ? A dualist ontology mirrors the ongoing tensions of conceptuality and does not succeed in explaining the unity of the manifold. In Platonism this problem is more or less solved by identifying the world of becoming as an illusion, a pale reflection of the true world of ideas. γ.2 In Cartesianism, the problems related to this duality eventually result in reductionism, privileging the physical (as in the realism of materialism & objectivism) or the non-physical (as in the idealism of spiritualism & subjectivism). δ. The pluralist tries to solve the basic ontological problem of dualism by introducing a "tertium comparationis". A closure is at hand, one leading to a triune concept, reflecting, to invoke synthesis, the third factor back to the first, the triad to the monad. Without this return to unity, only the triad is given and by addition of unity the "Ten Thousand Things" follow. Moreover, adding one or more fundamental ontological principles does not eliminate the basic ontological problem facing duality. On the contrary, to explain the difference between two elements with a third invokes another difficulty : how can two different factors be bridged by another different factor ? This seems like multiplying problems. ε. The proposed process ontology is a monism. Only a single ontological building block is assumed and called "actual occasion", the momentary actuality characterized by extensiveness. This moment, instance or droplet of Nature has properties. These can be understood when the temporal extension of any duration is progressively diminished without ever arriving at a duration as its limit. Such understanding is an abstractive set converging to the concept of Nature at an instant. ζ. Something is always going on everywhere, even in the so-called empty space of Torricelli. Nature abhors the void. Both the electromagnetic field & the lowest energy state (or uniform zero-point field) evidence the absence of an absolute vacuum in physics. η. "Actual occasion" is the building-block of process ontology, the differential phenomenal moment (as particle) starting the stream of moments (as wave). All entities share actual occasions. In ontology, the monist has the advantage. The totality of all phenomena in actuality is understood in terms of a single ontological constituent, thereby simplifying the basic ontological scheme. 
The issue here is not to explain difference, but to assure the complexity of the manifold can be prehended from the vantage point given by a single constituent. To ensure actual occasions are conceptualized to accommodate rather than to hinder their creative togetherness with other actual occasions, process ontology seeks a phenomenology of the actual occasion. The void does not exist. In empty space, energy is present. Substance cannot be found. The fullness of the mundus is given as the interdependence between all actual occasions entering each other's histories. The emptiness of the world is the absolute absence of self-powered, inherently existing objects with their likewise eternalized properties. This emptiness is not an entity, but merely a property of every actually existing thing. § 4 Failures of Materialist & Spiritualist Ontologies. α. Reductionist monism cuts existence in half. Add essentialism and one half is imputed as the self-sufficient ground, the other half is denied or deemed illusionary ("mâyâ"). All possible subjects of knowledge (knowers) possess objects belonging to two and only two mental categories, namely "sensate" or "mental". α.1 Materialist (realist) monism considers sensate objects to be fundamental and mental objects merely derived or emergent (with no downward causality). In its essentialist version, matter exists from its own side, independent & separate from the subjects apprehending it. α.2 Spiritualist (idealist) monism considers mental objects to be fundamental and sensate objects constituted by the former. In its essentialist version, the "Geist" exists from its own side, independent & separate from the objects it constitutes (as a Creator-God of sorts). Both reductionist strategies fail to explain the totality of the world-system, both as actuality & as possibility. Materialism cannot explain the transcendental unity of apprehension with the manifold, and spiritualism cannot explain the manifold by way of the intra-mental alone. β. Materialism fails to apprehend the intra-mental subject of experience correctly. The impact of conscious choice on material process is either non-existent or of no importance. If it does accept the reality of the non-physical in its own right, it cannot deliver a material (efficient) process to explain these non-physical (final) determinations. Moreover, the unity of the manifold cannot be explained by matter alone. If materialism is "true", then neither are logic & argumentation possible ! Hence, a priori materialism cannot provide its own apology. Bound to become dogmatic and in alliance with the media power and money, materialism is as grotesque as the ecclesiastic powers of old. γ. Spiritualism fails to apprehend the extra-mental object of experience correctly. The efficient determinations of material process on the non-physical are evident. To be cognizing, the mind has to possess an object. This is not an intra-mental but an extra-mental entity. To explain the working of free will without the laws of matter, or worse, to allow matter to be constituted by mind, cripples our understanding of the reality of the physical. Moreover, the variety, differentiation & multiplicity of Nature cannot be explained by the unity of the mind alone. Tending towards unity, mind cannot be made responsible for all possible physicality without damaging the rational understanding of the world. In its dogmatic form, spiritualism verges on the irrational. What can be worse than fools & folly running the world ? 
Process ontology does not seek its fundamental principle in either the mental or the extra-mental. The object/subject dualism is left intact and a deeper common denominator is found : the actual occasion. All phenomena, objects, events, entities etc., in short : all in existence is basically an actual occasion. Objects are moments with certain extensive properties and creative advance. Materialism and spiritualism fail to face the whole world. These are ad hoc monisms. They stop their analysis by reduction, not by integration. The latter means as many phenomena as possible are made part of ontology. Exclusivism becomes inclusivism. There is always something going on then and there. This is the one unit factor in Nature. Whether objects are mental or sensate, they can be reduced to actual occasions of which they are merely aggregates. § 5 Voidness, Emptiness & Interdependence. α. The absolutely nonexistent is the category of the collection of nothing at all. The empty set thought of as absolutely nothing with no potential whatsoever to become anything is called "the void". β. The void is an empty set with no possible members. Emptiness is the set of nothing becoming something. The void does not exist. Emptiness exists as pure potentiality, possibility or probability (the likelihood of something). In what follows, the concept "empty set" only refers to emptiness. If the empty set with no possible members is meant, the term "void" will be used. γ. All numbers can be bootstrapped out of the empty set by the operations of the mind. Suppose the mind observes the empty set. The mind's mere act of observation causes the set of empty sets to appear. The set of empty sets is not empty, because it contains the empty set. By producing the set containing the empty set, the mind has generated the first number, or "1". Perceiving the empty set and the set containing the empty set, the mind apprehends two empty sets and has generated the second number, or "2" out of emptiness, etc. upward to infinity. δ. The entire natural number system can be generated by the play of the mind on emptiness, and this without any need to refer to anything material or countable. Numbers are non-physical phenomena making no reference to physical systems for their existence. Numbers do not exist from their own side (as Platonic ideas), but are dependently-related manifestations of the working of the mind. 
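As an aside, the "bootstrapping" described in γ and δ is, in set-theoretical terms, the familiar construction of the naturals out of the empty set, in which each number is simply the collection of all numbers generated before it. The following minimal sketch (in Python ; the function name "bootstrap_numbers" is merely illustrative and not part of the original argument) makes the iteration explicit :

# A sketch of the construction described above : each natural number is the
# set of all previously generated numbers, starting from the empty set.
def bootstrap_numbers(n):
    """Return the first n naturals, each built as a frozenset of its predecessors."""
    numbers = []
    current = frozenset()              # 0 : the empty set, sheer potentiality
    for _ in range(n):
        numbers.append(current)
        current = frozenset(numbers)   # successor : the set of everything generated so far
    return numbers

for i, num in enumerate(bootstrap_numbers(4)):
    print(i, "has cardinality", len(num))
# prints : 0 has cardinality 0, 1 has cardinality 1, 2 has cardinality 2, 3 has cardinality 3

Nothing countable is assumed at the start ; the whole series arises from iterated acts of collecting what was already generated, which is the point made in δ.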
ε. Nothing comes out of nothing (the void) ; "ex nihilo nihil fit" ! Cosmology & physics cannot touch the question of the before of the Big Bang. As time & space commence with this singular explosion, to ask what was before is deemed nonsensical. But logically, any term is subject to a certain order or sequence. Ontologically, therefore, the issue can be approached in terms of a logical progression and as such makes perfect sense. ε.1 If before the Big Bang nothing is identified (or identifiable), then the void logically precedes the becoming of the physical universe. But if this is the case, then the Big Bang could not have happened. The fact of this singular beginning of the physical universe and the void as absolute nonexistence are thus incompatible. If there was absolutely nothing before the Big Bang, not even the possibility of something, then the Big Bang would be nonexistent too. But science tells us this is not the case. ε.2 To consider the Big Bang ontologically, emptiness must pre-exist. Not as any thing, i.e. as any concrete, worldly actual occasion, but merely as the potential or virtuality of such actuality. The potential of the Big Bang lies hidden in the world-ground, the mere possibility of the next moment of the world. What primordial determinations & conditions made the Big Bang possible ? These formative abstracts are primordial operators conditioned (not by their own-natures as in a co-substantial Divine Trinity) but solely by their primordial interrelatedness or virtual togetherness. ζ. The absence of substantial existence is the absolute property of all possible objects. This means the object is empty of an inherent nature or own-form, but this in full participation and togetherness with other objects. In this immanent approach, emptiness is merely a non-affirmative negation of substantiality. But for those having a direct experience of this transcendent signifier, emptiness is the potential to connect every thing with every other thing. And when the emptiness of the mind itself is seen, it is observed as the Clear Light* inseparable from the world-ground, the virtual pre-existence of the next moment of the world. Emptiness is not something, but nothing becoming something. When a concrete, worldly actual occasion emerges, there is no longer (virtual, formative) emptiness but full, actual interdependence. This nothingness of emptiness cannot be absolute nothingness (the nihilism of the void), but merely absence of own-form with the potential for infinite interactions shaping a unique plenum. Note this : the potential of emptiness, of form emerging out of the formless, cannot be apprehended but only prehended. Its experience falls therefore outside science and immanent metaphysics. "Seeing" emptiness is directly observing how absence of own-nature fosters creative advance through increased togetherness of actual occasions. Only non-conceptual, nondual prehension possesses such an absolute object. As identifying absence of own-form is conceptual, ultimate logic is no doubt "philosophical". Given "seeing" emptiness involves non-conceptual cognition, it may be called "yogic" or "intuitive". The former is given to all intelligent beings. The latter to those enjoying the hard work of their emancipation. B. Perennial Ontology ? Perennial philosophy cherishes the idea that within all spiritual traditions & religions, a mystical stream is present, acting as the repository of the wisdom of humanity after it made contact with a supernatural, basically non-physical higher-order reality. Although in general terms this is correct, a divide can be identified. The phrase "perennial philosophy" was coined by Agostino Steuco, a Catholic Bishop and Old Testament scholar, who, in 1540, dedicated his De Perenni Philosophia Libri X to an effort showing how many ideas of the sages & philosophers of Antiquity were in fact in harmony with the "magister fidei" of Catholicism in general and with the teachings of the Roman Church in particular. Later Leibniz would also reintroduce the phrase. It cannot be denied speculative activity has architecture & momentum. So certain recurrent regularities and logical organizations (software or information) can indeed be identified. Western philosophy is rooted in Antiquity, and -in the case of Europe- was directly influenced by the sapiential wisdom-teachings of the Ancient Egyptians (cf. The Maxims of Good Discourse or the Wisdom of Ptahhotep, 2002). 
Add to this the "Greek miracle" and the "wisdom" coming from the Middle East and the Far East via the trade routes, then a common Western vision may be discerned. The ante-rational, multi-millenarian storehouse of experience of the Ancient Egyptians (cf. Hermes the Egyptian, 2002), and their "magic" of sacred words (cf. the hieroglyphs and their power : To Become A Magician, 2001), inspired the "minors" of the syllogistic inferences loved by the Greeks, an activity spawning their concept-realism. This Greek synthesis formed a common tread in Western spiritual thought, Hellenizing Hermetism, Judaism, Christianity & Islam. Until recently, it remained even unchecked at work in materialism, instrumentalism, scientism & materialism. Western intellectuals maintain a common ontological interest. Likewise, Eastern philosophy (in India, Tibet, China, Japan, etc.) outlines a common metaphysical & ontological view. Perennial ontology, as a common view on things, can only operate if the common denominator covers what is shared by humanity East & West. No doubt this is a considerable amount of information, rooted in the perennial pre-Neolithic shamanistic  environment (involving the return to the "first time" of myth by way of mythical thought). Nevertheless, perennial ontology must also consider the "Dharma difference" between both visions. Grosso modo, the West tries "to save" a self-sufficient common ground. This is a substance possessing its properties from its own side, inherently, separately and independently from other things. The West emphasizes the objective features of this self-sufficient ground. This substantial own-nature is an essence ("eidos", "ousia", "substantia"), exists inherently, by ("causa sui") and on its own (absolute aloneness). A kataphatic theology (cf. infra) is possible. By and large, the East, foremost trying to clarify the subjective features of experience, turns inward. The experience of a "fourth state" ("turîya") of consciousness besides waking, dreaming & the dreamless sleep, dramatically shaped the speculative endeavours of Jainism, Buddhism & Vedânta. As a consequence, the impermanence of determinations & conditions leading up to subjective experiences was strongly felt and thematized. This gave rise to the important difference between "Dharmic" and "non-Dharmic" views. In the former, held by Taoism & Buddhism, all "dharmas" or existing things only possess interrelationality or togetherness, but no enduring substantial essence whatsoever. The presence of the Dharma difference divides perennial ontology in two sets of views ; on the one hand, the substantivist, own-nature view, on the other hand, the dharmic, process view. This distinction returns in contemporary philosophy as the divide between, on the one hand, materialism (physicalism, instrumentalism, scientism) and, on the other hand, the philosophy of relativity, quantum mechanics, chaos theory and process thinking. Process considers only architecture (software or information), momentum (hardware or matter) and sense (userware or consciousness). Besides the continuous ongoing togetherness of these three operators and the creative advance or novel togetherness of all aggregates of actual occasions, there is nothing. Not a single substance can be identified. Under ultimate analysis, all reifications perish. α. In the Old Kingdom (ca. 2670 - 2198 BCE), the virtual clause "n SDmt.f", i.e. "before he has (had) ..." 
or "he has (had) not yet ...", was used to denote a prior, potential nonexistent state, namely one before the actuality of that state had happened. To be nonexistent, precludes actual existence hic et nunc, but does not preclude the possibility of becoming existent (expressed by the verb "kpr", "kheper", "to become", which also means "to transform"). β. There is some thing before every thing, pre-existing before the order, the architecture and the life of creation. This is called "Nun" (cf. Liber Nun, 2005). The world manifested as a transformation or change from this nonexistent, virtual state to an existing actuality. The virtual state is therefore not actual, but informs possibility, latency and potentiality. As a potency anterior to creation, the Egyptian theologians of Memphis, Heliopolis, Hermopolis, Abydos and Thebes conceived this pre-existent state as something very special, a primordial state existing before "form", i.e. anterior to space and time, and so before the creation of sky, Earth, horizon and their "natural" dynamics. γ. The virtual, pre-existing state is not the origin of order. It cannot serve as a self-sufficient ground ! The emergence of the world, of light and life are envisaged as spontaneous (autogenesis) and without any possible determination ("causa sui"). γ.1 Precreation is the conjunction of this undifferentiated state and the sheer possibility of something pre-existing as a virtual, autogenous singularity called "Atum". γ.2 Precreation is this mythical dual-union of dark Nun and clear Atum, of and infinite, undifferentiated energy-field and a primordial atom, monad or self-powered and self-sufficient absolute singularity. Atum is the "soul" (or "Ba") of Nun ! The efficient power of pre-existence. Creation emerges from a monad, floating "very weary" in the dark, gloomy, lifeless infinity of Nun. Within the omnipresent oceanlike substance of Nun, the possibility of order, light and life subsists as a pre-existing singular object capable of self-creation "ex nihilo". Hence, although Nun is nowhere and everywhere, never and always, it is the primordial, irreversible and everlasting milieu in which the eternal potential of creation creates itself. δ.1 With this distinction, the Ancient Egyptians had divided what creates and is not created (Nun) from what creates and is (self)created (Atum). The next step, namely between what is (self)created (Atum and his Ennead) and what is created but does not create (the world) is also made. δ.2 The whole order of the world needs to "return" (by means of the magic of the "Great House" or Pharaoh, the divine king) to the primordial moment when Atum creates Atum and -within Nun- the world with its order (Maat) came forth. ε. The Greek philosophical mentality was unique, but it did not come forth "ex nihilo". It was the result of the network of forces triggering the so-called "Greek Renaissance", based on traditional Minoan & Mycenæan elements, but made explicit by a series of "new" concepts derived from Mesopotamia, Iran and, last but not least, Ancient Egypt. ε.1 According to Anaximander of Miletus (ca. 611 - 547 BCE), the cosmos developed out of the "apeiron", the boundless, infinite and indefinite (without distinguishable qualities). Aristotle would add : immortal, Divine and imperishable. ε.2 Within this "apeiron" something arose to produce the opposites of hot and cold. These at once began to struggle with each other and produced the cosmos. 
The cold (and wet) partly dried up (becoming solid Earth), partly remained (as water), and -by means of the hot- partly evaporated (becoming air and mist), its evaporating part (by expansion) splitting up the hot into fiery rings, which surround the whole cosmos. Because these rings are enveloped by mist, however, there remain only certain breathing holes that are visible to men, appearing to them as Sun, Moon, and stars. Comparative schemes were developed. ζ. The self-sufficient ground sought by the Pre-Socratics is "arché", "phusis", "kosmos", "aletheia" (truth) & "dike" (justice). For Homer and Hesiod, the sky or "Ouranos" is a brazen roof or a seat set firm. The Greeks, with a few exceptions like Heraclites  (540 – 475 BCE), could not grasp the continuity of the architecture at work in every momentum, of the style or kinetographics of movement. ζ.1 For substantivists, "solid" and "eternal" per definition imply lack of movement, absence of change or some kind of fixation in a self-sufficient, Olympian ground, an underlying reality ("hypokeimenon"). ζ.2 Seeking this out, irrespective of Platonic or Peripatetic inclinations, is the root of concept-realism and of the Western essentialist and thus eternalizing view on ontology. Serving this view has been the endeavour of Western philosophy until Kant. η. Although the cascade is never the same, it does have some unchanging patterns holding its dynamism away from sheer randomness. Likewise so for the swimmer or the ballet dancer. A stochastic ontology does not preclude eternal, unchanging form, albeit as a form of movement, as a differential equation covering all specifics of an actual dynamic flow of dynamic relationships between movements. ∫ Is the holomovement of a Buddha not the perfection of his or her unique form of movement ? θ. Discovering the sharp blade of the Sword of Wisdom brings the end of all possible reasons for substantialism. This does not leave us with nothing, for some thing is left after substance has been cleared ; this is sheer process, ongoing flows of actual occasions featuring momentum, architecture and sense. Distinguishing between pre-existence and existence, on the one hand, and, funerary ritualism, on the other hand, co-emerged. The first suggestive evidence of this is found in the Cave of Pech Merle (ca. 16.000 BCE). By it, the relative world, given to properly functioning senses and a modular mind, is distinguished from an absolute realm, one deemed to exist "before", "next to", "above" or "behind" these relative states of matter, information & consciousness. In the "natural" mode of cognitive functioning, one given to ontological illusion due to the constant (ab)use of the substantialist instantiation, pre-existence was envisaged as a deeper stratum of existence ; eternal, timeless, spaceless & undifferentiated. In this "dark ocean", a creative potential was afloat. Pre-existence is not a dead nothingness, a void, but filled with the (passive) potential to create (light, spacetime, life & love). In Hermetism, as well as in the Qabalah, pre-existence points to more than just a void. But these metaphysical systems, while abstracting the absolute as a category, fill it with the ultimate essence of God Himself. God is then the "substance of substances" (or "image of images", "power of powers" - cf. The Cannibal Hymn to Pharaoh Unis, 2002). Acting as the world's underlying self-sufficient ground, the ultimate level is substantialized. 
The same happened in the theologies of the three monotheisms, in Jainism and in Hinduism. The fact this crucial ontological distinction is brought into play is not the problem, its reification is. The world-ground cannot be a substance or the world would never have come into existence. No becoming would have been possible. The presence of this world need not to be explained. How this presence came to be is the question. Logically, what precedes the Big Bang ? § 2 The Logic of Being & the Fact of Becoming. α. Parmenides of Elea (ca. 515 - 440 BCE), inspired by Pythagoras and pupil of Xenophanes (ca. 580/577 - 485/480 BCE), was the first Greek to develop, in poetical form, his insights about truth ("aletheia"). In his school, the Eleatics, the conviction human beings can attain knowledge of reality or understanding ("nous") prevailed. But to know this truth, only two ways were open : the Way of Truth and the Way of Opinion ("doxa"). These are defined in terms of the expressions "is" and "is not". If a thing both is and is not, then this either means (a) there is a yet unknown difference due to circumstances or (b) "being" and "non-being" are different and identical at the same time. This answer is relative (circumstantial) or contradictory. If a thing is not, then it cannot be an object of a proposition. If not, non-being exists ! This answer is pointless. As the last two answers must be false, and only three answers are possible, so the first answer must, by this reductio ad absurdum, be true, namely : the object of thought "is" and equal to itself from every point of view. β. With Parmenides, pre-Socratic thought reached the formal stage of cognition. Before the Eleatics, the difference between object and subject of thought was not clearly established (cf. the object as psychomorphic). Myth and unstable pre-concepts prevailed. Moreover, the basic formal laws of logic (identity, non-contradiction & excluded third) were not yet brought forward and used as tools to back an argument. Logical elegance was absent, and a thinker like Heraclites was deemed "dark". The strong necessity implied by the laws of thought had not yet become clear. But with the Eleatics, the mediating role of the metaphor is replaced by an emphasis on the distinction between the thinking subject (and its thoughts) and the reality of what is known. γ. The idealism of the Eleatics, thinking the logical necessities of thought, nevertheless confused between a substantialist and a predicative use of the verb "to be" or the copula "is". That something "is" (or "Dasein" - x) is not identical with what something "is" (or "Sosein" - x). Properties (accidents) are deemed to exist apart from the "being" of the substances they describe. But as Kant would point out much later, the verb "to be" only instantiates the properties of an object, not a deeper sense of "being-there". For the substantivist, non-being is pointless. The empty set equals the void. Hence, only an all-comprehensive "Being" can be posited. We know Parmenides asserted further predicates of the verb "to be", namely by introducing the noun-expression "Being". The latter is ungenerated, imperishable, complete, unique, unvarying and non-physical ... He did not conceive the absence of certain properties as non-being, nor could he attribute different forms of "being" to objects. What he then calls "Being", is an all-comprehensive being-there standing as being-qua-being, as "Dasein" in all the entities of the natural world (and their "Sosein"). 
A view returning in the phenomenology of Heidegger. ε. Democritus of Abdera (ca. 460 - 380/370 BCE), a geometer known for his atomic theory, developed the first mechanistic model. His system represents, in a way more fitting than the difficult aphorisms of Heraclites, a current radically opposing Eleatic thought. Instead of only relying on the formal conditions of thought, the origin of knowledge is given with the undeniable evidence put forward by the senses. Becoming, movement and change are fundamental. Hence, non-being exists as empty space, as a void. If so, being is occupied space, a plenum. The latter is not a closed unity or continuum, a Being, but an infinite variety of indivisible particles called "atoms". These atoms are all composed of the same kind of matter and only differ from each other in terms of their quantitative properties, like extension, weight, form and order. They never change and cannot be divided. For all of eternity, they cross empty space in straight lines. Because these atoms collided by deviating ("clinamen") from their paths, the world of objects came into existence (why they moved away from their linear trajectories remains unexplained). Objects emerge by the random aggregation of atoms. Things do not have an "inner" coherence or "substance" (essence). Everything is impermanent and will eventually fall apart under the pressure of new collisions. ζ. If all things are atoms, then how can rational knowledge be more reliable than perception ? Moreover, how can atomism describe atoms without in some way transcending them ? In epistemological terms : how can the subject of knowledge be eclipsed hand in hand with a description of this "fact" ? There is a contradictio in actu exercito : although refusing the subject of knowledge any independence from the object of knowledge, the former is implied in the refusal. This important problem is shared by all materialist & mechanistic models. It can be solved by positing a deeper ontological principle (encompassing both object & subject), like the actual occasion, and attributing to it physical, informational & sentient properties. η. Concept-realism returns under many guises : objectivists versus subjectivists, realists versus nominalists, empiricists versus rationalists, physicalists versus spiritualists etc. Every time, either the subject of experience or the object of experience is eliminated, crippling one's understanding of the possibility & advancement of knowledge. The conflict is rooted in an ante-rational & substantialist prejudice seeking a firm, eternalized self-sufficient ground existing on its own, in and by itself. Such a ground, however, cannot be found ! To clear obstructions to understanding the mind and its workings, it must be done away with. Critical epistemology realizes the discordant truce as the fundamental fact of reason. With the Greeks, the mythological element was put between brackets and so clearly identified. Science deals with sensate & mental objects only. These operate in a formal way, i.e. irrespective of context. Unlike ante-rationality, Greek rationalism was able to transgress the borders of its own geomentality, and establish international, panoramic perspectives. Discovering both the necessities of logic (operating our mental objects) and the importance of facts, its concept-realism forced it to seek an absolute, substantialist (essentialist) grounding of the objective and/or subjective conditions of experience & knowledge. 
As a substantial, self-sufficient ground cannot be found, this dramatic quest will never come to an end. For objects merely appear as independent & separate. § 3 Greek & Indian Concept-Realism. β. The Peripatetics reject the separate, Platonic world of real proto-types, but not the "ta katholou", generalities conceived, as concept-realism demands, in terms of the "real", essential and self-sufficient ground of knowledge, the foundation of thought. So general, universal ideas do exist, but they are always immanent in the singular things of this world. There is no world of ideas "out there". There is no cleavage in what "is" and there is only one world, namely the actual world present here and now. The indwelling formal and final causes of things are known by abstracting what is gathered by the passive intellect, fed by the senses, witnessing material and efficient causes. The actual process of abstraction is performed by the intellectus agens, a kind of Peripatetic "Deus ex machina", reflective of the impasse of realism : Where is the subject ? γ. With the gradual decline of Buddhism in India from around the beginning of the Common Era, Classical Hinduism emerged as a revival of Vedic traditions. The Advaita Vedânta consolidated by Shankara (788 - 821 ? CE), represents the pinnacle of the revival of Hindu intellectualism during the Gupta Period (4th to 6th centuries) in the North and the Pallavas (4th to 9th centuries) in the South. This was the "golden age" of Indian civilization. Between the 2nd BCE to the 6th century CE, the six systems of Hindu philosophy slowly emerged (viz. Sâmkhya, Yoga, Nyâya, Vaishesika, Mîmâmsâ, and Vedânta). δ. Considering the Absolute in its Absoluteness, i.e. Brahman, the Vedânta is consistent with what in the monotheisms "of the book" (Judaism, Christianity & Islam) is called the "essence of God", or God as He Is for Himself Alone. That God is a Supreme Being can be known (by the heart and by the mind), but what this Being of God truly is cannot possibly be known. His essence is ineffable and remains for ever veiled. The essence of God is only for God to enjoy ! He is the One Alone, for ever separated from His Creation. God and Brahman are the One Alone. Brahman exists as a well-known entity : eternal, pure, intelligent, free by nature, all-knowing and all-powerful. In the root "brmh" resides the ideas of eternality, purity, etc. The existence of Brahman is well known from the fact of It being the Self of all ... for everyone feels that this Self exists (sic). This is the pre-creational, pre-existent Supreme Being, creating the world "ex nihilo". The pivotal difference between Vedânta and the monotheisms is the idea the innermost "soul" or "âtman" is ontologically identical with Brahman, whereas in the West no creature is able to deify to the point of total, absolute identity with God. The realized Vedantin however proclaims : "I am Brahman !" ... ε. Considering the Absolute in its Self-manifestations, Hindu concept-realism makes way for henotheism, for Brahman, the absolute substance existing from its own side, manifests as Îshvara and the latter is grasped as a multiple variety of Deities, all epiphanies of Brahman, or aspects of "mâyâ", the magical force of Brahman. Brahman is a magician and involved in creation, fashioning, sustaining & destroying it. Îshvara (Brahmâ) is the personal face of Brahman, but this face is never singular, but involved with the world in terms of an endless variety of epiphanies. 
Although Brahman is "without a second", Its personal dimension ("saguna Brahman" or Îshvara) is, as the theology of Amun has it, "one and millions". In the Vedânta, realization is the removal of the superimposition of the illusory forms on Brahman. In Classical Yoga, enlightenment or "samâdhi" is the elimination ("nirodha") of the last element of flux ("vritti") from consciousness ("citta"). In both forms, the mystic returns to the original, inherently existing station-of-no-station of the Absolute in its absoluteness. It pre-existed, exists and will continue to exist. It is absolutely removed from anything except Itself, completely independent, eternal, imperishable, permanent and therefore the sole "substance of substances". The drama of concept-realism spread over the globe. The objects of reason were ontologized, ideas became things. In the East, the notion of an absolute, inherently existing Supreme Being creating the world was also explained in categorial terms. The six schools of Indian philosophy provide ample evidence of this impact of substantial instantiation on Hindu thought. § 4 The Tao. α. The Tao (cf. The Tao, Emptiness & Process Theology, 2009) has one absolute (non-differentiated) and various relative (differentiated) stages. These stages represent the absolute, self-existent Tao in various moments of self-determination. Each of them is the absolute Tao in a secondary, derivative and limited sense. Schematically :
the absolute Tao : Great Limitless, emptiness, the Mystery of Mysteries ;
the One : potential non-being or WU ;
the Two : potential being or YU ;
dependent actuality : Tai Chi or Great Ultimate, and the Five Forces.
β. The absolute Tao is non-local, non-temporal, non-differentiated, nameless, and empty of substance or inherent existence, without permanent and unalterable distinctions. This absolute Tao is beyond conceptualization and the object of ecstatic, nondual apprehension. The absolute Tao is not turned towards phenomena, nor is it wholly self-referential. This "abstract of abstractions" cannot be conceptualized and named. It is Nameless. To reach the ultimate and absolute stage of the Way, we have to negate the opposition between being and non-being, positing "no no-non-being". This level can only be apprehended ecstatically, and this absolutely ineffable stage is for Lao-tze the "Mystery of Mysteries". Mystery ("hsüan") originally means black with a mixture of redness. The absolute, unfathomable Mystery or "black" does reveal itself, at a certain stage, as being "pregnant" with the "Ten Thousand Things" or "red" in their stage of potentiality. In the Mystery of Mysteries being and non-being are not yet differentiated. Although the absolute Tao cannot be said to be turned towards the phenomena, in this utter darkness of the Great Mystery ("black"), a faint foreboding of the appearance of phenomena lurks ("red"). The Mystery of Mysteries is also the "Gateway of Myriad Wonders". Hence, the "Ten Thousand Things" stream forth out of this Gateway ! γ. When Lao-tze introduces the Way as "the Granary of the Ten Thousand Things" (Tao-te Ching, chapter 62), he aims at a stage slightly lower than the Mystery of Mysteries, the absolute Tao. At this stage, the Tao begins to manifest its creativity. The image of a "granary" conveys the sense all things are contained therein, not actually but in a state of potentiality. He refers to this aspect of the absolute Tao as "the eternal non-being", or "wu". At this stage, the absolute Tao is potentially already Heaven and Earth, i.e. being. 
Hence, the non-being referred to is not a passive Nothing, pure negative absence of being or existence (naught or zero), but a "something" in the sense of an "act", the act of existence itself or Actus Purus. It exists as the very act of existing and making things exist. This is called "the One". This Actus Purus does not exist as a substance. In order not to reify it by way of concepts, the One can only be ecstatically intuited by "sitting in oblivion" (Chuang-tze). The One is darkness not because it is deprived of light, but because it is too full of light, too luminous, i.e. Light Itself. δ. When it enters its first stage of "pure" self-manifestation or mere self-determination, Lao-tze admits the One or active non-being assumes a positive "name". This name is "existence" or "being" ("yu"). The latter is also called "Heaven and Earth" ("t'ien ti"). The Way at this stage is not yet the actual order of Heaven and Earth, but only all possible things as "pure" being, i.e. again in potentia. The One begets the Two : Heaven ("yang") and Earth ("yin"), the cosmic duality. They are the self-evolvement of the absolute Tao, the Way itself. The One is the initial virtual point of self-determination of the Way ; the Two bring about (as a mother) the possibility or probability of actuality and carry this over into actual reality. In this way, the One is the ontological ground of all things, acting as their ontological energy, while the Two develop this activity ("Ch'i Kung") into a particular ontological structure, Yin and Yang and the Three, i.e. the blending & interaction between these ("Tai Ch'i"). Hence Heaven is limpid and clear, and Earth is solid and settled ... In Chinese philosophy, especially in Taoism, a process-mentality was and is ever-present. Nothingness is posited, but again, within it, a very subtle creative potential is identified (cf. black with a mixture of red). A balance between natural flow & spontaneity (pragmatic naturalness) and emptiness (absence of inherent existence) is at hand. Where India & Tibet favoured the quick release from this world (represented by the dorsal "yang" channel), China focused on balancing the energy by letting it run in an orbit (making the upward movement of the "yang" channel flow into the ventral "yin" channel). This reinforces the life-force ("Ch'i") at the abdomen and aims at the Great Harmony between the powers of Heaven and Earth (at the heart). The wisdom realizing emptiness, able to understand these "mechanisms of heaven" as dependent arisings, operates the complete spectrum of human possibilities, not just one. Here, the absolute truth is not the single focus. Hence, the conventional and ultimate truths cannot be turned into a Single Truth. The danger of moving too far upward (toward Heaven) without being firmly rooted (in Earth) does not exist. § 5 The Dharma Difference. α. The notion that the world is composed of existing things or phenomena, as it were carrying or holding their properties in accord with the cosmic law, i.e. of a certain characterizing nature (cf. "dharmata"), Buddhism shares with Hinduism. It differs though in terms of Buddha's Second Turning of the Wheel of the Buddhadharma, teaching the absolute truth ("dharma") about all phenomena, namely their lack of inherent existence ("shûnya"), the fact that they have absolutely no self-nature or essential own-nature ("nirsvabhâva"). β. 
Because a perfect understanding of Buddha's crucial wisdom teaching on the fundamental nature of all possible phenomena, one encompassing both the reality of sensuous objects and the subjective ideality of mental activities, is a difficult simplicity, it has led to countless attempts to save inherent existence in some way or other. Only an absolute negation prevails (cf. the apophatic approach to mystical experience). β.1 Logically (and a fortiori philosophically), the strict Prâsangika-Mâdhyamaka approach found in the work of Nâgârjuna, Chandrakîrti, Shântideva, Atisha and Tsongkhapa is correct & definitive (cf. Emptiness Panacea, 2008 ; On Ultimate Logic, 2009). Hence, the non-affirmative negation of inherent existence eliminates all possible reified concepts. β.2 Experientially however (as Yoga & Tantra put into evidence), a direct non-conceptual experience, gnosis or prehension of the absolute nature of all things is possible. This involves a cognitive act of an absolute bodhi-mind apprehending an absolute object or totality "as it is". Nondual & non-conceptual, this experience is not without knowledge-content. The common trait in the poetical evocations on the basis of such graded meditative experiences involves a world of pure luminosity without shadows & edges, undefiled and unborn, pure and complete, much like "nirvâna", identified as permanent, constant, eternal and not subject to change. β.3 While philosophy remains immanent, yogis & tantrics dance on the rhythms of the poetical tale of the transcendent. These scientists & artists of the inner planes do not prove anything, they merely point out. What a community this would be if those who prove the end of proofs and those who experience emptiness were the same ! γ. In the Flower Garland tradition, in particular Fazang in the seventh century, Buddha's teachings on wisdom are lifted out of the Indo-Tibetan emphasis on the other-worldly, on absolute reality. Absence of inherent existence was laid to rest in the fertile Chinese soil of the magic of the natural world, the quest for longevity, social order and the actual operation of how things exist conventionally, namely interdependent & interpenetrative. γ.1 Because gold lacks inherent existence, a craftsman was able to make an object of it - say, Empress Wu's Golden Lion guarding her palace hall. This gold is "li", principle or noumenon, the gold qua gold. The shape it takes in this case (the lion) is "shih", or phenomenon. Suppose gold took a bar-shape ; then it would actually cease to be gold in lion-shape. Gold is therefore equivalent to "gold in x-shape" ! Fazang's gold is not above or behind the shape it takes. The Golden Lion is gold, there is no gold behind the lion, nor is the lion an emanation of gold. Gold only exists as having some form or another, in this case Empress Wu's Golden Lion. When the lion shape comes into existence, it is in fact the gold coming into existence ! The shape does not add anything to the gold. γ.2 The phenomenon is the noumenon in its phenomenal form. The ultimate is not elsewhere but here and now, even in the smallest, meanest thing. Ultimate truth exists conventionally. In this brilliant analysis, Fazang makes use of the necessary link between lack of inherent existence and dynamic (artistic) flow. He does so to integrate strict nominalism within the Chinese vision of enlightenment as living in harmony with the Tao, with the natural flow of all things ("Tai Ch'i"), and this based on the work of "ch'i" ("Ch'i Kung").
Indeed, the word "li" also carries a positive connotation, namely the "true thusness of mind", inherently pure, complete & luminous. The Dharma difference defines a crucial divide. On the one side, we find metaphysical systems seeking out substance and an unchanging, self-sufficient ground existing from its own side with inhering properties. They are "self-advocates" ("âtmavâdin"). Theirs is the substantivist approach. Its futility is unmasked by asking : "Show a substance as defined ?". On the other side, own-form or self-nature is totally relinquished and only the architectures of process remain. Its extreme accuracy is suggested by the precision of Schrödinger's wave-equation. This most fundamental of distinctions defines the ontological principle. This is not inherently existing substance, but interdependent process. The architecture of process implying change is fundamental but not random. If process were merely stochastic, then order would be impossible. Precisely because of the need to explain order did the Greeks and the Ancient Egyptians before them posit a self-sufficient ground. But seeking such a solid foundation has sidetracked Western philosophy since Heraclitus, whose message was not understood. No two moments are the same, the "same" river cannot be entered twice. The way up and the way down are, by enantiodromia, the same way. While a cascade is never the same, it can be distinguished from another because of certain constant elements in the way its water moves ... Process thinking identifies the stages of the differential changes as well as their form or style. Random movement (white noise) has no style and so can carry no information. But as soon as movement is coordinated, a structure can be discerned and insofar as this has constancy it can be described and repeated. There is no need for a self-sufficient ground to "stabilize" form, for the stability of change is not a kind of substantial channel or invisible matrix in which flow happens, but merely the particularities or forms of definiteness (or predictability). These are the kinetosyntax of change, whereas the purpose of change is its practice (or kinetopragmatics) and its sense or meaning is the sentient activity suggested by it (or kinetosemantics). C. Against Substance & Foundation. The core insight underlying the philosophy of process is absence of inherent existence. Only this radical negation of substance or essence makes it possible to consistently think movement and transformation, in short change and impermanence. This cannot be thoroughly realized as long as some inherent object or subject prevails. If substance goes, so does a self-sufficient ground. The difference between ground-level, object-level and meta-level can be maintained, but the ground-level is not a permanent, inherently existing seat made firm ! Instead of trying to find an underlying reality, process thought focuses on the momentum, architecture and sense of the flow of actual occasions. As the links of interdependence expand throughout the entire universe and this all the time, in the totality of interdependence or in the world as it is, phenomena are mutually interpenetrating. Taking the world of actual occasions as the only possible world, the absolute nature of phenomena is not sought behind or outside it. The transcendent is a property of the ongoing flow of actualities in just the same way as the immanent is. § 1 The Definition of Substance. α.
Substance ("substantia" or "standing under") is the permanent, unchanging, eternal underlying core or essence of every possible thing, a self subsisting own-nature or self-nature ("svabhâva") existing from its own side, never an attribute of or in relation with any other thing. Hence, a substance solely exists by the necessity of its own nature and intrinsic identity ("svalaksana"). Its action is determined by itself alone. Traditionally, it is the principal category of "what it is" (cf. "ousia"). For Spinoza, there was only one substance, namely Nature or God. This substance had infinite attributes, of which each expresses for itself an eternal and infinite essentiality (Ethics, Part I, definition VI). β. If a substance would be determined by something external to itself, then it would be not inevitable, compelled & necessary, but rather constrained. A substance is always Pharaonic. Without the presence of an absolutely free & omnipotent Caesar, the bond uniting things seems to be lost. Without substance, the properties of objects seem not be carried or inhere. But things are just be a dynamical flow with a certain kind of movement (momentum), shape (architecture) and intent (sense). And this the substantivists wrongly deem not to be enough for science, philosophy, ethics, economy & politics ... Substance is always linked with the idea of some thing existing on its own, by itself alone. Although objects can be isolated in a relative sense, they are never so in an absolute way. This means there is no self-identical core remaining untouched by change. But absence of substance is not absence of order. Order is possible because processes are not random and they are not so because movement can have coordination, structure, style etc. These kinetographic features are overlooked and identified as the vestiges of essential, non-accidental properties or essences. This is were the substantialist error creeps in. Logically, this difference is given with the distinction between the actualizing and the existentializing quantor. : "there exists" : affirming object x momentarily exists ; The actualizing quantor confirms x, or  x, the mere existence of x. A set of predicates attributed to object x is present to the senses or the mind. This presence is spatio-temporarily defined, and hence impermanent, i.e. featuring arising, abiding and ceasing. Merely existing object x arises when its presence is identified or registered by a subject or subjects of experience. It abides as long as this actuality, in all cases limited by space & time, continues. It ceases when it can not longer be apprehended or pointed at. : "there is" : affirming persistent existence of x ; The existentializing quantor confirms x inherently exists. A set of predicates attributed to object x is present to the senses and/or the mind, but these predicates are merely accidents of the substantial self-identical core of x, a universal of sorts x, hence x x. With x, the substantial or essential nature of x (or xs) is confirmed. If this xs = x changes, then x is not longer x, in other words, x can no longer be identified as such. § 2 The Münchausen Trilemma. α. The problems of foundational thinking are summarized by Albert's Münchhausen Trilemma. Its logic proves how every possible kind of foundational strategy is necessarily flawed. The trilemma was named after the Baron von Münchhausen, who tried to get himself out of a swamp by pulling his own hair ! 
An apt metaphor to indicate the futility of trying to find a permanent underlying base, i.e. satisfying the conditions of the postulate of foundation. The latter states that valid knowledge must in all cases be absolutely justified, in other words backed by a self-sufficient ground existing from its own side, inherently. β. Every time statement A accommodates the postulate of foundation by way of an absolute justification, one of three equally unacceptable situations occurs. Such an absolute justification of the propositional form P of A implies a deductive chain C of correct arguments C', C", C''' ... with P as necessary final inference. How extended must C be in order to justify P in this way ? Three "solutions" prevail :
(a) a regressus ad infinitum : There is no end to the justification, and so no foundation is found (C', C", C''' ... does not lead to P). The whole process of finding a last ground (needed to back justification) is undermined. A point at infinity is however not a problem per se. But it becomes one each time a final ground is needed. Then a regression disproves the logical attempt to articulate a foundation.
(b) a petitio principii : The end P is implied by the beginning, for P is part of the deductive chain C. Circularity is a valid deduction but no justification of P, hence no absolute foundation is found.
(c) an abrogation ad hoc : Justification is ended ad hoc, the postulate of justification is actually abrogated, and the unjustified ground (C' or C" or C''' ...) is emotionally accepted as certain because, seeming certain, it is deemed not to need more justification. This is of course unproven.
γ. The Münchhausen-trilemma must be avoided by ceasing to seek an inherently existing absolute, self-sufficient ground for the possibility of knowledge and/or the cognitive act. This happens when one accepts that critical science & metaphysics are terministic, i.e. fallibilistic and not eternalizing (nor nihilistic). But although the categorial system cannot be absolute, some of its general features (as given by normative philosophy) are necessary in a normative way (for we use them each time we think). Backing arguments to establish a certain conclusion is not the same as trying to find an absolute warrant. Logical inference can be absolute, but not absolutely absolute. Once the logical system (basic axioms, operators, truth-tables and rules of inference) has been established and accepted among all involved sign-interpreters, an absolute conclusion on a relative basis can in certain cases indeed be drawn, but not an absolute conclusion on an absolute base. Change the basic axioms (like identity, non-contradiction or excluded third) and what is certain in logical system A might not be in system B, etc. This is often forgotten. Classical formal logic is not self-evident. Just as in Euclidean geometry, changing a single axiom may introduce important variations. What at first seems impossible (like intersecting parallel lines), in the end exists both mathematically (as a mathematical object) and physically (as curvatures of spacetime). § 3 Avoiding Dogmatism & Scepticism. α. To avoid dogmatism is not to eternalize a position. No ad hoc abrogation is allowed. If a circular reasoning or a regressus ensues, then one must accept that an absolute justification cannot be given and the aim of dogmatism (namely finding such an absolute ground existing in and by itself) is futile and so trivial. β. To avoid scepticism is not to eternalize a contra-position.
When a hidden agenda is present, scepticism is but a form of dogmatism in disguise. To criticize is to draw clear distinctions. To be sceptical is to overuse negation. At best, it is a dialectical move needed to outwit a dogmatic opponent, but it cannot deliver a constructive tale about existence, nor give us any important answers. It is a wayfaring strategy, not a stable station. γ. The critic walks the Middle Way and has no affirmation or negation to defend a priori. Here only distinctions matter. They allow categories to emerge and organizations to unfold. These architectures or forms of information are always changing (have material momentum) and display intelligent design or conscious activity. The extremes of eternalism (accepting the substantial nature of objects) and nihilism (rejecting the existence of anything regular) are examples of respectively a dogmatic and a sceptic position. The eternalist stops the justification ad hoc, and posits an absolute justification on the basis of relative steps. The latter only lead to a relative justification. The leap made is logically invalid. Many strong relative reasons do not constitute an absolute base. Even a majority can err. So if an absolute justification is needed, then a self-sufficient ground must be found. The eternalist has negated too little. The nihilist accepts there is nothing substantial anywhere. But this does not lead to the kinetography of process and so a fortiori lacks the perfection of process. This sceptic has lost grip on all things because this conceptual apprehension of emptiness as lack of inherent existence, although correctly understood insofar as the negation of substantial instantiation is concerned, does not lead to the view of dependent-arising. Process as a dependent-arising is more than merely a stochastic display with no inherent existence, it is a spectacular magical show with, besides momentum (matter), also architecture (information) & sense (consciousness, sentience). The nihilist has negated too much. Distinguish between, on the one hand, the yogi of wisdom ("jñânayogin") and, on the other hand, the sophist (sceptic), merely criticizing & arguing without speaking up for anything, and the dogmatist, who argues without letting his own view depend on the outcome of the debate. Dwelling in extremes is to be avoided. Things are not inherently something (x), nor are they nothing (¬ x). They are a something manifesting properties (x) in the isthmus between inherent being and void nonbeing. Existence covers the middle ground. D. Conventional Appearance. Ontology addresses the two epistemic isolates in existence : the conventional properties of any object x and its ultimate characteristics. These are called "epistemic isolates" because to identify them a special & crucial differentiating cognitive act is necessary, namely one clearly identifying what is merely given (to the senses and the mind), the appearance of x, and one sharply establishing (realizing) the process-nature of x, in other words, x's lack of inherent existence. These two "natures", the conventional and the ultimate, are merely properties of x. The ultimate nature is not deemed "another" reality standing beyond, next to or within x. Like in the case of the Golden Lion, the gold and its shape are simultaneous. The first isolate is the conventional reality or conventional truth about x, the second its ultimate reality or absolute truth.
Because the ultimate exists conventionally, there being no "ultimate" ontological plane or level, let us first analyse x's conventionality. We already listed the objects of ontology, answering the question What is there ? We found absolutely nonexistent objects, fictional objects, relatively existent objects and absolutely existent objects (cf. supra). To draw the line between what is there and what is truly there will shed light on conventionality and its illusionary appearances. To add "truly" merely points to the possibility something might appear to be there while it is not. Objects might appear as independent (inherently existing) & separate (isolated from other objects), while in truth they are not. Like optical illusions, this epistemological illusion (to be identified as an ontological illusion) can be grasped by conceptual reason but remains as long as this mode of cognition endures. Only nondual cognition finally removes it. Then full-emptiness is (directly) prehended, namely "finding" the absence of inherent existence in all objects simultaneously with their universal interdependence and interpenetration, the union of bliss & emptiness. These considerations bring about the issue of universal illusion and the way this blends in with the valid conventional knowledge of science & immanent metaphysics. This is deemed valid, for producing functional knowledge, but mistaken, for appearing as substantial while this is found not to be the case. § 1 What is Truly There ? α. This question seeks the truth-value of objects, whatever their ontological status as absolutely nonexistent objects, fictional objects, relatively existent objects and absolutely existent objects. This is measured in terms of validity and the presence of a mistake. α.1 An object is valid when it can be identified, apprehended or grasped by a subject of cognition acting as object-possessor (note "prehension" is a special form of apprehension in that the subject cognizes in the nondual mode of cognition). An object is mistaken when it appears differently from how it truly is, i.e. when it is incorrectly apprehended or misleading. α.2 Validity refers to the presence of objects. Hence, valid or invalid objects may be mistaken or not. Indeed, valid objects (such as those of science) may nevertheless appear differently from how they truly are. In fact, all fictional and conventional objects veil their true, absolute, fundamental nature or suchness ("tathata") by the illusion of own-form or self-nature ("svabhâva"). β. Absolutely nonexistent objects are invalid and mistaken. They are invalid because nothing can be identified to correspond to them, not even logically. Hence, as logic precedes function, they have no functionality whatsoever. Although we understand the words "square" and "circle", the combination, i.e. a square circle, is nonsensical. They are mistaken because they appear to be something they cannot possibly be. Indeed, although it seems the phrase "a triangle with four angles" conveys some information, namely the presence of an object with three angles which has four angles, it is impossible to apprehend or imagine such an object at all. The phrase is therefore merely a string of black pixels on a white surface. γ. Fictional, relatively nonexistent objects are valid and mistaken. They are valid because, insofar as they are public, one can point to them. Because they move us, they are functional.
But insofar as they are private, the act of apprehension is private too and so only valid for a single subject of experience (reality-for-me or the first person perspective). Fictional objects are mistaken because they represent something which is not as it truly is and this in a definite degree, i.e. by conscious deception. δ. Conventional objects may be valid and mistaken. They are valid because they can be identified as logical and functional realities/idealities. Insofar as this validity is concerned, they are scientific objects. But they are mistaken not because of any conscious deception, but because they appear to possess a nature of their own ("svabhâva", "ousia", "eidos", "hypokeimenon", "substantia"), while they are truly other-powered, i.e. depending on conditions & determinations outside themselves. This is what ultimate analysis seeks to prove (cf. infra). Once this is established, the valid appearance of conventional objects is not changed, but only the mental obscurations or false ideation causing them to be experienced as self-powered has been removed. The elimination of this ontological illusion or substantial instantiation voids their ability to fool us and opens the way to actually see their dependence, universal interconnectedness with other phenomena & exclusively process-based nature. ε. Conventional objects may be invalid and mistaken. Invalid because they cannot be logically and functionally identified, i.e. in no way apprehended by way of logic, argumentation and experimentation. The caloric fluid theory of old, the four humours or the epicycles at work in the Ptolemaic & Copernican models are good examples. These objects of outdated scientific theories have been disproved and so banished from the arena of paradigmatic scientific objects. These invalid conventional objects are also mistaken, for regardless of the fact they no longer function, they -just as valid conventional objects- posit characteristics existing from their own side. ζ. Finally, among existing objects there are those which are beyond validation and not mistaken. They are beyond validation because they refer to something every subject of experience can potentially identify in every sensate or mental object but never name, and unmistaken because they appear as they are, i.e. do not conceal their truth. These ultimate objects are nothing more than conventional objects apprehended without any sense of self-power. They simultaneously reveal (a) absence or lack of independent existence ("tathata") hand in hand with (b) dependent-arising ("pratîtya-samutpâda") or universal interconnectedness (interdependence & interpenetration). The objects prehended by the wisdom-mind of a Buddha are all of this category. η. Nonexistent & fictional objects are not the first aim of ultimate analysis. Nonexistent objects are not because their ontological and epistemic status is irrelevant to the question at hand. Fictional objects are not because their deceptive nature is apparent and so unconcealed. Conventional objects are the prime target of ultimate analysis, for the fact their true nature is veiled is not apparent. Quite on the contrary, to the mind of Homo normalis, they are self-evidently existing extra-mentally and substantially, i.e. from their own side. Their accidents (qualities, quantities, modalities & relations) are deemed to adhere to their own essences, and this inherent existence is self-powered, i.e. isolated from conditions & determinations outside themselves.
If these objects really exist the way they appear to the deluded mind, then it should be possible to separate the quantities, qualities, modalities and relations entertained by these objects from their supposed substantial core or essence ("svabhâva"). What remains after we remove all the accidents from an object ? Objects can be logically identified and do have functional effects. These can be found. But ultimate logic seeks to prove no object exists in accordance with our common ideas about them, i.e. such own-form cannot be found at all. Remove its accidents, and the object as a whole vanishes ! Remove the (logical & functional) properties, and the instantiation of the concept given by the copula "is" is out. Nothing remains. θ. Both natural and artificial conventional objects are deemed to possess characteristics independent of their observers. Indeed, we suppose these objects exist even if they are left unobserved. And of course, on the meso-level of reality, they do exist in a logical and functional way. But not substantially, i.e. without being subject to change. Indeed, the pivotal feature ultimate analysis seeks to disprove is the substantial, inherent permanency of conventional objects. So in terms of ultimate analysis, the fact these objects are found to be independent of conscious observers is not problematic per se, but the notion this independence is somehow an inherent feature of these objects is. Hence, inherent existence is the proper object of negation, i.e. the core feature of objects ultimate analysis disproves. The duality between objects & subjects is not a target, for suchness is directly apprehended by a nondual, non-conceptual, awakened mind. What is truly there ? After having identified what exists, one divides the lot in valid & invalid, unmistaken & mistaken, ultimate truth & conventional truth :
• conventional truth : valid & mistaken, or invalid & mistaken ;
• ultimate or absolute truth : beyond validation & unmistaken.
A valid object works efficiently. A consensus about the theory abstracting the outcome of experiments with the object is present. Facts concerning it are repeatedly confirmed. This tenacity of subjectivity & objectivity makes object x appear as independent & separated from other object y. But is this the case ? The world-ground cannot be found as a fixed, solid, inherently existing object. If so, valid objects are mistaken because they appear differently from how they truly are. An invalid object does not work efficiently. It either lacks the logical conditions for efficiency or does not actually operate efficiently. Acquiring the conditions for efficiency is giving logic to the architecture of process. This is applying form, rule, code, algorithms, notion, idea, concept, theory, paradigm, etc. When these conditions are fulfilled -in order for the process to operate efficiently- semantic organicism must be present. Objects with style may lack overall order, i.e. a given organization of the meaningful features of their process. While (unconsciously) instantiating it, conventional understanding can be neutral as to accepting substantiality. The conventional mind may ignore the idea of substance and continue to function. But although this grasping at a substantial "self" is indeed acquired, it is also innate. The latter reflects the ongoing -unconscious- activity of the ante-rational modes of cognition, the mythical, pre-rational & proto-rational mentalities of the mind.
In these modes of cognition, substantial instantiation was the "natural" way to stabilize the pre-concept & the concrete concept. In the course of the development of the human mind, this reifying tendency was so basic & strong, it even leaped into reason, deceiving formal cognition with concept-realism and its substantialist ontological prejudice & semantic adualism. For applied epistemology (the highest abstract mode of studying & reflecting upon the production of knowledge), methodological realism (at the side of the object of production based on experimentation) and methodological idealism (at the side of the intersubjective community of involved sign-interpreters communicating with each other) are maxims without which no valid conventional knowledge can be produced. This proves that while conventional knowledge may well, theoretically, in a transcendental inquiry, "purify" itself and attain critical understanding, practically it cannot purge the pragmatic substance-obsessions of researchers & thinkers. In the critical mode of cognition, truth, beauty & goodness are no longer ontologized. Although the object continues to appear differently from how it truly is, it can no longer deceive us and so the so-called "safe house" of self-powered substance cannot be rebuilt. Only absolutely true objects are unmistaken. They appear as they truly are. There is no deception anywhere. They are the truth of their existence. Ultimate truth and absolute reality/ideality are identical ("dharmakâya"). They are therefore unmistaken. These absolute objects are beyond validation because absolute objects perfectly work but this activity is nameless. The architecture of their process is a holomovement. To alter the world in terms of unity & harmony, they manifest propensity-fields of form ("rûpakâya"). § 2 Concepts, Determinations & Conditions. α. The "object" of the "natural standpoint" of conventional knowledge dictates (a) a reality "out there", existing independently (extra-mentally) and with a solidity from its own side and (b) an ideality "in here", likewise substantially established. The physical body is the first of these natural objects. Although part of the "subject" it nevertheless behaves in the same "objective" way as do outer objects. Moreover, objects "out there" seem even more to escape conscious manipulation, and so manifest tenacity, permanence, solidity and an unchanging character. These sensate & mental objects appearing in the "natural" world are problematic. β. Concept-realism is a way to consolidate the substantialist view on conventional knowledge. Concepts represent reality and/or ideality in a one-to-one relationship. However, general concepts or universals cannot be established on the basis of induction. The concept is a generalization on the basis of a finite number of elements used in the induction. Hence an unjustified logical jump from the singular to the general occurs. But in conventional knowledge, especially in valid non-scientific contexts, this happens all the time. Falsificationism has avoided this logical problem, but remains bound to a realism allowing "outer" objects to impact our senses. γ. Determinations are lawful connections between actual occasions. Conditions are assumptions on which rests the validity or effect of something else. All conventional objects depend on determinations & conditions. They are solely powered by these. Actual occasions & events are linked if the conditions defining the category of determination are fulfilled.
For example, in the case of causation, it is necessary, in order for an effect to occur, to have an efficient cause and a physical substrate (to propagate it). In general determinism, these determinations are not absolutely certain, but relatively probable. Science is terministic, not deterministic. If individual action and (as an extension) civilization is considered, events are also connected by way of conscious intention, escaping the conditions of the categories of determination. Indeed, without "freedom", or the possibility to posit nondetermined events, ethics is reduced to physics and free will impossible. How is responsible action possible without the actual exercise of free will, i.e. the ability to accept or reject a course of action, thereby creating an "uncaused" cause or influencing agent, changing all co-functional interdependent determinations or interactions ? Even if it remains open whether the will is free or not, morally, we must act as if it is. ε. Scientists are cognitive actors producing valid but mistaken conventional object-knowledge by way of corroborated empirico-formal propositions and theories. This is information triggering correspondence (with facts) & consensus (between all involved sign-interpreters). Everyday observation also involves experimentation & (inter) subjective naming, but, in the language-game of true knowing, a more solid, inert and tenacious objectification is at hand. Here, a series of more lasting connections between directly observable events is made, and categories of determination are put forward to organize these connections. The following irreducible types of lawfulness ensue :
• causality : effect by efficient, external cause (example : a ball hitting another ball or Cartesian physics) ;
• interaction : reciprocal causation or functional interdependence (example : the force of gravity in Newtonian physics) ;
• statistical determination : end result by the joint activity of independent objects (example : the long-run frequency of throwing two aces in succession is 1/36 - cf. the numerical sketch below -, the position or momentum of a particle, enduring correlation between two variables) ;
• teleological determination : of means by the ends (example : standardization, final determination of actual occasions) ;
• holistic determination : of parts by the whole (example : needs of an organ determined by the organism, impact of the electro-magnetic field on the objects within it).
That conventional objects have no analytically findable self-nature or substantial own-form existing from their own side, does not mean they are nonexistent, possessing nothing. They do not however possess themselves, but are the result of other-powers acting upon them, enacting the laws of togetherness, thrusting creative advance, performing the power & beauty of the symphony of interdependent & interpenetrative arising, making these objects arise, abide, cease & reemerge. They do not exist as substances, nor do they exist as nothingness, as stochastic voids. Things have no shred of substantial existence from their own side, but are part of interdependences. These involve (a) actual occasions depending upon each other in a determinate way, neither existing without the other, (b) subjects of experience & objects of experience conditioning one another. These types of interdependence (determinations & conditions) make it clear conventional objects are functional and so highly unlikely events, in no way the outcome of randomness & coincidence.
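The arithmetic behind the statistical example above can be made explicit : each throw yields an ace with probability 1/6, and two independent throws yield two aces with probability (1/6) × (1/6) = 1/36. The following minimal sketch in Python (purely illustrative, not part of the original argument ; the function name and the number of trials are arbitrary choices) checks that a long run of trials converges on this value :

    import random

    def two_aces_frequency(trials=1_000_000, seed=0):
        # Estimate the long-run frequency of throwing two aces (two 1s) in succession.
        rng = random.Random(seed)
        hits = sum(1 for _ in range(trials)
                   if rng.randint(1, 6) == 1 and rng.randint(1, 6) == 1)
        return hits / trials

    print(two_aces_frequency())   # prints roughly 0.0278, i.e. about 1/36

Only the 1/36 figure is taken from the text ; the simulation merely illustrates how a "statistical determination" emerges from the joint activity of independent events.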
Conventional reality is in itself a well-formed & functional totality, evidencing unity & harmony. Because it is the actual mayavic scene of illusion, suffering is pervasive. Not because of its nature as it is, but because of the obscurations & afflictions caused by ignorance of the true nature of phenomena, namely dwelling in the extremes of affirmation (acceptance) & negation (denial) and their conceptual elaborations : exaggerated desire or craving (pathogenic obsession) & hatred (pathogenic rejection). § 3 Valid but Mistaken Appearance. α. Valid conventional knowledge holds a justified view on conventional reality (a sense of the objective "outer" world) and on conventional ideality (a sense of subjective, "inner" selfhood). Organizing this valid scientific knowledge in terms of a paradigm covering the totality of conventional sensate & mental objects is the task of science aided by immanent metaphysics. This implies all possible logical & functional instantiations, i.e. empirico-formal propositions of fact (science) and arguable speculations about the totality of the world (immanent metaphysics). β. Validity implies logical well-formedness and the regulations of correspondence & consensus. This means a problem can be solved and/or a certain operation can be executed. In theoretical format, logic & functionality are transcendental (not transcendent !) and so represent the ideal of the norm. This ideal is not substantially given, but a set of rules (or information). β.1 Theoretically, the consistency of epistemology depends on the necessity of accepting that facts, besides intra-mental, are also extra-mental. When this normative set of principles & norms is actually applied (as in applied epistemology), logic & functionality incorporate the "as if" mentality of methodological realism & methodological idealism. β.2 Epistemology & science make use of substantial instantiation, causing the whole domain of valid conventional knowledge, insofar as the fundamental truth or nature of phenomena is concerned, to be mistaken, for truth-concealing. γ. Sensate & mental objects possessed by the conventional knower are impermanent and so constantly changing. This change is not random. It has order (information), momentum (matter) & sense (consciousness). γ.1 But to the conventional mind, operating in the first six modes of cognition, these objects in all cases appear as existing independently of other objects and isolated (separated) from them. γ.2 In the physical domain, there is the Einstein-limit of locality imposed by relativity : material signals cannot travel at speeds higher than that of the photon, a massless particle travelling at 300,000 km/s which is its own antiparticle. A single photon is deemed to exist independent of the mind and separate from other photons. This limit defines the parameters of what is considered "physical". γ.3 In the domain of information, the binary code organizes all possible software. The "0" and "1" of this system are deemed to exist as independent abstract objects in "mathematical space". Their various manipulations & algorithms (poetically named "architectures") are independent of the electro-magnetic impulses with which they are joined and which they organize. γ.4 Sentient beings cognize by way of object & subject. Both can be reified and then appear as independent & separate entities. The concealment of the true nature of things, namely their impermanence and non-substantiality, makes valid & invalid conventional knowledge mistaken.
By making sensate & mental objects appear as existing from their own side, a difference is introduced between how things ultimately are and how they appear to a mistaken mind. This means the difference causing ignorance is epistemic and not ontological. δ.1 Insofar as ultimate truth goes, there is only a single world-system as it is in its two aspects of actual world and virtual world-ground. Because all phenomena are at all times mutually interpenetrating & interdependent, they are fundamentally identical (i.e. lacking self-power). Can we say the total world (past, present & future) rises simultaneously ? δ.2 The mistaken appearance of conventional objects due to the mentioned false ideation causes the world to appear differently from how it actually is. This false appearance is the root-cause of all possible mental obscurations. Clear this, and the complete, pure and luminous totality emerging from infinity dawns, the union of compassion & wisdom, of a view efficiently dealing with conventionalities while realizing their process-based nature. Again : the Sun seems to rise in the East and set in the West. But this diurnal movement is actually the Earth rotating on its axis. Likewise, the Sun seems to rotate around the Earth. Actually, the Sun's apparent annual path (the ecliptic) reflects the orbit of the Earth around the Sun. Despite the Lunar disk being rather constant, a Harvest Moon seems huge. Understanding the astronomy & the physics behind these illusionary phenomena does not take away the illusion. Likewise, conceptually grasping the limitations of the conceptual mind does not make the illusion caused by substantial instantiation vanish. But we are no longer fooled and merely grasp at the impermanence of it all. So succinctly put, the conventional mind operating conventional knowledge about conventional objects is valid or invalid, but in all cases mistaken. Not because things do not work. Valid objects work. Not because things are merely nonexistent, for some work. Merely because conventional reality does not appear as it is. That is all there is to it. Projecting substance, it is merely process. Positing solidity, it is merely space. Presuming self-powered, self-settled self-nature, only otherness is truly found. § 4 Appearance, Illusion & the Universal Illusion. α. Universal illusion cannot be identified, for positing "mâyâ" turns it into something particular, contradicting its universality. Neither can we exclude universal illusion by assuming "existence" equals "being known in thought". We assume the mental coincides with (represents) the extra-mental and move from this assumption to the affirmation this must be the case. This is illogical. Transcendence can only be approached with a non-affirmative negation. Posit nothing. Classical metaphysics is prone to this category mistake (assumptions are not certainties). Metaphysical realism (mind corresponds with reality) and metaphysical idealism (mind makes reality) are extremes to avoid. β. The argument of illusion has objective & subjective terms : • objective : logical & neurological arguments prevail. Because sensate & mental objects appear as independent & isolated and they are not, all conventional objects are illusions, i.e. things appearing differently from how they truly are, as it were concealing their true process-nature underneath the mask of substantiality. This by force of the logic of the definition of illusion.
No subject of experience ever faces the totality of changes caused, so we must assume, by particles, fields & forces acting as a constant stream of stimuli on the surface of the receptor organs. Only after a series of complex alterations (transduction, relays & integration) is the neocortex -via the thalamus- informed (after projection on the primary sensory area) about the perceived states, events, occurrences & objects. But this thalamic projection into the neocortex, in accord with the language of the cerebrum, is not yet sensation. This it only becomes after the afferent pathways enter the verbal association area, immediately connecting them with the attention association area (while the primary sensory area has few connections with the prefrontal lobes !). Our sensations, because of their irreducible and pertinent interpretative, constructive, conceptual, personal nature, could be a kind of fata morgana or mirage, composed of distorted sensory items. Ambiguity is the least one can say of the direct observation of sensate objects. Descartes was right, our senses are unreliable informants about the world at large ; they process a very narrow band of available possibilities. • subjective : the most objectifying operator of consciousness, namely cognition or mind, works in various modes. In the ante-rational mode, sensate objects appear in contexts and have no meaning outside these. In conceptual thought, which is formal, critical & creative, the theoretical connotations grasped by the subject of experience make it impossible to witness sensate objects devoid of interpretation. Even if so-called "subjective factors" are reduced or eliminated, it cannot be conceptually known whether a collective mirage is at hand or not. γ. Universal illusion ("mâyâ") is the result of superimposing a false view on the world-system. It is called "universal" because it touches all possible sensate & mental objects. It is an "illusion" because this is like obscuring what is at hand with something not at hand. γ.1 If no object of knowledge can be found able to resist the ultimate analysis proving its lack of substance, then the appearance of independent & separate permanence is problematic. If all objects lack existence from their own side, self-settled, then no object should appear as such. If all do, one must conclude all conventional thinking, although valid logically & functionally, is bewitched, i.e. as it were "under the spell of Mâra", destroying the wisdom realizing emptiness, leading to mental obscurations and afflictive emotions. γ.2 This explains why only great compassion, skilfully exploiting dependent-arising, is able to prepare the mind to sober up and break through all possible substantial instantiations, prehending the world-system only in terms of the existential instantiation (cf. infra). Is this universal illusion the price we pay for coupling our sentience with biological systems like the Hominidae ? Is this "the Fall" ? Then salvation is like merely recognizing the nature of mind. We are no longer naked, but may choose to take off our clothes at any moment ... Mental objects may last but are not permanent. Mindstreams at least last a lifespan, if not longer ... Sensate objects, produced by perceptions and interpretations, are also impermanent. Some very much so, while others enjoy a long abiding. But eventually, they too will cease.
To this uncertainty is added the illusionary nature of these objects, for they appear as if being "out there" and "self-powered", but are in fact devoid of any trace of findable own-nature. Like in a dream, things are not what they seem. Consider the consistency of the dream itself, especially its solid physics. As soon as gravity comes into play, the conventional mind as it were automatically reifies its objects. This is nearly a reflex. We are drawn back to "believe" a wall is a solid object "out there". We are sure this can be found to be the case. Common sense is based on these hallucinated assumptions. Take out gravity, and the deeper microlevel comes into perspective. Objects flash in and out of existence, and their properties depend on how they are being observed. They are also dependent & non-local (universally entangled). Likewise, on the macrolevel, conventional objects moving very fast experience the dilation of time and the contraction of length. How can these properties be reconciled with the conventional objects of common sense ? Clearly, the question about the ultimate truth of phenomena comes first. E. Ultimate Suchness/Thatness. The ultimate nature of all possible phenomena can be proven, expressed and experienced. The proof purifies the conceptual mind to let go of reification (the substantial instantiation). The expressions of ultimate truth are non-conceptual, poetical. Its experience is direct. This calls for (a) conceptualization without reification, cutting the discriminating mind and (b) the direct prehension of the ultimate nature of all things. As a continuous symmetry-transformation, this awakened continuum of pure radiant awareness, empty of intrinsic existence, never ceases. It gives rise to the "special" apprehension or prehension of a pure mindstream experiencing the absolute truth continuously. From the side of this enlightened or awakened mindstream, nothing but the absolute truth prevails ("dharmakâya"), but insofar as this Clear Light* bodhi-being aids others, it assumes bodies of form ("rûpakâya") manifesting great compassion ("mahâkarunâ"). The body of truth represents the Suchness ("tathatâ"), the transcendence of the absolute, the ultimate. The bodies of form are its Thatness ("tattva"), its immanence or being right "there" as reliable before us. In an unmistaken mind, these two continuously happen together. § 1 The Katapathic View on the Ultimate. α. In the positive approach of the absolute, it is deemed possible to describe the ultimate (both as reality and truth), to conceptually identify its properties and to convey this to others by means of "holy words". α.1 The hieroglyphic script is a monumental example of this principle. Here the glyphs themselves possessed operative power ("heka" or magic). In the monotheisms of "the book", God inspired His prophets to write down what He wants for us (as in the case of the Bible) or He made His tale directly descend (as with the Koran). In the East, this positive tale is found in the descent ("avatâra") of the Gods themselves, incarnating as gurus embodying cosmic consciousness ... Alas, nothing of this endured ! α.2 The katapathic approach has an absolutist conceptual framework to offer, one in which the absolute -as God, Gods or Goddesses- becomes the supreme reified object. Such a framework is possible but invalid. β. Insofar as the katapathic view goes, conceptual knowledge should at least be able to convey a conceptual message from the Divine.
But this can only happen if our natural languages somehow "connect" with the Divine by force of an onto-semantic aduality supposed to inherently exist between the absolute and human language. β.1 As ultimate analysis, by evidencing how all conventionalities (like languages & concepts) are relative and impermanent, proves the absence of such an onto-substantial aduality, the katapathic view cannot be properly argued. No "natural bridge" between concepts and the absolute can be found. β.2 Is this adherence to one then a wrong view fed by emotional familiarization & faith ? The more religion reifies, the more violent the confrontations with other-believers may be. Insofar as such exercises of faith are viewed as anthropological data, these blind beliefs deserve respect, but in terms of the longing for wisdom, they are worthless. γ. The importance of conceptual preparation must be clear. To purify the conceptual mind, reification must end. Then, by way of existential instantiation, concepts are merely logical & functional. By definition, the conceptual mind cannot touch the absolute, prehended by non-conceptual nonduality only. But if the conceptual mind remains tainted by gross, subtle & very subtle obscurations (substantial instantiations), then, by definition, such prehensions are also impossible. So one needs both the purified conceptual mind and nondual prehension. The first formal thinkers believed concepts represented the absolute. The illusion of permanence, objects at their face value, was identified. Substantial objects & subjects emerged, hindering the production of novelty and the élan of creative advance. After two millennia of vainly seeking stability, the Copernican Revolution brought about the understanding that conceptual reason cannot find any self-powered object at all. Concepts are convincing overlays, suitable fabrications & potent hallucinations. Ergo, the concept of the Divine as a "substance of substances" is an anachronism. The tale of the Divine is necessarily merely the way of the sublime poet and his fleeting, transient and rhapsodic conceptualizations devoid of self-settled powers. § 2 The Apophatic View on the Ultimate. α. In the apophatic view, there is no Divine tale to give. Language and its concepts never suffice to convey anything concerning the absolute. Only direct, nondual experience is of any use here. Conceptual preparation is accepted, of course, but it is never the cause of awakening, for the latter is beyond any possible affirmation, denial or combination of both. To give credence to any Divine tale beyond its playful poetical value is unreasonable and so rejected. β. To enter the mind of Clear Light*, a clear-crisp conceptual mind is the necessary condition of "purity". Such a mind no longer substantially instantiates its objects. But to "see" emptiness this does not suffice. Nondual cognitive prehensions must "cap" the activities of this purified conceptual mind, allowing the awakened mind to profoundly rest in its existential instantiations, continuously enjoying the manifestation of the union of wisdom & compassion, of formless & form. γ. Transcendent metaphysics is possible. But these speculations do not articulate valid metaphysical statements. Only immanent metaphysics is able to claim any validity in the rational sense of the word, i.e. as part of an argument. γ.1 Transcendent metaphysics is "valid" in the sense it too works in terms of object & subject, albeit in an absolute extension.
γ.2 Not all poetry is the same or of the same artistic value. So as a criteriology dealing with the hermeneutics of poetry, transcendent metaphysics may have a future. Un-saying does not mean nothing can be said. It merely points out concepts, words & languages do not suffice in describing the mystical experience, the unveiling of the concealed, the recognition of existence as it is and just that. This is ineffability, like the smell of a rose, wordless. The conceptual mind cannot grasp the denotative sense of what mystics experience directly, i.e. the nondual, non-conceptual inseparability of bliss & emptiness of the mind of Clear Light*. The apophatics do speak about their experiences, but only in a connotative sense, stressing that no logical acceptance or denial is able to describe this nondual state beyond all possible affirmation & negation. But if something in addition to what is explicit is implied or suggested, then the Clear Light* has all possible Divine qualities, it is eternal, unchanging, unborn, etc. The danger of a relapse into katapathic theology or buddhology is real here. The mind of devotion has a tendency to invent too many metaphysical compliments. Absence of denotation means a science or metaphysics of the actual station-of-no-station of ultimate enlightenment is impossible. But although no positive, denotative & significant sense can be established, awakening can and must be the object of poetical licence. A hermeneutics of mystical language and a transcendent metaphysics of awakening are therefore not out of the question, nor is a scientific preparation of bodhi-mind. But to conceptually catch non-conceptuality is impossible. § 3 The Non-Affirmative Negation. α. An affirmative negation negates A and by doing so affirms B (when negating "day", "night" is affirmed). A non-affirmative negation negates A and affirms nothing else. When the set of all properties of A is negated, the object itself vanishes. This vanishing is not an instance of nondual cognition (a prehension), but -when carried through on all sensate & mental concepts- the end of the purification of the conceptual mind. This pure conceptual mind is the precondition of prehending emptiness, the true nature of all objects of cognition, but not the cause of such an unmistaken mind. β. The object of negation, or what is to be negated, is not the subject or the object of cognition, nor is it the duality at work between these. Neither is it the absence of these, the union of these or any combination of these. What needs to be exhaustively & non-affirmatively negated in order to condition the mindstream to realize ultimate truth, is the reification of any thing, x or ¬ x. Call the mental operation actually doing this "zero-ing". γ. Zero-ing purifies the conceptual mind, making it step by step suppler and more transparent. Then, at some point, this allows the mind to undo itself of its reified concepts & substantivist conceptual elaborations, as it were piercing through the generic image it made of all emptinesses, purging itself from the last remnant of very subtle reification. At some point, the fabricated approximation, appearing less dense after each and every negation, is gone and the world as it is is prehended. Purifying the conceptual mind is arresting substantial instantiation and eliminating the cause of these instantiations. A calm mind is necessary. This is a concentrated & compassionate mind. Meditative equipoise is perfect concentration on any object of the mind.
When this is done with coarse objects, the practice extends to subtle & very subtle objects. Then the mind takes the emptiness of any object as its object of concentration. When able to analytically investigate emptiness and stay perfectly calm (with a sole focus on the emptiness of all possible objects), special insight dawns. This new ability needs then to be trained. Eventually, a totalizing generic image of all possible emptinesses is reached. When the emptiness of this generic image is clearly realized, the reification of concepts has come to an end. The last concept, the emptiness of the generic idea of emptiness, is non-affirmatively negated. With the elimination of all acquired substantivism, innate self-grasping can be addressed. This refers to the obscurations present in the ante-rational modes of cognition. When these too have been reversed, the complete continuum of the conventional mind is finally purified and awakening (the realization of bodhi-mind) may manifest. § 4 Fabricating the Ultimate : Ending Reified Concepts. α. First ultimate logic needs to be understood. After many decades of daily work, this can be done by conceptually grasping the instantiations step by step. Applying them by using various inner & outer objects brings about a generic idea of emptiness. It is called "generic" because it relates to all members of the set of possible cognitive objects and their emptinesses. It is as if analyzing all the rooms of a house before presenting a synthesising picture of the house. But this mental procedure is still non-meditative and born from the conceptual activity of the apprehending mind. α.1 During unwavering concentration in equipoise tranquility on this generic, totalizing idea and its emptiness, the moment comes when the conceptual mind as a whole is purified. The next moment is not yet the direct experience of emptiness, but merely a perfected approximation. When this happens, no coarse & subtle obscurations (discriminations) are left and the mind is fully prepared for the nature of mind to shine through unimpeded. The moment this nondual Clear Light* actually penetrates the purified mind -no longer reifying conceptuality-, the direct experience of non-conceptuality starts. α.2 The actual moment bodhi-mind begins is spontaneous, uncontrived and born out of nothing (not caused). Likewise for all possible prehensions of the nondual, nonconceptual mind. β. As long as emptiness is approached indirectly, the reification of concepts (their substantial instantiation) has not thoroughly ended, and so -at a subtle level- the mind is still impure, tainted, obscured, ignorant. But the generic idea is a ladder, a totalization of all possible conceptualization regarding the emptiness of persons and phenomena. β.1 By taking this idea as the basis of concentration, the reification of all possible concepts can be undone and when this happens on a continuous basis, the process of purification of the conceptual mind has ended. Coarse & subtle obscurations stop and the purification of the very subtle innate reification (born out of ante-rational cognitive activity) begins. β.2 Slowly the opaqueness of the generic idea fades, becoming absolutely transparent. But this transparency is not the cause of the experience of emptiness. Fully recognizing the mind of Clear Light* is needed. γ. When, after purifying the conceptual mind, emptiness is directly witnessed for the first time, nondual cognition is no longer put on hold and the process of its (non-conceptual) emancipation may begin.
This happens by purifying the mind from the process of reification still active in the mythical, pre-rational and proto-rational modes of cognition. The essentializing activity of the conceptual mind (in its formal, critical & creative modes) is acquired. To enter nonduality, the very subtle reification to eliminate is innate.

γ.1 Only when the minds associated with the first six modes of cognitive activity have been thoroughly purified by dereifying their objects, is the mind like the purest diamond. Then there results, with reference to the "grasper" (the knower), the "grasping" (the knowledge) and the "grasped" (the known), a complete coincidence with that on which consciousness abides & by which it is "anointed". The hexagonal mind loosens the knots of ignorance, and when the fuel of the fire is gone, the fire goes out.

γ.2 This is not awakening yet, but the final purification of the mind as a whole, the stepping-stone to Buddhahood.

∫ A mind lacking compassion may misconstrue the end of conceptual reification (the purification of the conceptual mind) as the first moment of awakening.

The purification of the conceptual mind leads to the end of reification. At this point, not a single object is deemed substantial. All is process, i.e. dependent-arising defined by momentum, architecture and sense. This purity can be trained by way of study, reflection and meditation. This is the science of preparations. To understand all logical possibilities and to be able to conceptually grasp absence of inherent existence can be done without meditation, but this does not lead to the end of reification ; it is merely a start and may lead to nihilism. Balanced concentration on a single coarse object like a flower is not easy. To realize the meditative equipoise of calm abiding, abstract objects are even more difficult. Successful calm concentration on the emptiness of any object is the next step. Neither a coarse object nor an abstract object is at hand, but their ultimate property, their emptiness. This has to be epistemically isolated. Often, analysis makes calmness leave. Likewise, too calm a mind cannot find the impulse to analyze. So to achieve special insight, coupling calm abiding on emptiness with analysis of emptiness takes years of long meditative sessions. When this superior seeing is finally realized, the analysis of emptiness enhances tranquil concentration on emptiness. This leads to profound encounters with the absolute property of each and every sensate or mental object of mind. With superior seeing a generic image is construed. Realizing its emptiness is the purification of the conceptual mind, the end of reification. The end of reification is not yet "seeing" emptiness, nor is it awakening. To "see" emptiness the mind of Clear Light* has to be non-conceptually prehended. A purified conceptual mind is therefore a necessary condition but not a sufficient condition. To awaken, the mind as a whole needs to be purified, not only from its acquired obscurations, but also from the innate. What is realized at the end of the purification of the conceptual mind is not a direct experience of emptiness, but the very subtle conceptual realization of emptiness. The mind has indeed been freed of self-cherishing and acquired self-grasping has been eliminated. In itself, this is a very high spiritual achievement, endowing the mindstream with lasting, irreversible qualities. But although lofty, this proximate emptiness is not the same as actually "seeing" emptiness.
It is still contrived, and thus planned, manipulated and somehow artificial. It remains conceptual, albeit on a very subtle level. But precisely because it is conceptual, it cannot be said to be a direct, immediate, natural, spontaneous realization.

§ 5 The Direct Experience of the Unfabricated Ultimate.

α. The direct experience of the ultimate is ineffable. It is non-conceptual. One cannot describe the smell of a rose. Pheromones have a vocabulary of their own. Denotative conceptual rendering is impossible. Likewise, the exact nature of any atomic particle before observation is terministic and paradoxical. How to explain "superposition" conceptually ? Only in the language of mathematics can this be done. But one may, no doubt influenced by its smell, compose a poem about the rose.

β. To witness the unfabricated nature of emptiness calls for existential instantiations (a pure conceptual mind) and a prehension of the absolute. Duality is constantly carried to a point at infinity and so nonduality is what remains. Nothing positive can be said here. The poetry of the inseparability between directly seeing emptiness (wisdom-mind) and the interconnectedness of all events is what is left ...

γ. Because duality remains present even at a point at infinity, nondual cognition is bound to experience the conventional and the absolute simultaneously. There is not a single truth, but two truths. Although one of the two is unfabricated and the other is contrived, the conventional (the result of collective delusions) is part of the equation. The latter brings in compassion again, for what use is absolute truth if not all sentient beings share in the same direct experience ?

Suppose the logical boundaries established by criticism are like the frontiers of the country of conceptual thought, bordered on one side by the non-conceptual mind of Clear Light* and on the other by all pre-conceptual and conceptual modes of cognition. Insofar as philosophers turn away from this demarcation, as Kant did by denying intellectual perception its place, and so never point to what lies beyond the border of conceptuality, their view on emptiness does not even take the Clear Light* into consideration. This would be a "dead" interpretation of emptiness by way of the "dead bones of logic" (Hegel), one limited by conceptual thought and missing the purpose of ultimate analysis : to end reifying concepts by way of concepts, the precondition for looking over the border towards the country of Clear Light*, a country the existence of which, as yogic perceivers show, cannot be denied ! Of course, logically, as Descartes pointed out, the "lumen naturale" or mind of Clear Light* is before any possible conceptualization. For Critical Mâdhyamaka, and correctly so, no logic is able to refute the Middle Way. Nothing about "nirvâna" can be affirmed (all eternalization avoided), and emptiness meditations on the mind itself find no ground to reify any part of its operations. Thus eliminating its substantial instantiation, consuming all possible fuel, extinguishes the fire of reification and makes the mind effortlessly & spontaneously (not causally) arrive at the "other shore" (all nihilism avoided). The mind, besides being known by conventional knowledge as an object of conventional truth, is also known by ultimate knowledge as an object of ultimate truth, i.e. lacking inherent existence.
While the mind of Clear Light* is not a substantial part of the objective side of the view, it is introduced by accomplished yogis as a hypothetical subjective fruit each & every sentient being may, with due effort, directly experience. This refers to the presence of an enlightenment potential in all sentient beings. This is not the same as logically affirming Divine qualities inhere in this potential from the start. Instead, they are generated as the result of emptiness-meditations on the mind, turning successful because all sentient beings possess the potential for enlightenment from the start. Affirming the ineffable empty nature of this wisdom-mind does not hinder master yogis from construing the Clear Light* as an interpretative, non-empty object of poetry, praising its inherent qualities, said to endure despite adventitious ignorance & defilement. In fact, the profound yogic experience of Dzogchen & Mahâmudrâ experts confirms this to be the case, and this despite the definitive logic proving conceptual thought cannot penetrate non-conceptual, ultimate truth. However, from the side of logic, these accomplished yogis with their sublime poetry only inspire, uplift and act as excellent & sublime examples. This has to be made very clear, for the object of this art of the Great Perfection, positing the inseparability of the primordial base (objective "dharmadhâtu" or "khunzi") & the mind's natural clarity (subjective mind of Clear Light* or "rigpa"), has no conceptual ground whatsoever. Within the country of concepts, Nâgârjuna's logic is final ; nothing can be affirmed about ultimate truth ! No logical, conceptual path leads to the beyond of discursive thought, only to its border, and so one is left to develop concepts ending the reification of all concepts. This is however not the end of reifying cognition, at work until the last, tiniest drop of reifying fuel is burnt and beyond !

Let us summarize this in the traditional way (cf. Kamalashîla) :

(1) the path of accumulation : the mind is made pliant (compassionate) by generating the mind of awakening for the benefit of all sentient beings ("bodhicitta") and emptiness is conceptually studied, reflected upon and taken as an object of meditation on coarse (outer), subtle (inner) & very subtle (secret) objects. Special insight ensues when calmness & analysis can be combined in such a way they reinforce one another ;

(2) the path of preparation : using this special insight or superior seeing, a generic, highly refined conceptual image of the emptiness of both persons & phenomena is realized. A very subtle conceptual generic idea of emptiness results. The conceptual mind (with its formal, critical & creative modes) is completely purified and acquired self-grasping ends. All coarse & subtle obscurations end, but very subtle ignorance remains.
An approximation of the direct experience of emptiness is realized ;

(3) the path of seeing : emptiness is directly observed for the first time, without the use of concepts, but non-conceptually in the nondual mode of cognition - this is a decisive turning-point, implying genuine transformation of mind ;

(4) the path of meditation : to further stabilize the nondual mind, innate self-grasping -resulting from the residual activity of the ante-rational mind (with its mythical, pre-rational & proto-rational modes)- is tackled, and so the very subtle obscurations (escaping the purification of the conceptual mind) are gradually totally eliminated ;

(5) the path of no-more-learning : the hexagonal mind (with its six modes of cognition, three ante-rational & three conceptual) is totally purified from all possible coarse, subtle and very subtle obscurations, leading directly to complete, irreversible and total awakening, prehending emptiness and dependent-arising simultaneously.

F. The Ontological Scheme.

The heart of ontology is the logic of the ontological principal, the leading idea acting as common ground shared by all possible things, existing, nonexisting or fictional. In the present critical metaphysics of process, this ontological principal is not a substance, but a process. It is not self-powered, self-settled, but other-powered. Perfected, these actual occasions show a continuous kinetography, unchanging architectures of change. But these continua are nevertheless always grafted onto the coordination of movement, of changes in momentum, code & sense. Awakened, a continuous symmetry-transformation or holomovement is at hand (devoid of suffering). Mostly however, the kinetography of change is discontinuous, i.e. a-symmetrical (causing suffering). Because the ontological principal is a process, it cannot be identified with the substances "matter" or "mind". In fact, the deeper, more profound leading principle is common to both. The ontological scheme is a sketch of the basic concept of this metaphysics of process. This is based upon the most concrete elements at work in our direct experience, close to how things are found ; as a stream of experience constituted by "droplets", "drops", "events" or "moments" of singular, individual experience. These are the final things of which the concrete world is made up. Nothing more can be found behind them. Nothing more real can be found.

§ 1 Event & Actual Occasion.

α. Consider streams of events constituted by singular droplets of happenings acting together. These are interdependent phenomena, each being the outcome of other-powers, namely determinations & conditions other than the event at hand. Actual existence is that which happens. Virtual existence is that which may happen.

β. Every event has duration, and so starts, abides & ceases, so it may reemerge. An event is therefore not a momentary instance, a single element of what happens (or could happen), but a very short event-interval eₙ with duration Δt, running from t to t + dt, packed with actual happenings. Ergo, events cannot serve as the ontological principal. We need to move to a more fundamental level, and ask what constitutes a single event-interval ? Merely instances, moments or "droplets" of things actually happening. So, each time something is happening, there is an actual occurrence in the world.

γ. Actual occasions happen in the world. They are per definition concrete, i.e. embodied by momentum, organized by laws and object of sense (or meaning apprehended by a possible observer & knower).
What happens in the world is "the concrete" and there is nowhere "another world". The transcendence of this world, the world-ground, is not other-worldly, introducing a (Platonic) rift in ontology and positing more than one ontological plane ; neither is it concrete, but merely abstract. There is only the world and so only a single ontological plane.

γ.1 The world-ground is not a transcendent Real-Ideal, but abstracts of definiteness prefigurating, in terms of altering fields (frequencies) of likelihood, the world-to-come. The world-ground is merely the possibility of the next moment of the world, not another world, a "richer" ontological ground, nor a self-sufficient ground. It is the probability of actual, concrete happenings. But neither is this pre-existent abstract realm of propensities devoid of primordial momentum, architecture and Clear Light* sentience. It is a "nothingness" in which the possibility of becoming is afloat & intelligent !

γ.2 The world-ground contains the infinity of all possible (potential, probable) abstract prefigurations of all possible future worlds. This is its primordial architecture, form or information (creativity). But it also encompasses all virtual energy states (primordial momentum, matter) and all possible choices for unity & harmony (primordial sentience, consciousness).

γ.3 The world-system is constituted by concrete actual occasions (the world) and by primordial formative abstracts (the world-ground). The world-system is all things actual & virtual (possible, likely, probable).

δ. Let us call "actual occasion" a single droplet part of the many drops constituting a single event. Because this actually does occur, it is an actual occasion. Because this occurrence is worldly, it is concrete. Actual occasions are the basic elements of the togetherness of actual events and of actual, existing entities and they are individual & particular.

δ.1 These instances never happen "on their own", but are always actualized in concert with others, shaping novel togetherness (creative advance). They depend on determinations and conditions foreign to their own dynamic characteristics or principal ontological properties. The latter are not a fixed, substantial core, but a given form of movement, a particular style of kinetography.

δ.2 By virtue of its ontological properties (efficiency & finality), the fruit or effect of the kinetic style of a single actual occasion adds its own to the ongoing sea of process. As small changes may have huge effects, a tiny cluster of actual occasions can be enough to influence the whole movement. So all is dance, a display of energy from the base.

ε. The unit, principle or standard of a stream of events is therefore not the very short event-interval eₙ, but its infinitesimal differential interval oₙ.dt, the ultimate abstraction pointing to a single instance or isthmus of actuality. In terms of the ontological properties of an actual occasion, this singular, momentary droplet oₙ has itself differential extension, i.e. is characterized by process on an infinitesimal scale.

ε.1 Even on this immeasurably small scale, properties emerge. These ontological properties, attributes or aspects of any actual occasion (the smallest possible unit of change) are themselves a process (interdependent), not a substance ; they do not constitute themselves but are constituted by others. These properties emerge as a result of the interplay between any two actual occasions.
The differential moment has architecture and choice, in what, without this, would only be a barren transmission from this to the next actual occasion of the probabilities of momentum & position, a priori devoid of any creative advance. If this would be the case, then the novelty happening in the world could not be properly explained. ε.2 The jumps from virtual to actual (Big Bang), from the actual primordial soup to interstellar activity, from interstellar physics to biological systems, from biological organicity to sentience etc. evidence the evolutionary implications of the ontological properties of actual occasions, their ongoing creative advance. Starting with matter, the efficient determinations prevailed over the informational & sentient operators. When the basic order of the universe had been put in place, the further complexification of matter & information eventuated life, the possibility of negentropy, fertilization & instinct. Only at the far end of this evolutionary interval does sentience appear. ζ. The extensive plenum of the continuum of an actual occasion can be : (a) spatial : as in the case of geometrical objects ; (b) temporal : as in the case of the duration of mental objects ; (c) spatio-temporal : as in the case of the endurance of sensate objects. All actual occasions have this extensiveness in common. The extension of actual occasions over each other is crucial to grasp the possibility of the novel togetherness of actual occasions unfolding creativity and shaping the creative advance of the world. This horizontal passage of events or passing of Nature brings in temporality. For essentialism, the principle "operari sequitur esse" holds. This means every process is owned by some substance. Here one thinks substance first and then views change as accidental to it. Process thought inverses the principle : "esse sequitur operari" ; things are constituted out of the flow of process. So things are what they do. Change is thought first and things are momentary arisings, abidings, ceasings & reemergences of dynamical units. A process is an integrated series of connected developments coordinated by an open & creative program. It is not a mere collection of sequential presents or moments, but exhibits a structure allowing a construction made from materials of the past to be passed on to the future generation. This transition is not one-to-one, not merely efficient, for the internal make-up of its actual occasions shapes a new particular concretion, bears finality allowing for creative advance or novelty. In modern times, the standard bearer of process metaphysics  was of course Gottfried Wilhelm Freiherr von Leibniz (1646 - 1716). The fundamental units of Nature are punctiform, non-extended, "spiritual" processes called "monads", filling space completely and thus constituting a "plenum". These monads or "incorporeal automata" are bundles of activity, endowed with an inner force (appetition), ongoingly destabilizing them and providing for a processual course of unending change. And it was in the writings of Leibniz that Whitehead, the dominant figure in recent process thought, found inspiration. Like Leibniz, he considered physical processes as of first importance and other sorts of processes as superengrafted upon them. The concept of an all-integrating physical field being pivotal (cf. the influence of Maxwell's field equations). But unlike Leibniz, the units of process are not substantial spiritual "monads", but psycho-physical "actual occasions". 
They are not closed, but highly "social" and "open". Actual occasions, the units of process, are Janus-faced : they take from the past and, on the basis of an inner, finative structure, transform states of affairs, paving the way for further processes. They are not merely product-productive, manufacturing things, but state-transformative. Although indivisible, actual occasions are not "little things", but a differential interval of change "dt" explained in terms of efficient & final determinations, the vectors of change. Actual occasions are not closed (not self-sufficient like substances), but fundamentally open to other occasions, by which they are entered and in which they enter. Thus their perpetual perishing is matched by their perpetual (re)emergence in the "concrescence" of new occasions. These occasions always touch their environments and this implies a low-grade mode of sentience (spontaneity, self-determination and purpose). They are thus living & interacting droplets of elemental experience. They are part of the organic organization of Nature as a whole, but constitute themselves an organism of sorts, with an infinitesimal constitution of their own. Nature is a manifold of diffused processes spread out, but forming an organic, integrated whole. As was the case in the ontology of Leibniz, macrocosm and microcosm are coordinated. Not because each actual occasion mirrors the whole, but because they reach out and touch other occasions, forming, by way of complexification, aggregates and finally individualized societies of actual occasions. § 2 Efficient & Final Determinations of an Actual Occasion. α. Actual occasion x is momentary (at that instance) and actual, i.e. logically & functionally present here and now. Abstracted as standing alone, the differential interval "dt" out of which x is constituted has an extensive continuum, albeit momentarily. x has outer and inner relations of extension, i.e. in respect to other (earlier or future) actual occasions and to itself. These definite ontological particularities of each actual occasion involve extrinsic & intrinsic ontological properties. β. The extrinsic ontological properties of an actual occasion are the temporal-efficient connections of actual occasion x with the one before (x-1) and with the upcoming one (x+1). They happen in time, take space and operate certain determinations & conditions related to the momentum (energy or matter) of x. So they are called "efficient", i.e. directly bringing change (enhancing process). This gives the ongoingness of process its stream-like or wave-like characteristics. It is the exteriority of an actual occasion, its horizontal vector. γ. Intrinsic are : (a) the information (architecture, code, form, software) available to the actual occasion regarding other actual occasions, its informational weight and acquired degree of formal integration of data (in abstract operators) and (b) the weighed choice (sentience) successfully advantaging a certain efficient outcome by manipulating its probability-fields. γ.1 Ultimately, this choice aims to actualize the greatest possible unity & harmony in and for the forms of (novel) togetherness involving all possible other actual occasions. But due to lack of information and/or bad choices, this is mostly limited to the immediate environment and merely local interests. 
γ.3 Both informational & sentient operations define the interiority of an actual occasion, namely what it (momentarily) gathers as for itself (as a momentary "self" or imputation of subjective identity). This refers to the boundaries of actual occasions and what happens within them, to their particle-like or droplet-like spatiality and geometry. This is the vertical vector of an actual occasion, defining order & choice.

δ. The extrinsic (efficient) & intrinsic (final) ontological properties of the ontological principal, defining two modes of existence of an actual occasion, only exist as long as the moment endures. But they do define the flash-like impetus of this ephemeral moment to the next, as well as the possibility of x to influence x+1.

In the organic totality of the world, an actual occasion is the smallest unity of process. Each momentary occasion exhibits a perpetual va-et-vient between two modes of existence (or ontological properties) : an objective mode, in which it only exists for others ("esse est percipi"), and a subjective mode of existence, in which the actual occasion is none but subjective experiential properties ("esse est percipere"). In the first, objective mode, a physical experience is at hand, explained in terms of the horizontal vector of the action of efficient causation. In the second, subjective mode, a mental reaction ensues, bringing about the vertical vector of final causation. Actual occasions, contrary to Leibnizian monads, do communicate with other actual occasions. In terms of a logical order, an actual occasion "begins" with an open window to the past, showing previous actual occasions x-1, i.e. the efficient determination of the past world on it. Next, it responds to this past actuality physically. Simultaneously, it cross-wise puts into place its own current inner & dynamic ideality, drawing out possibilities of what was received and weighing the options in order to favour a single outcome by way of choice. By doing so, each actual occasion exercises final determination, showing differential self-determination, spontaneity & novelty. The difference between efficient and final determination is analogous to the difference between actual and potential in quantum mechanics, brought about by the "collapse of the wave-function" (Bohr, Heisenberg, von Neumann, Schrödinger), turning an infinite number of potential possibilities (given by the vertical vector) into a single actual one (a singular horizontal vector). Choice ends the order of subjectivity, but the actual occasion does not perish. The end of its subjective experience is the beginning of its existence as efficient determinant on subsequent actual occasions, being the physical past entering their event-horizon, and reemerging there. Actual occasions are therefore never in "one place" or "solitary", but a fortiori enter in each other's process (togetherness or concrescence) and so define continua of occasion-streams. They are interconnected momentary events, not isolated (Olympic) enduring substances. Because of this inner, non-physical mode of existence, each occasion has a degree of consciousness (self-determination, spontaneity & novelty). This is not the same as saying occasions have an "inner life" in the way humans experience this. The subjective mode of actual occasions rules a weighing procedure effectuating a decision.
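As a purely illustrative aside (not part of the author's formalism), the weighing procedure just described can be caricatured computationally : possibilities are inherited with efficient weights (the horizontal vector), re-weighed by a finative preference (the vertical vector), and "collapsed" into a single actual outcome that becomes the efficient past of the next occasion. The function name, the numbers and the mapping of terms to code are all invented for this sketch.

    import random

    def concresce(inherited, lure, rng=random):
        # Toy model only : "inherited" maps possible outcomes to efficient weights,
        # "lure" maps outcomes to a finative valuation in [0, 1].
        weighed = {o: w * (1.0 + lure.get(o, 0.0)) for o, w in inherited.items()}
        # The "collapse" : one weighed possibility is made actual ; it will enter
        # the window of the next occasion as its efficient past.
        outcomes, weights = zip(*weighed.items())
        return rng.choices(outcomes, weights=weights, k=1)[0]

    # usage : three inherited possibilities, with a mild preference for "B"
    past = {"A": 0.5, "B": 0.3, "C": 0.2}
    preference = {"B": 0.8}
    print(concresce(past, preference))

The sketch is only meant to make the two-vector picture concrete : efficient determination fixes what is on offer, final determination reshapes the odds, and choice yields exactly one actuality.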
And as the outcome of each actual occasion is richer than what physically, by way of efficient causation alone, would have entered its window of past actualities, novelty is possible. Because of this, creative advance ensues. § 3 The Three Operators. α. The two modes of an actual occasion (objective & subjective or efficient & final) encompass its three known aspects : matter, information & consciousness. These appear as integrated explanations of the functioning of the organic totality known as "Nature", "world" or "the concrete". They refer to specific descriptions (of theories and data) of irreducible but interdependent facets of each actual occasion. β. Efficient lawfulness and the objective mode of each actual occasion (the horizontal vector) call for the physical aspect of matter, while final determination and the subjective mode (the vertical vector) call for the aspect of abstract validation (information) and a degree of participatory self-determination (consciousness). β.1 These define ontological boundaries, allowing for a better understanding of the ongoing process of what is actually happening. These are not principles, or worse, substances, but merely aspects explaining physical objects, informational content, its value, and states & contents of consciousness. matter : hardware, sub-atomic, atomic, molecular, cellular, physiological, societies of actual occasions, encompassing particles, waves, fields & forces or the domain of the physical - the Real Numbers system ; information : software, embodied or disembodied notions, ideas, languages, logics, theories about actual occasions ; this is then the domain of the informational - the Natural Numbers system ; consciousness : userware, the self-determination, spontaneity, novelty & participatory sentient grasping of actual occasions or the domain of the conscious - the Complex Numbers system. β.3 The domain of the physical is not exclusively material. Indeed, the actual occasions constituting it do possess (on the most fundamental ontological level) information & sentience (but in a lesser degree). Likewise for the domain of the informational and the domain of the conscious. γ. General process ontology posits bi-modal actual occasions with their three functional domains as the ground of all possible phenomena, existing things, objects, entities or items. Each actual occasion has a physical (efficient, objective) and a mental (finative, subjective) mode ; its horizontal & vertical vectors respectively. The arising of actual occasions is caused by previous actual occasions, and this entry of past actual occasions in what happens hic et nunc is by way of efficient causation. The abiding of each actual occasion is its internal structure, causing choice, decision or self-determination. Whenever a choice is made, the actual occasion ceases, but this perishing brings about an efficient influence on the next actual occasion, and this influence has integrated the work of final determination by way of sentient manipulation of properties. δ. The three operational domains at work in every single actual occasion also operate on every scale of togetherness of actual occasions. Hence, it also applies to the world as a whole, and even extends to the world-ground, albeit in a primordial, virtual sense. In the case of the world-ground however, not being an actual occasion, these three do not refer to operational determinations & conditions but merely to the probability or virtual possibility of the latter. 
They are pre-existent probabilities for the rise of matter, information & consciousness and their creative concert.

ε. The primordial conditions of the material aspect of the world explain how quantum events pop in and out of existence. They point to the primordial quantum plasma, i.e. a nothingness "potentized" to actualize and become some material thing. The primordial conditions of the informational aspect of the world are an infinite number of possible forms, architectures, codes or organizations likely to actualize when the proper material conditions prevail.

ε.1 The primordial conditions of the sentient aspect of the world are the infinite consciousness of God* prehending all past and all current conditions and determinations of all actual occasions conjunctively and capable of (re)weighing the probabilities of material & informational objects.

ε.2 This absolute consciousness also extends into the world, and so is the sole actual occasion continuously bridging the world-ground and the world. Insofar as this is merely the potentiality of the highest possible unity & harmony, it is primordial. Insofar as this is actual, it is moving along with every possible actual occasion and so manifest. God* is the ultimate exception.

Specific process ontology applies this scheme of general process ontology to non-individualized compounds, aggregates or societies of actual occasions and to individualized societies of actual occasions. Let us see how this works in a neurophilosophy of process. There, in the two individualized societies of actual occasions at hand, namely the brain and the mind (cf. A Philosophy of the Mind and Its Brain, 2009), three irreducible domains or operators are constantly at work. These are derived from cybernetics, information-theory and artificial intelligence :

• hardware or matter : the mature, healthy, triune human brain is able, as a physical object ruled by efficient determination, to process, compute and execute complex algorithms and integrate all kinds of neuronal activity - the developed, individualized mind is able to be open to the efficient determinations resulting from previous moments of brain functioning ;

• software or information : the innate and acquired software (wiring) of the brain, its memory & processing speed - the individualized mind is an expert-system containing codes or knowledge to choose from when solving problems ;

• userware or consciousness : the mature brain works according to its own final determination, making choices to guarantee its organic functioning as a manifold and effect necessary changes in its environment - individualized consciousness or mind instantiates unified states of consciousness (moment to moment intentional awareness) as a percipient participator interacting meaningfully with its brain and the physical world.

§ 4 Aggregates of Actual Occasions.

α. Entities and their elements, events, are actual occasions interrelated in a determining way in one extensive continuum. A single actual occasion is a limiting type of an event (an entity, actuality or object) with only one member. The world is thus built up of these actual occasions. Events are aggregates of actual occasions. Entities are aggregates of events. Because they cannot be divided, not found standing alone, but only conceptually analyzed when abstracted (on the basis of their extensive continuum), actual occasions are called "atomic".

β.
The organic togetherness of actual occasions has various ontological levels, shaping an ontological ladder ranging from actual occasions, events, entities, to insentient compounds and individualized societies with varying degrees of freedom. Not only is matter complex (cf. hylic pluralism), also information & consciousness are layered.

β.1 Mere aggregates or compounds of actual occasions are not sentient. So traditional panpsychism, stating all possible things have a subjective mode, is not the case. Although the individuals part of an aggregate, namely the actual occasions themselves, do experience an infinitesimally small degree of self-unity, the aggregate itself does not. In terms of aggregation, rocks, rain, rivers, oceans, streets, cities, provinces, countries, continents, planets, artefacts, etc. are insentient.

β.2 Lacking any self-conscious finality, unable to name themselves in a self-reflective cognitive act, aggregates are ruled by efficient law only. Actual occasions, mental or physical, come together to form events and events come together to form entities or existing objects. Mental objects are actual occasions mainly processing their inner, subjective, vertical vector, but they do have a minimal efficient determination, namely the "stream" of moments of consciousness. Physical objects are actual occasions mainly acting out their outer, objective, horizontal vector, but they maintain a minimal final determination, namely in the architecture of their particles, fields & forces as well as in their receptivity to the direction given by the conserving cause of the world (the immanent aspect of God*), in particular their "loyalty" to the natural constants necessary to maintain the intelligent design (of themselves & the world) intended by the "Anima Mundi", the perfection ("entelecheia") of the world. Non-individualized aggregates of actual occasions are unable to be aware of the totality of which they are a part. A rock does not know it is a rock. Individuality implies a view on totality and its unity. Like the bird knowing it is part of a flock.

§ 5 Individualized Societies.

α. In individualized societies of actual occasions, interdependence and complex relationality engender negentropic dissipative systems. The most intricate of these is able to give a high-order degree of finality to the impulses of past efficient processes. Here human conscious life enters the picture, with each human being experiencing him or herself as a unity. But there are kingdoms lower than humanity increasingly demonstrating their individuality, namely minerals, plants & animals. Are there kingdoms higher than humanity ?

β. The crystalline architecture of minerals constitutes an intelligent factor, revealing a mathematical order at work behind what are merely interacting waves, particles, fields & forces. The photosynthesis of plants, their ability to multiply and specifically adapt to their immediate environments defines a higher degree of liberty and allows for their individualization. The behaviour of animals is already very advanced, and telling of their differentiation as groups and in certain cases as specific context-bound individuals within a group. Finally, the sentient behaviour of humans, able to produce abstract cultural objects and transmit them, invokes a very high degree of freedom. At every rung of this ontological ladder, we see the three ontological domains becoming more complex. With the emergence of sentience, individualization gives rise to naming, labelling and conceptualization.
But this cannot happen without complex code and very sophisticated efficient determination.

γ. The domain of consciousness may be organized in degrees of freedom, beginning with a singular actual occasion and ending with all individualized societies of occasions.

γ.2 Such an intimate development of consciousness calls for a high-order complexification of mental actual occasions, one producing the complex, non-linear subdomain of human inner life. As on this planet this distinct type of sentient life is rare, all human life is by nature precious.

γ.3 All other complex individualized societies of occasions do experience themselves as a unity run by a hierarchy, and so fall within the field of panexperientialism. On Earth, the highest level is the dominant actual occasion of experience constituting the human mind. As even actual occasions, with at least an iota of self-determination, provide the lowest-level example of the emergence of a higher-level actuality, we may understand, in comparison, brain cells as highly complex centres of experiential creativity.

§ 6 Panpsychism versus Panexperientialism.

α. While individual occasions, which are not substantial, thing-like, but the common unit of process, possess, besides a physical, objective mode (efficient determination), also a mental, subjective, experiential mode (final determination), non-individualized aggregates or compounds of actual occasions do not manifest such a mental mode and are therefore insentient. They therefore mostly operate efficient determination and are physical, constituted by matter, analyzed in terms of particles, waves, fields, the four forces and the superforce, the infinite vacuum energy of the primordial quantum plasma (primordial matter). This infinite, undifferentiated energy is not an actual occasion. It is not concrete, cannot be abstracted, but is an abstract probability not without paradox.

β. The (massive) presence of insentient objects rules out panpsychism, i.e. the claim all things live. This claim is not made. All things experience something, and this in a non-individualized way (as aggregates) or in an individualized way (as societies). Moreover, the mental, subjective mode of a single actual occasion has the lowest possible degree of freedom. As all objects are composed of actual occasions, all objects, at the deepest ontological level, possess differential sentience. This is panexperientialism.

γ. The infinitesimal sentience of all possible actual occasions should not be compared with the activity of societies of actual occasions like the high-order conscious experience of human beings. Some societies of actual occasions are indeed individualized, i.e. share a self-image with an imago. Only when an actual occasion, by entering into other actual occasions (adding its concretion or internal make-up to others), helps bring actual occasions together, can the creativity of the sea of process eventually give way to these individualized societies of actual occasions consciously experiencing their own unity and this at various levels of freedom & harmony (as in minerals, plants, animals, humans and metaphysical entities).

γ.1 On this ontological ladder, the process of evolution and its natural creative selection is at work, producing more complex organizations of actual occasions interpenetrating each other. Because so many non-individualized aggregates can be identified, it is not the case all things are sentient.
γ.2 Lots of objects, while composed of infinitesimally sentient actual occasions, are totally devoid of any sense of sharing a "self", an awareness of possessing a common imago. Ergo, panpsychism is not the case. Not all things are sentient, nor are all things alive.

δ. The organic togetherness of all possible actual occasions has various ontological levels, ranging from actual occasions, events & entities (or insentient compounds) to societies, individualized societies with varying degrees of freedom.

δ.1 The highest level of freedom is the dominant actual occasion of what happens. On Earth, this is the human mind.

δ.2 Actual occasions, with their infinitesimal iota of self-determination, are the lowest-level examples of the emergence of a higher-level actuality. This is because of their creative input. This results from making the decision characterizing their mental, finative mode part of the efficient determination entering other actual occasions, appropriating data from their vicinity.

ε. In terms of efficient determination, the mind emerged from the brain. But in terms of final determination, the possibilities offered by the brain are "weighed" and then chosen by the mind (emerged from the brain). Moreover, the emergent property (the mind as an actual entity in its own right) is able to exert a determinate influence of its own (both final & efficient). Mental causation is not an epiphenomenon, for besides the upward causation from the body to the mind, there is the self-determination by the mind, and on the basis of this, downward causation from the mind to the body. This is possible because mind and body are not two different kinds of things, but both highly complex individualized societies of actual occasions, linked in a functional and interactionist way.

ζ. For panexperientialism, "physical entities" are always physico-mental (or, what comes down to the same, psycho-physical). Focusing on efficient determination, and the emergence of an independent mental out of the physical, actual occasions are physico-mental. But insofar as final determination is concerned, and because of the downward causation effectuated by high-order minds on subtle physical processes, actual occasions are psycho-physical. Both are complementary.

In the world, three major sets of specialized actual occasions are at work : matter, information & consciousness. These three give rise to the physical domain, the informational domain and the sentient domain respectively. These three constitute what actually happens in the world. Ontogenetically, the physical domain manifested first (with the Big Bang). Out of the unique singularity of this actual occasion (and its mental mode of finality) arose the expert-systems, the problem-solving architectures of the world aiming to bring about evolution-in-unity (complexifying homogeneity) in the ongoing physical processes. The interaction of matter and information gives ground to sentience to exert its ability to be aware of the momentum & architecture of objects possessed, grasped or apprehended by the knower and this in terms of the harmony of the unity between the known & the knower. These three ontological emergences are "outpourings" of specialized operational domains. The world-ground expresses the mere probability of the actual emergence of these ontological domains of the world. The world is sentient. Every actual occasion is sentient.
But between this lowest sentient rung of the ontological ladder and the highest (the totality of all actual occasions prehended by a single immanent & totalizing absolute consciousness), many levels of insentient objects share in the togetherness of all actual occasions constituting the ongoing sea of process. This is why panpsychism is not at hand. Nor is the "nature morte"-view of the world as a set of "disjecta membra" retained. Both the physical mode (matter) and the mental mode (information, consciousness) of all possible phenomena are important.

§ 7 The God* of Process Ontology.

α. God* is not the ultimate substance and final, absolute self-sufficient ground and self-settled self-subsisting essence ("esse subsistens") of all possible things. God* does not essentially (substantially) differ from the world. Although unique, God* is not the One Alone, the "idea" transcending all others, the "totaliter aliter" or "total other", the absolute absoluteness ontologically forever isolated from the world. God* is not absence of togetherness. He is not hidden ("Deus absconditus"). Under analysis, this "God" of reifying theology, this Creator, cannot be found. One may conclude such a "God" does not exist. But God* exists, both primordial and immanent.

β. God* is the unique non-temporal & non-spatial abstract actual entity giving relevance to the realm of pure possibility (primordial matter and primordial information) in the becoming of the actual world, encompassing both non-temporal everlastingness (as part of the formative elements) and temporal (recurrent) eternity (as ultimate actual entity operating in the world). Here we have a unique (paradoxical) abstract actuality, performing an unexcelled holomovement of holomovements, a unique solo, the Dance of dances.

β.1 How can something acting on such a transfinite scale keep the world-ground exclusively "potential" ? Being part of the virtual world-ground, absolute sentience is defined as an actual occasion ! Is God* the unique, all-encompassing exception ? If so, how to maintain God* does not influence the world in terms of efficient determination, i.e. physically ? The spirit of criticism shuns the return of Caesarean Divinity, a God forcing its beings to kneel, bow and grovel at its feet.

β.2 Does this mean God* poses a paradox ? Is Divine process para-consistent, implying the logic involving this unique actual occasion is not formal (or Aristotelian), with its linearity, but non-linear or able to efficiently organize certain inconsistencies in the fabric of conceptual reason itself ? Like quantum logic, not avoiding contradictions, but handling them in some way.

β.3 Is this God* the object of nondual (non-conceptual) cognition only ? Lacking a mathematically perfect logic is however not absence of logic or no logic at all. Process theology is a branch of transcendent metaphysics and therefore impossible to validate by empirico-formal fact or by conclusive (i.e. absolute) argumentative justification of whatever sort. Its rules are a hermeneutics of mystical poetry, as indicated by "*" in "God*" or "Clear Light*". Lack of conclusive argument is however not absence of terministic argument.

γ. God*, both potential & actual, both abstract & present, is the meeting ground of the actual world with the realm of the pure possibilities, one encompassing primordial matter and primordial information. This makes God* stand out in the world-ground.
Not in the sense of any Divine Creativity, but by the possibility of infinite reorganization and an absolute consciousness (of which cosmic consciousness is but an instance linked with a given world). God*'s choice for unity & harmony has direct bearing on what happens in the world, albeit not by direct efficient determination, as omnipotence would have it.

γ.1 Suppose omnipotence would be the case. The world-ground would then not be a mere abstract of possibilities (the possibility of the next actual occasion of the world), but the throne of an omnipotent God* able to hinder freedom, the creative outcome of the organisations of primordial information. Given freedom, and so novelty & creative advance, this cannot be the case. God* prehends all possibilities of energy & order, and merely gives relevance to these in the becoming of the world, but only acts by way of final determination, influencing (in terms of the domain of matter) physical outcomes only indirectly by luring the propensity-fields of momentum, not by the spectacular, miraculous or supernatural way of a "Deus ex machina". One may argue God* has an indirect bearing on the world, but then merely as a Grand Architect forced to consider the material with which the Magnum Opus is done.

world-system :
- world : actuality : temporal & actual : concrete actual ;
- world-ground : potentiality : non-temporal & abstract actual : primordial sentience, primordial information, primordial matter.

γ.2 God* is the anterior ground guaranteeing a very small fraction of all possibilities may enter into the actual becoming of the spatiotemporal world. Without God*, nothing of what is possible in terms of the world-ground would become some thing, change and create in the world. The order and creativity of what happens in the world are the result of a certain valuation of possibilities. However, God* is not the world. Nor is God* the realm of pure possibilities. The "Lord of Possibilities" is not primordial matter, nor creative order.

γ.3 Actual entities are concrete, while God* is an abstract actual entity. Creativity & the primordial quantum plasma are non-actual formative elements, and therefore "pure possibilities". God*, creativity and the quantum plasma are the formative abstracts of the world. God* plays with loaded dice.

δ. Consider God* as having two natures, called "primordial" and "immanent".

δ.1 Primordially, God* is the instance grounding the permanence and continuous novelty characterizing the world. This does not call for substance, but for an infinitely perfect & ongoing symmetry-transformation valuating pure possibility. Allowing metaphysics to conceptualize such a special actual occasion is opening up conceptual cognition to the standards of transfinite calculus and integrating the para-consistent treatment of paradox.

δ.2 The primordial nature of God* has no direct impact on the physical stream of efficient determinations of the world. For although an actual entity, God*'s activity is "abstract", namely in the aesthetic (artistic) process of valuating the available pure possibilities of the creative order and the infinite sea of energy. Although engaged in the factual becoming of the actual entities, God* cannot be conceived as a concrete actual entity, a fact among the facts possessing direct efficient (physical) determination. Ergo, God* cannot be omnipotent. God* is the sole "abstract" actual entity ! Nevertheless, besides being abstract, God* is also a Divine consciousness prehending all actualities here & now.
This is the immanent nature of the Divine.

ε. God*'s primordial nature is transcendent, untouched by the actual world. This aspect is the "Lord of All Possibilities". It offers all phenomena the possibility to constitute themselves. If not, nothing would happen. By way of prehensive valuation, God* brings on harmony in all possibilities, for actuality implies choice & limitation. But as all order is contingent, lots of things always remain possible. The "ideal harmony" is only realized as an abstract virtually, and God* is the actual entity bringing this beauty into actuality, turning potential harmony into actual aesthetic value. In this way, God* directs matter indirectly. While not omnipotent, God* remains super-powerful.

ε.1 For the order of freedom and responsibility to abide, omnipotence is logically impossible. Suppose God* were omnipotent, then why not prevent the Holocaust ? Due to so many powerful & concentrated evil Nazi intentions, God* could not immediately stop this bad architecture unfolding. The Divine is a Grand Architect, not the Creator of all things. Call this the Auschwitz-paradox : although an extremely powerful "Lord of Beauty", God* -confronting sentient beings exerting their "demonic" creativity- cannot prevent this extreme falsehood, ugliness & evil from temporarily abiding. Creativity itself is merely the material with which God* works, and cannot be manipulated "ex nihilo" or "ex cathedra". Likewise, the unacceptable and extremely unfortunate destruction of the innocent is the price paid for the freedom of destructive intent (consciousness) and disruptive togetherness (information & matter).

ε.2 Evil, both natural (based on material & informational collisions) and moral (based on bad intent), is the outcome of annihilating togetherness, bringing out egology. The presence of friction & entropy does not preclude God* from balancing out these unwanted effects in the future. Although at times evil is overpowering, in the end harmony always prevails. This is the Gandhi-principle.

ζ. God* does not decide, but lures, i.e. makes beauty more likely. There is no direct efficient determination at work here, but a teleological pull inviting creative advance. Given the circumstances, a tender pressure is present to achieve the highest possible harmony.

ζ.1 God* is the necessary condition, but not the sufficient condition for events. Classical omnipotence & omniscience are thus eliminated. God* knows all actual events as actual and all possible (future) events as possible. He does not know all future events as actual. This would be a category mistake.

ζ.2 God* cannot hamper creativity, nor curtail energy.

∫ Falsehood, ugliness & evil are the outcome of the clash of freedom, of the presence of creativity. They are as sad as they are inevitable.

η. Given all determining conditions determining things, the Divine purpose for each and every thing, and this on every rung of the ontological ladder, is to just be a contributor to the realization of the purpose of the whole, the unity of harmony in diversity. God* is the unique abstract actual entity making it possible for the multiplicity of events to end up in harmony, togetherness and unity. This aspect of God* is permanent (an ongoing holomovement or symmetry-transformation) & eternal (beginningless and nowhere). This holomovement never ends.

∫ God* is the Âdi-Buddha !
θ. The immanent nature of the Divine is God*'s concrete, omnipresent consciousness, actual near all worldly possibilities, actively valorising them to bring out harmony and the purpose of the whole, as well as conserving them as a totality, as a world, society, aggregate, event or actual occasion.

θ.1 God*, with infinite care, is a tenderness losing nothing. Hence, the Divine experience of the world changes. It always grows and can never be given as a whole. In this sense God* is always learning to untie the new knots, to unnerve unique conflicts of interest.

θ.2 God* is loyal and will not forsake a single actual occasion. Infinitely intelligent and prehending all-comprehensively, God*'s experience grows and is so part of history. God* is not self-powered and not omnipotent. God* is not an impassible super-object, not a super-substance, nor a "Caesar" disconnected from and looking down on the world, but, on the contrary, changed and touched by what happens insofar as the immanent nature goes.

Can process theology merely be another way to analyze the three Bodies of the Âdi-Buddha, the primordial Buddha representing the class of all Buddhas or awakened actual occasions "thus gone" (into holomovement) ? Are the differences between this Âdi-Buddha and the abstract concept of the "God* of process" not merely terminological & cultural ? The Truth Body of the Âdi-Buddha, the "dharmakâya", is a formless, undifferentiated, empty, nondual luminous field of creativity, out of which all possibilities arise. With a thoroughly purified conceptual mind entering the non-conceptual, such metaphysical poetry is not merely nonsensical, but the condensation of actual direct, nondual cognition. In itself, this Truth Body is unmoved and has no motivational factors to allow the Form Bodies to arise. The latter are "spontaneous" emergences. Likewise, creativity and God* are not causally related. God* does not create it, nor is creativity defined by what God* wants. Since beginningless time, the Truth Body is given, just as are unlimited creativity (primordial information) and the infinite (zeropoint) plasma (primordial matter). The Form Body ("rûpakâya") is an ideal form emerging out of the Truth Body for the sake of compassionate activity. In process theology, compassion is subsumed under beauty, for how can ugliness and disorder be compassionate ? God* makes certain definite forms possible by valuating the endless field of creativity using the key of unity & beauty. The Form Bodies are the two ways the Âdi-Buddha relates to ordinary, apparent events ("samsâra") : the Enjoyment Body is the ideal "form" with which the endless possibilities are given definiteness (God* as primordial), while the Emanation Body is the actual ideal "event" bringing this form down to the plane of physicality and concrete "luring" Divine consciousness (God* as immanent, manipulating propensities). The two natures of God* are not two ontological parts or elements, but two ways of dealing with the world. Primordially, God* is always offering possibilities and realizing unity, order & harmony. Consequentially, in these immanent ways, God* takes the self-creation of all actual events in this concrete world into account, considering what is realized of what is made possible. In these two ways, initiating & responding, permanent & alternating, we observe the bi-polar mode of God*, favouring a process-based, pan-en-theist approach of the actual world and its ground.

Chapter 2. Mental Pliancy & its Enemies.
Having established the general contours of this critical metaphysics of process, the quest for the most general, shared feature of the world and its sufficient ground may be prepared. What kind of mind is best able to do so ? A certain style and a transcendental logic embedded in a critical study of truth, goodness and beauty, definitely capacitate the conceptual mind by limiting it and thereby purifying it ; as it were preparing it for a speculation on process. Indeed, in the context of metaphysics, one of most fundamental mental operators is the constant remembrance of the impermanent nature of all phenomena, essentially devoid of self-settled substance ; a constant return to process, interdependence and relations, in other words to what is at hand hic et nunc. But be not mistaken ! This necessary preparation, offering a general overview or panorama, is only like clearing the ground, not yet the actual deed of planting the seed by nondual prehension. Therefore, to inspire the purified speculative mind, the latter must be made pliant. This is more than just being able to conceptually understand, but touches actionality, affectivity as well as all subtle and very subtle states of consciousness, like the direct experience of nondual states of mind. Without this pliancy, the mind is not open enough to attend to totalized objects and so generates a barren view. Optimalized, mental pliancy encompasses all modes of cognition. Mental pliancy is the property of a mind attending its objects exclusively as relations and no longer as relata. Then, the manifold of objects is treated with suppleness & subtleness. In the actual state of presence with what is happening right now, objects are never treated as ontologically isolated from other objects. Nonlocality is part of the hallucination, of the illusion (appearing before us). When this pliancy becomes ultimate, then the non-substantial, non-conceptual resting-place underlying conceptual logic & validity, at best attending truth, goodness & beauty, is at hand. This is a spacious, non-conceptual reality encompassing all phenomena likewise. Such enlightened mental pliancy is the ultimate manifestation of the dual-union of, on the one hand, process, and, on the other hand, lack of self-sufficient "substantia". Ultimate mental suppleness brings out the best of the mind : openness, depth, sharpness, acuteness, clarity, peace, power & wisdom. Speculative activity, being conceptual, cannot penetrate the nondual. Hence, for immanent metaphysics, only conventional mental pliancy pertains. To sufficiently inspire the conceptual mind so it constantly totalizes and grasps its objects with the highest possible degree of interdependence or relatedness, the speculative mind requires the highest possible degree of conventional mental pliancy. This generates the compassionate mind, actively engaged in actually ending the suffering of all other minds. Such a compassionate mind is needed to be able to produce or generate a valid immanent metaphysics. To explain the reasons why this is necessarily the case is one of the main goals of this chapter. Before achieving this, it must be clear what precisely the mind is all about. Three images assist in this : the stream, the mirror and the rainbow arching in space. Understanding these helps to establish a more stricter definition of mind as mere awareness & cognizing. • As a stream, the mind never stays the same, but neither is it without form or merely random. 
Indeed, what stays identical is not some solid feature establishing itself, but the architecture of change or kinetography of the mind. Different minds have therefore different kinetographics. Always moving, the mind is a dynamical phenomenon, not a static structure or architecture. Change due to constant momentum is the main characteristic of the stream. Such change, relating to all possible features of the mind, points to the mind being without any self-settled element or property. The mind is therefore empty of its own nature but other-powered, i.e. dependent on determinations & conditions of extra-mental objects. To the conceptual mind, succeeding moments of the stream constantly seem to flow in a temporal arrow from past via present to the future. This is the Arrow of Time. Such a mind attends itself in a special way, namely by positing a constant focus or point of reference & identity, called "I", "ego" or "self". The empirical ego is invented by the conceptual mind to position a certain contraction of awareness to a single moment of the stream. Awareness, in principle extended to the whole stream, is reduced to what happens on a small raft travelling on the stream ... From the vantage point of the ego on this simple flat boat, a temporal arrow pertains and the difference between mental and extra-mental is established on the basis of this seemingly fixed reference. However, if the limitations on attention imposed by the raft are left behind, and attention plunges into the stream to dive to its depths, it will eventually hit the original, very subtle layer of the mind. This is the underlying non-conceptual level, one encompassing the stream as a whole, a completeness devoid of any fixed or self-settled object. The mindstream or mental continuum is shared by all sentient beings possessing a mind. The image of the stream accommodates the view on all possible minds, except the nondual one. • As a mirror, the mind is empty of itself but merely reflects objects different than itself. Empty of itself, like the surface of the mirror, the mind is without memory, merely actual reflectivity. Without luminosity, reflections cannot appear on the surface of a mirror. The root of the mind, the very subtle mind is Clear Light*. Moreover, indifferent of what kind of objects appears, a Buddha or a pig, a mirror merely reflects without interpretation, i.e. without judging its attended objects. Interpretation is the work of a certain kind of mind, a concave or convex mind refusing to return to the Euclidian plane of the original, fully functional uncurved mirror-surface. This is the conceptual mind, distinguishing between objects turned inward (subjectivity) or outward (objectivity), and thereby establishing its special characteristic : afflicted duality, or a state of mind causing emotional afflictions and mental obscurations. The Factum Rationis and concordia discors brought to bare earlier are but special instances of this overall afflictive duality of the conceptual mind. This allows for the distinction between pre-conceptual, conceptual and non-conceptual minds. The first leads to innate self-grasping, the second to self-cherishing and acquired self-grasping. The image of the mirror accommodates the view on the nondual mind and none other. • As a rainbow, the mind results from complex determinations & conditions. As the tiny water droplets reflecting in the sunlight, it takes on the colour of the glass in which it is poured. 
We observe a specific hue and forget this is merely a refraction or curvature of white light. A given frequency is always the absence of all other frequencies. Like a pure transparent crystal or diamond, the mind reflects what it attends. The conceptual mind does this in terms of its specific colours, the non-conceptual mind in tune with the brilliant whiteness of the Clear Light*. The rainbow seems solid and real, but in truth it is merely a spacious phenomenon. As the rainbow seems to connect Earth with heaven, the mind is the only bridge available to cross the chasm separating the conventional from the ultimate. Because of the mind, the end of suffering or salvation from all afflictions and obscurations is possible. Without this true peace, the play of seemingly endless suffering endures. Just as a rainbow is a set of colours, so the mind is a set of possibilities. Just as the rainbow disappears all of a sudden, so states of mind constantly change, at is were leaving no trace. The image of the rainbow accommodates the view on all minds, i.e. the simultaneity and unity between all conventional and all ultimate minds. Buddha mind. The enemies of mental pliancy are ignorance and afflictive emotions. The former betrays a lack of insight in the true nature of phenomena, the latter manifests as the fire of the existential dialectic between exaggerated attachment or afflictive desire and revulsion or hatred. Ignorance superimposes a false idea, promotes a false ideation Cf, designates a wrong view. This impacts all possible cognitive acts. Afflictive desire & hatred denote affective activities acting as root-causes for all subsequent major afflictions of the emotional mind : cruelty, greed, stupidity, passion, jealousy & pride. This directly affects intersubjectivity and therefore our degree of civilization. Studying these emotional states, one discovers their pivot is the notion of an enduring phenomenon. Human beings acquire this habit as the result of attributing a concept or a name to anything observed. Animals have a non-conceptual innate sense of self. This instinct is however not intuition born out of the purification of the conceptual mind, the only valid basis for attaining the nondual mode of cognitive functioning. Instinct is merely the active side of the blue-print of collective sedimentations of emotional problem solving activity (dealing with tribal togetherness and belongingness), and so evolutionary. Instinct is the mental operator association with ante-rationality. Designating a concept is the activity of the conventional (conceptual) mind. In itself this is valid insofar as logic & functionality go. Reification being the culprit, duality and conceptuality, to complete the mind, should not be abandoned. Cognitive reification is attributing self-power to mental and/or sensate objects. Affective reification is either grasping at the egological importance of the empirical ego or coarse mind (self-cherishing), existentially placing the ego and its own before all other matters, or designating the self as possessing a permanent core. Afflictive emotions and their devastating effects on the individual, on nature & society can be countered by the merits of compassion (or the applied science of universal participationism). Cognitive obscurations are to be lifted through wisdom, realizing the ultimate nature of all possible phenomena. 2.1 Definition of Mind. Western philosophy has no clear-cut definition of mind. 
This is due to the fact that its attention is largely curved outward, and this in a vain attempt to uncover an ultimate objective self-sufficient ground. Even when gazing inward, as in the case of Plotinus, Augustine, Cartesius or Husserl, the aim is to uncover the "imago Dei", the substantial ego or the eidetic core, i.e. an ultimate subjective self-settled bedrock. This largely convex mentality of the West drives attention away from the actual act of perception itself, and brings the duality of perceiver & perceived to the forefront. This duality lies at the basis of all important dichotomies. Perceiver and perceived are interlocked, each hiding behind the other. A functional definition of mind does not focus on these two poles of any relationship, but on the relationship itself, i.e. on the act of perceiving. First identify the subject of experience, then ask about the objects this subject attends, for the subject is always an object-possessor. Likewise, when the object of experience is brought on stage, there is always a subject designating it. Both inward and outward curvatures of attention make us aim at something other than the mind fully present in the act of cognizing. The nature of mind is revealed by presently recognizing the perceiving, the actual act of attending itself, not by fixating the attendee or the attended. That is all. Watch the attending, nothing more. This has to be pointed out, then practiced by generating, maintaining and uniting all acts of mind with this naked presence & pure awareness. § 1 Awareness, Attention & Cognizing. α. Awareness is a verbal noun pointing to the fact that mind is always being aware of something, indicating it is always turning something into an object. This does not necessarily entail a conscious act of will. In every moment of the mindstream, the mind is engaged with or relating to something. This arising is a cognitive engagement occurring simultaneously with thinking. It is not necessarily conceptual nor conscious. Babies are also aware, but not verbal. Hostility can be experienced by others without it being conscious to those in whom it appears. Indeed, the unconscious mind is also aware. α.1 When the mind and its object, or experience and its contents, always simultaneously come together as one entity, pure or primordial awareness is at hand. This is pure presence. It is non-conceptual and nondual and can be likened to being present with every arising object, whatever it is, and this without adding anything to it (conceptual elaboration) or taking away anything from it (conceptual elimination). To permanently realize this, so we are told, is the "great seal" ("mahâmudrâ") of enlightenment.
måndag 31 augusti 2009 Illusions of Theories of Everything The ultimate dream of theoretical physicists is a Grand Unified Theory GUT or a Theory Of Everything TOE as a mathematical equation in the form of a system of partial differential equations, the solutions of which would represent all there is in the World. We find this dream partially realized in specific areas of mechanics and physics identified by a specific system of partial differential equations, such as • fluid mechanics: Navier-Stokes equations • quantum mechanics: Schrödinger's equation • celestial mechanics: Newton's equations of motion/gravitation. One can argue that in a certain sense all of celestial mechanics, including the motion of the planets, comets, asteroids etc. in our Solar system, is represented as solutions to Newton's equations of motion/gravitation, that all of quantum mechanics is represented as solutions of Schrödinger's equation, and that all of fluid mechanics is represented as solutions of the Navier-Stokes equations. We can thus view Newton's equations, the Navier-Stokes equations and Schrödinger's equation as different forms of TOE, with Everything representing the totality of a certain part of the World, like a continent of the Earth. Newton seemed to be able to describe all of celestial mechanics by his equations of motion and gravitation in his TOE, which made Newton immensely famous and attributed with godlike power. Similarly one can argue that all of fluid mechanics can be described by the Navier-Stokes equations, and all of quantum mechanics by Schrödinger's equation, as different forms of TOE. This can give a physicist in control of a TOE the illusion of superhuman power, but there is a hook: Even if the equations can be written down in a couple of lines, like the Navier-Stokes equations, they can be impossible to solve by analytical mathematics representing solutions in terms of elementary functions such as polynomials and trigonometric functions: Analytical solutions of Newton's equations are known only for the two-body problem of one small body like the Earth orbiting a big body like the Sun. Already three bodies are beyond analytical representation, not to speak of the turbulent solutions of the Navier-Stokes equations. If we stop here, a TOE in the form of the Navier-Stokes equations would seem to be a theory of nothing rather than a theory of everything. This would be like a jeweler with diamonds still to be captured from the rock. Is a jeweler without jewels still a jeweler? However, one can compute digital solutions of the Navier-Stokes equations using computers, for specific choices of data, and in this way gain insight case by case using a Computational Theory of Everything. Some diamonds thus can be brought to the surface and put into rings to be admired. But we cannot get full insight in one shot. We cannot capture all diamonds in one day in a true TOE. One can argue that specific knowledge of e.g. fluid mechanics comes from specific computational solutions to the Navier-Stokes equations for specific data, and that fluid mechanics is the totality of such specific knowledge. I give examples in my knols on fluid mechanics. So even if a GUT or TOE unifying quantum mechanics and gravitation were written down as one set of equations, e.g. in the form of string theory, the main task of computing and studying specific solutions would remain.
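To make the case-by-case nature of such computation concrete, here is a minimal sketch (my own illustration, not taken from the post) of one digital solution of Newton's equations for three bodies, the simplest celestial problem already beyond analytical representation. The masses, time step and initial data are arbitrary choices in dimensionless units; a serious computation would of course need error control.

```python
# Minimal sketch: stepping Newton's equations for three gravitating bodies
# in the plane with a leapfrog scheme. All numbers are illustrative choices
# in dimensionless units (G = 1, unit masses), not data from the post.
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])            # three unit masses (an assumption)

def acceleration(x):
    """Newtonian gravitational acceleration on each body; x has shape (3, 2)."""
    a = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = x[j] - x[i]
                a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return a

# Arbitrary initial positions and velocities (illustrative only).
x = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
v = np.array([[0.0, -0.4], [0.0, 0.4], [0.3, 0.0]])

dt, steps = 1e-3, 20000
for _ in range(steps):                    # leapfrog: kick, drift, kick
    v += 0.5 * dt * acceleration(x)
    x += dt * v
    v += 0.5 * dt * acceleration(x)

print(x)   # positions after 20 time units: one specific digital solution
```

Each run with new data brings up one more diamond; no finite number of runs exhausts the equations.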
We can make a parallel with Darwin's theory of evolution based on an equation expressing genetic variability + selection by survival of the fittest. Anybody can formulate this equation and the non-trivial part of evolution theory is the study of specific solutions. Still 150 years after Darwin,  Richard Dawkins struggles hard to compute specific solutions of Darwin's equation... One can even argue that Darwins TOE of evolution as variability + selection, is trivial in the sense that it can be written down in one line, while the determination of specific solutions such as amoebas and human beings is highly non-trivial. We often hear physicists claim that the atomic electron structure of the periodic table of elements is a consequence of quantum mechanics, but at a closer examination we find the electron structure for atoms with more than one electron, that is all elements except Hydrogen, is unknown as a solution to the Schrödinger equation, see my knols on quantum mechanics. Likewise, one can argue that the secret of turbulence is hidden as solutions to the Navier-Stokes equations, a secret closed to analytical solution but open to exploration by computation, as well as the n-body problem of celestial mechanics. The basic differential equations of celestial, fluid and quantum mechanics express basic physical laws of balance or conservation, such as Newton's 2nd law, conservation of mass, momentum and energy. These physical laws are what a blind Nature obeys in its evolution from one moment of time to the next in some form of analog computation, which can be mimicked by digital computation. To evolve according to physical laws does not require any intelligence, just work. In rare cases human intelligence allows shortcuts to analytical solutions, but in general only brute force computation is effective. This is what makes the world go round, whether it is understood by someone or not. torsdag 27 augusti 2009 Will Mathematicians Save the World, Again? The free world was saved from the threat of both nazism, fascism and communism, because of free world mathematicians were able to compute both how to make nuclear bombs and run Star Wars.  Today we are told that mathematical climate models predict catastrophical global warming by CO2 emission from burning of fossil fuels, which represent 75% of the total energy production in the World. Based on these mathematical predictions US President Obama stated the G8 meeting in Aquila in July: • The G8 nations agreed that by 2050, we'll reduce our emissions by 80 percent and that we'll work with all nations to cut global emissions in half.  Realization of these goals will require a major reorganization of the industrial world and threatens to keep the developing world from development. The necessity of the drastic actions required come from predictions of mathematical climate models, and the question that the leaders of the world must pose concerns the reliability of these predictions.  This is a question for mathematicians. Are mathematicians ready to once again save the World from catastrophy, by once again focussing on the most urgent question facing mankind? Let us see what the International Union of Mathematicians IMU has to say about global warming? Nothing it seems. Strange! Are mathematicians not willing to save the World this time? The last International Congress of Mathematicians ICM organized by IMU in Madrid in 2006, had no section on mathematical climate modeling, and the upcoming IMC in Hyderabad, India, in 2010 seems no better. Why? 
To Limit or Not to Limit Global Warming to 2 degrees C? Swedish Minister for the Environment Andreas Carlgren leading the EU delegation to Washington DC, USA, on 23–26 August for climate negotiations started out optimistically with the following message to the US: • It is vital that the US is involved in the next climate agreement if we are to manage climate issues.  • The EU and the US have a common interest and task in helping to fund adaptation measures and technology transfer to developing countries. This is crucial in order to enable the countries of the world to conclude an agreement in Copenhagen in December • The right conclusions must now be drawn for how the temperature rise is to be kept below 2 degrees Celsius. However, today Carlgren reports pessimistically:  • Some wine some water...the pace of the negotiations is slow, and they need a kick-start at political level if they are going to be concluded in Copenhagen. EU led by Sweden wants to save the world from overheating, but the US, China and India are are slow to jump on the wagon. Why? Is it because they are not convinced by IPCC? Or are they convinced, but nevertheless choose to march on towards catastrophy? Is Obama stepping back from his bold plans for his presidency and his promise at the meeting in Aquila in July: Is the Copenhagen meeting collapsing even before starting? Sweden and Carlgren has a tough job to do...Maybe it is not so easy to convince rich people that the have get poorer and poor people that they have to stay poor... Listen to Roy Spencer Testimony to the US Senate Environment and Public Works Committee onsdag 26 augusti 2009 tisdag 25 augusti 2009 New Flight Theory is Taking Off Our new theory of flight is starting to get appreciated: Diego Gugliotta, professional teacher of aerodynamics to pilot students, expresses • I had a look to your Mathematical Theory of Flight, which indeed is very interesting. In my leisure time I'm a glider pilot and I also teach pilot students in aerodynamic. Professionally I'm an engineer educated at Aalborg University (thermodynamics, and a M.Sc. in system engineering). • After reading your paper I really don't know what to do with my teaching. It is my impression that it is very difficult to know what to rely on when explaining why gliders fly at all , and it's obvious that lesson number one shall be by definition "why does it fly". The last two years I adopted the Newton-Bernoulli approach, combined with Kutta-Zhukovsky's circulation theory, without really knowing how to explain such a circulation. I also experienced, like you also mentioned in your paper, that not even NASA explained the theory of lift. • Your theory gives sense, and I'm looking to adopt it as the right theory of lift in my teaching, but now to the 1 million question: How do I explain a 17 year old glider pilot student with only basic school education the theory of lift? any good idea? The reaction of Diego Gugliotta supports our experience that not even NASA can explain why it is possible to fly, as illustrated on my blogs listed under theory of flight including interviews with NASA Glenn Research Center and my flight expert collegues at KTH. An answer to the question by Diego can be: • Redirection of the incoming flow down will give a reaction up = lift. The flow gets redirected if it does not separate on the top of the wing before the trailing edge. Separation is only possible at a stagnation point. 
Since the flow is only slightly viscous and thus slides along the wing surface with small friction, stagnation cannot occur before the trailing edge. Hence there is lift. OK? Note that it is crucial that the flow has small viscosity: You cannot glide in syrup. Diego answers: • As a further comment you may note that I don't believe aerodynamics are anything for pilots. I believe I should adopt to explain how, and not why: -HOW: It's a fact that there is a differential pressure between the upper and the lower part of a wing. It's a fact, and it's very easy to demonstrate even in a classroom, that differential pressure times area ends up with a force. • -WHY: It's a fact as well, that Bernoulli holds, and that Newton's 3rd law also holds. However, at least for me, circulation is not a fact, and there is where all my "whys" end up in nonsense. It doesn't necessarily mean that it doesn't hold. It's just not me the one to disclose this eventual fact, as it requires time, dedication and research; exactly the three parameters you and Johan utilize in your work. You tried to disclose the circulation fact, but in your well documented paper, you ended up rejecting this theory. Your work gives sense, and I hope to see soon the reaction of other researchers working in this field, so they can explain to someone that indeed can work out Euler's equations, the theory of lift. Thank you for your work. Thanks Diego. I think our new theory can be presented to pilots and can also be understood and appreciated by pilots, because it is a correct understandable theory, and nothing is more practically useful than a correct understandable theory. Right? Obama and Reinfeldt Saving the World Obama announced his presidency plans in New Direction on Climate Change: • Few challenges facing America and the World are more urgent than combatting climate change. • The science is beyond dispute, the facts are clear: • Sea levels are rising, coast lines are shrinking, record drought, spreading famine and storms that are growing stronger with each passing hurricane season. • Climate change and our dependence on foreign oil, if left unaddressed, will continue to weaken our economy and threaten our national security. • We will invest $15 billion each year to catalyze private sector efforts to build a clean energy future: We will invest in solar power, wind power and the next generation of biofuels. • This investment will not only help us reduce our dependence on foreign oil, making the US more secure, and will not only help us bring about a clean energy future saving the planet, but it will also help us transform our industry and steer our country out of the economic crisis by creating 5 million new green jobs that pay well and cannot be outsourced.
• Even though the climate change so far is just beginning to be noticable, there is a lot more in the pipeline. • If we follow the present course for another 10 years, we will have a different planet: No ice in the arctic, sea level rise of 6 meter and extermination of species. It's an urgent problem to begin to address. • Greenland and Antarctic ice sheet decreasing. Sea level is now rising 35 cm per century but the concern is that it is a very nonlinear process which could cause a sea level rise of 5 - 6  meters over a century. If we continue with business as usual we will get global warming of 2-3 degrees C.  We need to get on a different track within the next few years. • We cannot burn fossil fuels unless we capture the CO2. Our Prime Minister Fredrik Reinfelt also believes in James Hansen, and Obama, as he prepares for the Copenhagen Climate Council in December: • I have on the behalf of the EU wellcomed the new signals and leadership now shown on climate change from the US adminstration. We are following very closely what they are intending to do and hoping to come together in our efforts. I think it is extremely important with the incoming EU presidency of Sweden to be very active in talks and in working together between the EU and US on this issue. But the science of global warming is not beyond dispute, as I have discussed on previous blogs. Suppose Obama gets to know that the science is disputed and that the facts are not clear. What would he then say? And what would Reinfelt then say? Would that change the subject of the talks? But Obamas idea of saving at the same time the US from both the economical crisis and energy security threats, and the World from burning up, is clever, maybe even too clever... måndag 24 augusti 2009 Reality of the Virtual vs Virtual Reality Slavoj Zizek suggests to complement the concept of virtual reality as reproduction of reality, with the concept of  Zizek compares virtuality of the real with reality of the virtual with examples from politics, sociology, psychoanalysis and also physics, which connects to my knols Simulation TechnologySimulations of Wittgenstein and Hyperreality in physics In his discussion of the concept of reality of the virtual, Zizek uses the Lacanian triad of imaginary-symbolic-real applied to the concepts of virtual and real: • imaginary virtual • symbolic virtual • real virtual   • imaginary real • symbolic real • real real which Zizek characterizes, in short, as:  • imaginary virtual: filtered virtual image of e.g. other people  • symbolic virtual: beliefs which have to be virtual to be operative, like paternal authority, Santa Claus, democracy.  • real virtual: to be defined, the jewel of the collection and, recalling the Lacanian definition of real  = that which resists symbolization, • imaginary real: images too strong to be directly confronted • symbolic real: scientific formulas like quantum physics, which work but which appear to be meaningless with regard to our ordinary notion of reality • real real: core of real, obscene shadow of symbolic real, undertext of e.g. Sound of Music:and and  Shortcuts.  Zizek recalls the decomposition of Donald Rumsfeld of knowledge into known- knowns, known-unknowns and unknown-unknowns. Zizek then completes with unknown- knowns = things we don't know that we know = unconscious, which he seems to view as a form reality of the virtual.  
To explore the relation between reality of the virtual and virtuality of the real, Zizek considers Einstein's theory of gravitation connecting mass to curvature of space, which can be viewed in two ways: • mass defines curved space = real defines virtual = virtual reality • curved space defines mass = virtual defines real = reality of the virtual Similarly, Newtonian theory of gravitation, connecting mass to gravitational potential, can be viewed in two ways: • mass defines potential = real defines virtual = virtual reality • potential defines mass = virtual defines real = reality of the virtual as discussed in The Hen and the Egg of Gravitation, with the message that it is not so clear what is most real: mass or gravitational potential. It may depend on our senses. In psychoanalytic terms the connection between trauma and symbolic space can be viewed as • trauma deforms symbolic space = virtual reality • deformed symbolic space generates trauma = reality of the virtual or in fascism/antisemitism • Jews deform social space into social antagonism = virtual reality • social antagonism deforms social space into antisemitism = reality of the virtual Evidently, a relation of cause-effect is represented by the order of real-virtual, with the usual way of thinking being that the real precedes the virtual. But Zizek says that the cause-effect relation can be turned around, as in the theory of gravitation, in which case the virtual precedes the real. Finally, if the cause-effect relation is unclear or irrelevant, virtual reality = reality of the virtual. Of course there is a connection to the body-soul problem: the soul is not only a representation of reality, but the soul lives its own life and generates its own reality. Further, there seems to be a connection between reality of the virtual and hyperreality = image without real origin. In the context of a mathematical model like the Navier-Stokes equations expressing physical laws of balance, • digital computational solutions of the NS equations are representations of reality = virtual reality • reality is created by analog computational solution of balance laws = reality of the virtual. lördag 22 augusti 2009 Penguin Logic of IPCC Vincent Gray summarizes his experience as expert reviewer for the Intergovernmental Panel on Climate Change IPCC in The Triumph of Double-Speak as follows: • Despite over 20 years of effort and four major Reports, the IPCC has not succeeded in providing any evidence that increases in greenhouse gases are having a measurable effect on the climate. Why is it, then, that so many people believe that they have done so? The answer lies in their subtle use of doublespeak, the technique of creating confusion by manipulation of language. This newsletter shows how they have confused and twisted the meanings of words in such a way as to create triumph out of failure. If what Gray claims is true, then the Copenhagen Climate Council based on the IPCC reports does not have to open, and the leaders of the world can focus on solving real problems instead of creating real problems by inventing imaginary problems. Let's see if Gray's analysis is correct by going to the documents and then focusing on the Fourth Assessment Report AR4 from 2007. In particular let's check if it represents a form of Science of Penguin Logic or pseudo-science.
AR4 states in the Technical Summary: • While this report provides new and important policy-relevant information on the scientific understanding of climate change, the complexity of the climate system and the multiple interactions that determine its behaviour impose limitations on our ability to understand fully the future course of Earth's global climate. • The areas of science covered in this report continue to undergo rapid progress and it should be recognised that the present assessment reflects scientific understanding based on the peer-reviewed literature available in mid-2006. • Equilibrium climate sensitivity is likely to be in the range 2°C to 4.5°C with a most likely value of about 3°C, based upon multiple observational and modelling constraints. There is a good understanding of the origin of differences in equilibrium climate sensitivity found in different models. Cloud feedbacks are the primary source of inter-model differences in equilibrium climate sensitivity. • The overall response of global climate to radiative forcing is complex due to a number of positive and negative feedbacks that can have a strong influence on the climate system radiative balance. The key quantity is climate sensitivity, measuring global warming vs doubling of the CO2 level in the atmosphere: IPCC states that it is likely to be in the range 2° to 4.5° C, with, according to the IPCC Uncertainty Guidance, likely = probability > 66%. To help interpretation of this statement, IPCC informs us: • Finally we come to the most difficult question of when the detection and attribution of human-induced climate change is likely to occur. The answer to this question must be subjective, particularly in the light of the very large signal and noise uncertainties discussed in this chapter. Some scientists maintain that these uncertainties currently preclude any answer to the question posed above. Other scientists would and have claimed...that confident detection of a significant anthropogenic climate change has already occurred... This can be interpreted as a reservation that convincing scientific support of the IPCC climate sensitivity estimate is lacking. But using doublespeak it is also interpreted by IPCC as something close to a truth: • Most of the observed increase in globally averaged temperature since the mid 20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations. We see that IPCC oscillates between not-knowing: the most difficult question is if human-induced climate change is likely to occur? and knowing: is very likely due to...anthropogenic greenhouse gas. This is an extreme form of doublespeak, which is also practiced by modern theoretical physicists in search of a Theory Of Everything saying nothing about the physics of the world we live in. Knowing everything and nothing at the same time! Let us analyze the logic of the key statement of IPCC: • Climate sensitivity between 2° and 4.5° C with probability > 66% = likely. Suppose we compare with the following possible statement by IPCC: • Climate sensitivity between 1° and 10° C with probability > 95% = extremely likely. This statement could seem more alarming by threatening with an extreme of 10° C combined with extremely likely. IPCC could take one step further to • Climate sensitivity between -10° and +20° C with probability > 99% = virtually certain. which could seem even more alarming. We seem to be led to the conclusion that IPCC uses Penguin Logic. What do you think?
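The arithmetic behind this escalation is easy to check. Here is a small sketch (mine, not anything from IPCC): assume, purely for illustration, a lognormal probability distribution for the climate sensitivity S with median 3° C and a log-scale spread chosen so that the 2° to 4.5° C interval carries about 66% probability. Widening the stated interval then drives its probability towards 1 while the statement itself says less and less.

```python
# A small sketch (my own, not from the IPCC report): an assumed lognormal
# distribution for climate sensitivity S, with median 3 C and log-scale
# sigma 0.42 chosen only so that P(2 <= S <= 4.5) comes out near 66%.
from math import erf, log, sqrt

MEDIAN, SIGMA = 3.0, 0.42      # illustrative assumptions, not IPCC numbers

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob(lo, hi):
    """P(lo <= S <= hi) for lognormal S; there is no mass below 0."""
    z = lambda x: log(x / MEDIAN) / SIGMA
    low = 0.0 if lo <= 0 else Phi(z(lo))
    return Phi(z(hi)) - low

for lo, hi in [(2, 4.5), (1, 10), (-10, 20)]:
    print(f"P({lo} <= S <= {hi}) = {prob(lo, hi):.3f}")
# Output is roughly 0.67, 0.99, 1.00: the wider the interval, the closer
# its probability gets to 1 and the grander the label, while the statement
# itself carries less and less information.
```

A higher probability label attached to a wider interval is thus not a stronger scientific claim, only a vaguer one.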
Compare also Sheep Herd Accuracy. fredag 21 augusti 2009 Feedback, Sensitivity, Cancellation and Duality Concerning the climate sensitivity of current climate models, IPCC states: torsdag 20 augusti 2009 Malthus is Back Again Mathematics can be a powerful tool:  In his famous treatise An Essay on the Principle of Population first published in 1798, Thomas Robert Malthus presented a mathematical analysis predicting exponential population growth in time, while food supply would have a much slower linear growth  in time, later referred to as Malthus' Principle of Population. Malthus thus predicted mathematically an inevitable collapse of human civilization if actions were not taken to limit population growth.  But the mathematics of Malthus was wrong: populations did not grow exponentially and food supply not linearly: human civilization did not collapse. Not yet at least... Nevertheless, Malthus is today back again: Based on mathematical climate models the UN International Panel of Climate Change IPCC predicts exponential growth of the global temperature caused by burning of carbonbased fuels, which will lead to a collapse of human civilization on an overheated Earth, if actions are not taken to limit CO2 emission, now.  Exponential growth is thus feared, but our capitalistic society is driven by dreams of exponential growth at x% per year of • GNP  • investments  • income • house prices... But steady exponential growth is not possible, because it will surpass any limit in finite time: The exponential growth of a financial bubble is eventually followed by a financial crisis until the next bubble can start to grow, exponentially. The overall growth is not exponential because of negative feed-back: The bubble is follwed by a compensating crisis. Exponential growth represents positive feed-back: The more it grows the more rapidly it grows. A dynamical system with positive feed-back exponential growth is unstable and in order to survive without explosion has to develop a different dynamics somehow curbing the growth by stabilizing negative feed-back. This is the nature of turbulence which is a fundamental aspect of climate. Also compare with the climate feed-back analysis by Richard Lindzen: • The earth’s climate (in contrast to the climate in current climate mocdels) is dominated by a strong net negative feed-back. Climate sensitivity is on the order of 0.3°C, and such warming as may arise from increasing greenhouse gases will be indistinguishable from the fluctuations in climate that occur naturally from processes internal to the climate system itself. The mathematics of exponential growth can be captured analytically and thus is attractive to a mathematical theoretical mind, but it is too simplistic to capture the dynamics of complex systems such as human populations or turbulence.  Similarly, the IPCC mathematical climate models are most likely too simplistic to capture the dynamics of a the complex system of global climate. Malthus' Principle of Population and the IPCC mathematical models seem to have the same degree of realism.  In the previous blog I noted that global climate and human population now connect on the agenda of the Optimum Population Trust endorsed by Sir David Attenborough: • World population is projected to rise from today's 6.8 billion to 9.15 billion in 2050. The World Population Clock is ticking.  We are rapidly destabilising our climate and destroying the natural world on which we depend for future life. 
• The West should provide money to promote contraception in the Third World and poor countries would be denied 'carbon allowances' unless they control their numbers.  • It is time we abandoned this crazy taboo. Is this also the agenda of the upcoming UN Copenhagen Climate Council? To limit the number of emitters according to Malthus' Principle of Population? To deny poor people carbon allowances unless they control their numbers. Is Malthus back again?  What do you think? onsdag 19 augusti 2009 Authority vs Science: Unreason vs Reason Leading MIT atmospheric physicist and climatologist Richard Lindzen in a talk on The Politics of  Global Warming at the International Conference of Climate Change, New York City, March 8 2009, reminds us about a few simple truths concerning science in general and the science of climate modeling in particular: • Endorsing global warming as scientist, just makes life easier. • Most arguments about global warming boil down to science vs authority. For much of the public authority will win, since they do not want to deal with science. • The climate alarm movement has control of carrots and sticks; most funding for climate would not be there without alarm. • What can be done is to better understand science, in particular the logic of science. Actually, science and logic is often not that hard to understand.  • Current climate models have large positive feed-backs with thermal radiation decreasing under increasing seasurface temperature, while Nature most likely has negative feed-back.  Getting people including many scientistst to understand this, is crucial.  • The Global warming issue has done much to set back climate science, in particular the notion that climate is one-dimensional totally described by some fictitious global mean temperature and some single gross forcing a la CO2 level, is grotesque in its oversimplification. Lindzen tells us something  important: Good science and scientific logic can be understood by many. Authority cannot win against science in the long run.  However, in the short run it can, as is illustrated in the previous blog: Evidently Sir David Attenborough has little understanding of the mathematics of climate models, and thus easily can be convinced that predictions of climate models is the truth: If climate models show global warming up to 10 degrees Celsius over the next hundred years, because the accuracy is not better than 10 degrees, then we have to take action to prevent a certainly dangerous increase of 10 degrees. But is it reasonable to keep poor people from increasing their standard of living because climate models are inaccurate? Is it? Note that Sir David Attenborough has joined the Optimum Population Trust with the following modest proposal on its agenda: • It is time we abandoned this crazy taboo. The idea to limit energy consumption of poor people until they have become rich enough to have few children is amazing in its inhuman Moment22 stupidity. Is this also a result of climate models? What does Sir David Attenborough say? Maybe it is time for an interview... måndag 17 augusti 2009 Sir David Attenborough: The Truth About Climate Change The science of climate modeling predicting global warming by anhropogenic emission of CO2 from carbonbased fuels is nicely summarized by the legendary Sir David Attenborough in the Truth About Global Warming: • The key question is: How can we distinguish between climate variations induced by natural causes and by CO2 emission? 
• The key thing that convinced me was a temperature graph prepared by climate scientist Professor Peter Cox showing that a climate model with CO2 emission included can reproduce the temperature during the 20th century better than without. • So there you have it: It seems little doubt that this recent rise, this steep rise in temperature, is due to human activity. • It is clear that without the action of human beings there would have been far less temperature change since the 1970s. The science of climate change is the science of climate modeling. Sir Attenborough became convinced by looking at a graph produced by running a certain mathematical climate model with and without a certain greenhouse effect included. But Sir Attenborough did not ask the natural question:  • How reliable and accurate are climate models?   Suppose Sir Attenborough was informed that climate models are not reliable, that their accuracy is unknown, would that change his conviction based on a single graph being the output of a climate model? Suppose he was informed that climate models are constructed so as to give the result of the graphs, a graph which is the result of modeling activity of human beings. What would Sir Attenborough then say? Compare previous blogs on climate simulation. lördag 15 augusti 2009 Swedes in the Lead of Climate Control fredag 14 augusti 2009 Al Gore in the Kingdom of Denmark • 150 years ago the scientist John Tyndall in UK discovered for the first time that CO2 intercepts infrared radiation/heat. • From his discovery followed a great deal of work that led to growing concern that from the rapid accumulation of CO2 in the atmosphere, the build up of heat in the atmosphere and ocean would reach dangerous levels. • This year an important event will take place in this hall in this city, in this Kingdom. All nations will gather in an effort to secure a treaty limiting the accumultation of greenhouse gases and the emissions that lead to this accumulation. • We are now facing three interrelated crisis: climate, financial and energy security, all three linked by a common thread to an absurd overdependence on carbonbased fuels. If we grab hold of that thread and pull it, these crisis begin to unravel and we hold in our hands the answer to all three: • A historic shift from expensive vulnerable polluting carbonbased to new sources of energy that are free for ever: wind, solar and earth. In Denmark now 1/4 comes from wind • CO2 is tasteless, odorless, colorless and has no price tag. It does trap heat. • More increases are in store because the heat built up in the oceans that will be released into the atmosphere. • We must put top urgent priority on preventing the catastrophy that would befall us if we did not act. • The changes that are now needed will require participation and leadership from all parts of civilization. • It is critically important that we get the rules of the market place correct and that the signals we derive from the market, are ones that accurately reflect human values, so that we can make decisions... that will allow us to live our lives in ways that are in keeping with what we know to be right. 
• There is a very simple test of what is right where the climate is concerned: If the next generation looks back at this year and sees around them the worsening catastrophies that were foretold if the world did not act....If they look back on us and ask: What were they thinking, why did they sit on their hands, why did they choose not to take action to avoid the horrendous catastrophy that the scientific community spelled out to them, and told them would happen if they did not act. • If they instead see around them in their world millions of good green jobs, a spirit of renewal, a sense of optimism and hope, a feeling that, yes we can deal with the problems. If they look back with gratitude, this means we have done our job. • But it is not much time; we have to do it this year, not next year. The clock is ticking, beacuse mother Nature does not do bail-outs:  • We have already as predicted seen increasing droughts, destructive fires, stronger storms, record flooding, spread of tropical diseases... • But there is good news: The worlds business community and leaders are beginning to respond. • Our policies in the US are changing: President Obama within on one month passed the largest green renewable energy stimulus bill in history.  • Every nation and business has a leadership role to play. In short, Gore first claims that scientists have shown that: • CO2 emission causes global warming, which will cause horrendous catastrophy, and then makes a political call: • The Leaders of the World have to act and limit CO2 emissions. UN Secretary General Ban Ki-Moon backs up Al Gore's message in his address to the Global Environment Forum sending the scaring message: • droughts, floods and other natural disasters... as well as mass social unrest and violence... human suffering will be incalculable... if the world’s leaders do not seal a deal  on climate change... in Copenhagen... • We have just four months. Four months to secure the future of our planet. But  scientists do not seem to agree on answers to the basic questions: • How much global warming is caused by CO2 emission? • What will be the effects of global warming? • What will be the effects of limits of CO2 emission, for the developing world? In order for the December meeting in Copenhagen to be meaningful, some answers seem to be required...unless everything is just politics for the Leaders of the World... One question naturally presents itself: Does the ambition of the World Leaders of industrial countries to limit the use of carbonbased fuels, come from self-interest to guarantee continued access to these fuels? Modernity in Physics, Arts and Music onsdag 12 augusti 2009 Logic of Penguin Science = ?? The statement A implies B means that if A is true, then B is also true. An elementary mistake in logical scientific reasoning is to conclude that if A implies B and B is observed to be true, then A is true. But this is to confuse A implies B with B implies A We illustrate: Let  • A= You bang your head into a wall.  • B = You have a headache. We could probably agree that there is theoretical evidence that A implies B: Head bang leads to head ache, in theory at least. Suppose now that B is true, that is suppose that you have a headache. Can we then conclude that A is true, that is that you bang your head into a wall? Not necessarily: You may get a headache from other causes, like drinking to much alcohol. 
It can even be that the implication that you get a headache from head bang is incorrect, so that there is no connection at all; you may have an unusually solid skull. Yet this type of logic is a trademark of modern physics/science:  • If we assume that a gas is in a state of molecular chaos with the velocities of two molecules before collision being statistically independent, then we can theoretically derive Boltzmann's equation, which has certain solutions which agree with certain observations. Hence the gas in a state of molecular chaos. • If we assume that there is a smallest quantum of energy, then we can theoretically derive a formula for the spectrum of black-body radiation, which agrees with observation. Hence there is a smallest quantum of energy. • If we assume that light consists of particles named photons, then we can theoretically derive a formula for photoelectricity, which agrees with certain observations. Hence light consists of photon particles.  • If we assume Pauli's exclusion principle, then we can explain certain observed atomic electron configurations.  Hence electrons obey Pauli's exclusion principle. • If we assume that the wave function collapses at observation, then we can theoretically explain an certain observed blips on a screen. Hence the wave function collapses at observation. • If we assume Heisenberg's uncertainty principle for elementary particles, then we can theoretically explain an observed interaction between observer and observed particle. Hence elementary particles obey Heisenberg's uncertainty principle.  • If we assume that a proton consists of three quarks, then we can theoretically derive a formula for the observed mass of a proton. Hence a proton consists of three quarks. • If we assume that spacetime observations of different observers are connected by the Lorentz transformation of special relativity, then we can theoretically explain the observation that the speed of light is the same for all observers. Hence spacetime observations of different observers are connected by the Lorentz transformation. • If we assume that spacetime is curved, then we can theoretically explain observed gravitation. Hence spacetime is curved. • If we assume there was a Big Bang, then we can theoretically explain the observed expansion of the Universe. Hence there was a Big Bang. • If we assume there is a black hole at the center of a galaxy, then we can theoretically explain the observed shape of a galaxy. Hence there is a black hole in the center of a galaxy. • If string theory would predict an observable phenomenon, it would follow that matter consists of tiny vibrating strings. • If we assume that the Earth rests on four invisible tortoises, then we can theoretically explain why the Earth does not fall down. Hence the Earth rests on four invisible tortoises. • If we assume that CO2 is a critical greenhouse gas, then we can theoretically explain observed global warming. Hence CO2 is a critical greenhouse gas. Do you see the possibly incorrect logic in these statements? If so, do you see the potential danger of such possibly incorrect logic? Do you think such possibly incorrect logic represents science or pseudo-science?  
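The headache example above can be made quantitative. Here is a small sketch (with probabilities invented purely for illustration) of why observing B lends almost no support to A when B has other common causes, even if the implication A implies B is taken to be exact: by Bayes' rule, P(A|B) = P(B|A) P(A) / P(B), where P(B) collects all causes of B.

```python
# Illustration (numbers invented, not from the post) of why "A implies B"
# plus "B is observed" does not make A likely.
p_A = 0.001            # prior: probability you banged your head into a wall
p_B_given_A = 1.0      # the implication "head bang -> headache" taken as exact
p_B_given_notA = 0.05  # headaches also arise from many other causes

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # total probability of B
p_A_given_B = p_B_given_A * p_A / p_B                  # Bayes' rule

print(f"P(headache)             = {p_B:.4f}")
print(f"P(head bang | headache) = {p_A_given_B:.4f}")
# About 0.02: observing the headache barely supports the head-bang hypothesis,
# because the observation is just as well explained by the alternatives.
```

The same arithmetic applies to each bullet above: the observation supports the assumption only to the extent that no alternative explanation is available.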
Notice that in all the above cases, the fact that a certain phenomenon is observed, which can be theoretically explained from a certain assumption, is used to motivate that the assumption is not just an assumption but a true fact: There is molecular chaos and a smallest quantum of energy, electrons do respect the exclusion principle, the Lorentz transformation must connect different observations, spacetime is curved, light is photons, there was a Big Bang, there is a black hole in the center of a galaxy, a proton is three quarks, the Earth is resting on four tortoises, CO2 is a critical greenhouse gas.  Notice also that in all cases, it is impossible to directly check if the assumption is valid, which is part of the beauty. The assumption is hidden to inspection and can only be tested indirectly: It is impossible to directly observe molecular chaos, a smallest quantum of energy, photon, electron, particle exclusion, wavefunction collapse, uncertainty, quark, spacetime curvature, black hole, tortoise, string...or that CO2 is a critical greenhouse gas. It is therefore impossible to directly disprove their existence...Clever, but there is an obvious drawback, since the existence is also impossible to verify...science or pseudo-science? The argument is that the assumption must be true, because this is the only way a theoretical explanation seems to be possible. Our inability to come up with an alternative explanation thus is used as evidence: The more we restrict our creativity and perspective, the more sure we get that we are right. Convincing or penguin science? Compare the same logic in a trial: If we assume X had a reason to kill Y, then we can theoretically explain the observed murder of Y. Hence X had a reason to kill Y. And thus probably did it! What if you were X? Notice in particular that present climate politics is based on the idea that CO2 is the cause of the observed global warming, with the motivation that certain theoretical climate models show global warming from CO2. But the observed modest global warming during the 20th century of 0.7 degrees Celsius may have natural causes rather than anthropogenic burning of fossil fuels. What do you think? What does a penguin in the Antarctic think? Compare e.g. EIKE. tisdag 11 augusti 2009 Interview with Erland Källen: Meteorologist Interview with Erland Källen, Professor of Dynamic Meteorology at Stockholm University. CJ: What is the accuracy of the climate models used in the IPCC predictions of the effects of greenhouse gases on the global climate? EK: The error margins are pretty big. The scenario showing 2 degrees warming has an error margin between 1 and nearly 4 degrees, with the upper margin bigger than the lower. All scenarios thus give a warming with the worst scenario over 6 degrees...  CJ: Do you really mean that if the upper error margin was 10 degrees, then the worst scenario would be more than 12 degrees, so that a bigger error margin would indicate more warming? Or do we use the term error margin differently? EK: ?? 
EK: Over a longer time period there is a connection between the increase of CO2 and global temperature....It is impossible that natural variations alone could explain the warming of the last 50 years....From a moral point of view it is very difficult to understand why we in the rich part of the World have a right to demand birth control in developing countries, if we don't at the same time open up for an increased standard of living...To explain the warming of the last 50 years it is difficult to see another main reason than the increase of CO2...From computer simulations we draw the conclusion that increased CO2 is the most plausible explanation of the observed temperature change...

CJ: Are you gradually changing from impossible...to difficult to see...to most plausible...to...?

EK: ??

Erland Källen does not seem to be willing to be interviewed by me. But the questions remain.

Chill-Out: Climate Change??

Chill-Out -- The Truth about the Climate Bubble (in Swedish) by Lars Bern and Maggie Thauersköld is an important contribution to the Swedish debate on Anthropogenic Global Warming AGW. Read and think!

The key question is whether the global warming of 0.7 degrees Celsius during the 20th century is due to an increase of CO2 in the atmosphere from 0.028% to 0.038% caused by anthropogenic burning of fossil fuels, and whether therefore strict limitations on CO2 emissions must be imposed to save the World?

Al Gore says YES! based on the following key statements in the 2007 Synthesis Report of the UN Intergovernmental Panel on Climate Change IPCC:

• Continued greenhouse-gas/CO2 emissions at or above current rates would cause further warming and induce many changes in the global climate system during the 21st century that would very likely be larger than those observed during the 20th century.

Chill-Out puts these statements into perspective and in particular points to the fact that the predictions of IPCC are based on computer simulations showing a better fit to measured temperature with anthropogenic warming included than without.

Note that the IPCC statements are very cautious, which reflects a generally accepted view that the accuracy/reliability of current climate models is questionable, which I have discussed in previous blogs on climate simulation.

The next UN Climate Conference will take place in Copenhagen in December under Swedish chairmanship of the EU. UN Climate chief Yvo de Boer hopes the conference will in particular reach agreements to limit the growth of emissions in developing countries, deemed necessary by predictions of catastrophic global warming.

The key question is whether poor people will have to remain poor because of the scientifically vague predictions of IPCC, based on certain computer simulations generally viewed to be unreliable. Fredrik Reinfeldt, Swedish prime minister and current EU president, says YES!, calling for immediate global action on climate change at the opening of Nordic Climate Solutions in Copenhagen, November 27 2008:

• We must act today, in order to save tomorrow.

Chill-Out helps you to understand the background and meaning of this statement. Also compare A man-made mortality tale: How the IPCC's fairly sober summary of climate science has been spun to tell a story of Fate, Doom and human folly. Read and think!

Monday, 10 August 2009

Role of Mathematics Education in Society??

Can we learn something about the role of mathematics education in our society from how mathematics departments present their educational programs?
The following statements are typical:

• Princeton University: The mathematician's best work is ART, a high perfect art, as daring as the most secret dreams of imagination. Mathematical genius and artistic genius touch one another. (Gösta Mittag-Leffler)
• MIT: Mathematics provides a language and tools for understanding the physical world around us and the abstract world within us.
• University of Chicago: One of the wonderful things about the University of Chicago is that EVERYONE has to take mathematics. Most students complete this requirement by taking one of our calculus sequences.
• Chalmers University of Technology: The mathematical sciences are fundamental and indispensable to a large part of modern science and engineering. Progress in other disciplines is often linked to an increased use of mathematics. Mathematics is however also a subject in itself, and fundamental research is a necessary condition for its many applications.
• University of Oxford: Mathematics plays a pivotal role in the progress of society and its continued growth relies on the exchange and development of research ideas, the encouragement and teaching of the next generation of mathematical thinkers, and outreach to the public and schools.

We summarize:

• Mathematics is a form of sublime art, which miraculously has shown itself to be very useful in science and engineering. EVERYBODY needs to learn mathematics in order to understand the physical world outside and the abstract world inside ourselves.

We observe that the computer is nowhere visible. The message is that mathematics is primarily a form of art which, by developing according to its own inner principles, to which computing does not belong, best serves the needs of society. In this scenario there is little incentive for reform motivated by the computer, which is now changing society outside mathematics departments and their educational programs.
S03E16: The Excelsior Acquisition

Tonight Sheldon wants to ask Stan Lee how the Silver Surfer uses his silver surfboard to accomplish interstellar flight. As well he should! Nobody, not even Sheldon, knows how we are going to travel between stars. The Silver Surfer accomplishes interstellar travel on his silver surfboard. How will we?

Proxima Centauri is our best bet. It is the closest star to our home, which orbits our own star, Sol. Proxima Centauri is an unremarkable red dwarf star named appropriately from the Latin proxima, which is "next to", as in "proximate". It is not so named because it is close to us, but rather because it is close to the star Alpha Centauri, a star in the constellation Centaurus. Alpha Centauri is the third brightest star in the night sky, but mostly just because it is so close. We may want to try to visit someday. After all, we are neighbors and have yet to bring them so much as a fruit basket.

"Close" is a funny word to use on interstellar distances. Proxima and Alpha Centauri are so far away it takes light 4.2 years to arrive. Nothing we know of can allow us to travel faster than light, our ultimate speed limit. Even the television transmissions of the pilot episode of Big Bang Theory, which have been traveling at the speed of light since late 2007, are only halfway to whoever might inhabit the rocks orbiting those stars. Not even Hulu.com in Alpha Centauri has TBBT available yet. (Life near Alpha Centauri has that in common with Earth.)

Alpha Centauri, being so bright, has probably been known to the earliest hominids who bothered to look up. But Proxima Centauri, being so dim, was only discovered using powerful telescopes in 1915. We may not be done yet. Even dimmer stars known as brown dwarfs may be traveling the galaxy even closer to us than Proxima Centauri. These stars are so cool, you have to look for them in infrared light. Finding such nearby stars is one of the key missions of the newly launched WISE satellite. When I told one of the BBT writers/exec producers we may soon find closer stars than Proxima Centauri he said "The Federation may be closer than we think".

Proxima Centauri (red star, center) is the closest known star to Earth at 4.2 light years distance. (If you enjoy astronomy pictures such as this one, I highly recommend visiting NASA's "Astronomy Picture of the Day")

Right now our plate is full just with interplanetary travel within our own solar system. A trip taking astronauts to Mars, as recently imagined by NASA, even at its closest approach will take over half a year. Proxima Centauri is 750,000 times farther than Mars's closest approach to Earth. At the same speed, that would take over a quarter million years to get there. We must invent something faster.

Suppose our human engineers develop a technology that allows us to travel at 1% the speed of light on average to Proxima Centauri. The astronauts now only need to spend 400 years on the spacecraft. (I'm ignoring the tiny benefit due to time dilation slowing the astronauts' lifespan as we discussed earlier for the story of Paolo and Vincenzo.) The astronauts won't survive to get there, but if they keep having children their 16th generation could make it. I don't think the intermediate generations will be particularly happy with their forebears for condemning them to a lonely flight through interstellar space. If one generation rebels and refuses to procreate, the mission will be a failure.
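To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted in the post (the 4.2-light-year distance, the half-year Mars transit, and the factor of 750,000); the 25-year generation length is my own illustrative assumption, and the post's "400 years" and "16th generation" are the rounded versions of these numbers.

    # Rough travel times to Proxima Centauri, using the figures quoted in the post.
    DISTANCE_LY = 4.2            # light-years to Proxima Centauri (as quoted)
    MARS_TRIP_YEARS = 0.5        # one-way Mars transit at closest approach (as quoted)
    PROXIMA_OVER_MARS = 750_000  # Proxima is ~750,000 times farther than Mars at closest approach
    GENERATION_YEARS = 25        # illustrative assumption, not from the post

    # At the same speed as the Mars mission:
    print(f"Mars-mission speed: {MARS_TRIP_YEARS * PROXIMA_OVER_MARS:,.0f} years")

    # At a fixed fraction of the speed of light, the Earth-frame trip time is just
    # the distance in light-years divided by that fraction:
    for frac_c in (0.01, 0.02, 0.10, 0.25):
        years = DISTANCE_LY / frac_c
        print(f"{frac_c:.0%} of c: {years:4.0f} years (~{years / GENERATION_YEARS:.0f} generations)")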
Even if that 16th generation arrived successfully, they would hardly be Earthlings.

I think we can prove that we humans will never attempt interstellar transit until we know how to travel at least 25% the speed of light. (The mission to Mars discussed above is only 0.001% the speed of light.) Suppose a mission really was undertaken to travel to Proxima Centauri with a fantastic new technology that would take us there at 1% the speed of light. It will take 400 years. Now suppose anytime in the next 200 years a new technology is developed that increases the average speed to 2% of the speed of light. Given the rate of technological progress, that is not a bad bet. So the spacecraft that launches later would beat the earlier craft. So not until a technology reaches some reasonable fraction of the maximum speed limit, the speed of light, would anyone bother to take an early flight. The speed would have to be as large as 25% the speed of light to nearly guarantee this would not be a problem. At least then the same generation that left the Earth would arrive. It may not ever be possible, but the argument shows it is unlikely any such mission would be mounted until that is possible.

These are the stars in your neighborhood. Each white ring is about 1.7 light-years apart.

(If some smarty-pants wants to suggest worm-holes or other space-bending technology, keep in mind that these ideas don't even work on paper.)

This says nothing of the many other technological hurdles that must be met. Traveling even at 1% the speed of light, the spacecraft would suffer terrible damage from interstellar gas and dust. The rate of cosmic rays, charged particles flying throughout interstellar space, would likely give fatal cancer to anyone who tried this mission, and they would arrive long dead.

So it pays to go back and understand what is special about the Silver Surfer's surfboard that allows interstellar transport. Often science fiction writers will come up with an idea before engineers and scientists. Perhaps with the Silver Surfer there is an idea we've missed. A good place to start with any such question is James Kakalios's terrific book "The Physics of Superheroes". Yet no explanation of the Silver Surfer can be found -- maybe it is just because the Silver Surfer started out as a super-villain, not a super-hero. Fortunately someone actually asked the Silver Surfer's creator, Jack Kirby, why he uses a surfboard. To which he explained:

"Because I'm tired of drawing spaceships." -Jack Kirby

17 Responses to "S03E16: The Excelsior Acquisition"

1. feldfrei Says:

Indeed, the impact of cosmic rays poses a serious problem for long-term space travel. The situation becomes even more uncomfortable if one desires to take advantage of time dilatation in order to reach really distant (extragalactic) objects. The more travel at relativistic speed benefits from time dilatation, the more blue-shifted the cosmic microwave background radiation will be, due to the relativistic Doppler effect. Combined with the "searchlight effect" due to the Lorentz boost, a relativistic spaceship would see intense jet-like high-frequency electromagnetic radiation in the forward direction. Synchrotron radiation emitted from relativistic electrons in circular accelerators is based on the same principle (just the kinematics is inverted).

For further reading (and watching movies visualizing relativistic effects) I can recommend the following website:

2.
Ali Says:

Hi David, First of all, I'd like to thank you for giving us this opportunity to read about the physics behind the show in this blog. Then, with your permission, I'd like to express my disappointment. I really would like to know how a string theorist like Sheldon could say such a line: "Although we live in a deterministic universe, each individual has free will." Deterministic? From a theoretical quantum physicist? When I first heard this line, I thought that Sheldon would somehow render it with one of his classic pranks, a bazinga if you will, but no. He went on with it. I might have gotten it all wrong, but please, can you explain it for me? Thanks in advance, Ali from Turkey

• shellorz Says:

Quantum theory could have been deterministic (as Einstein wanted it to be) with local hidden variables. Now we know that is not the case (or at least the hidden variables aren't local). Still, you can imagine a deterministic universe where everything is already set, because time then doesn't exist as we know it. It is just a dimension like any other, which we experience only going one way (as if we were always walking westward). In this case, the ever-existing universe may have a beginning and an end in time without having to be determined.

• Steve Says:

I was a bit taken aback myself by Sheldon's comment. A few seconds later I realized that I was being extremely critical of the show, and should probably blame it on the writers not asking David about the line. Sometimes I forget that the show isn't exclusively for science buffs who would pick up on something like that. I also wonder if I would think the stuff was funny if I didn't know what the characters were talking about. A lot of people do, though…

• tvvv Says:

Sheldon mentioned in an earlier episode that he believed in the many-worlds interpretation of quantum mechanics, so it makes sense that he believes we live in a deterministic universe.

3. Daniel Says:

Love the blog. Today's post reminded me of a book I read last year that mentioned interstellar travel. I'll recap it here. During the '50s and '60s there was some talk of nuclear pulse propulsion, which involves ejecting atomic bombs behind a ship, detonating them, and riding their shock wave. It is not unreasonable (having loosely consulted calculations) to expect 0.1c, which puts us at Alpha Centauri in about 40 years. I'd go… Project Orion (as it was known) was canceled due to a treaty that disallowed the detonation of nuclear bombs in the atmosphere or in outer space (thanks a lot, Cold War), but I think we could potentially arrange something with the rest of the world that would allow continued research thereon. With 50 years of progress since then, I can see huge potential for even greater velocities.

4. Jason Says:

Funny, I came to make the exact same post as Ali. Why would a theoretical quantum physicist make such a blanket statement that we live in a deterministic universe? I think this was a "major" gaffe in the writing this week, but perhaps you have another take. Care to explain? I think it might warrant another blog post.

• feldfrei Says:

The statement may sound funny – but the question is how you would define "determinism". If one considers the entire universe as a whole, there may even be no time at all, as I mentioned here: Considering a subsystem (which could be observed by someone), the wave function describing this subsystem is deterministic since it follows the (time-dependent) Schrödinger equation (neglecting relativistic effects, otherwise the Dirac equation).
However, the outcome of any measurement is generally not determined, and this outcome has to be described classically (at least if you like the Copenhagen interpretation of quantum mechanics). On the other hand (it is a question of the timescale) you can find many "quasideterministic" systems, e.g. the planets' motion in our solar system, which is highly predictable. An interesting question is addressed by the second part of Sheldon's statement: a non-deterministic world is only a necessary condition for "free will", but it is not sufficient. Human beings could be treated as complex machines with some random control, and there are brain researchers who deny the existence of free will. However, the concept of free will could be of practical use (like statistical descriptions in classical physics). Max Planck discussed in one of his talks that "free will" could even be meaningful in a deterministic classical world. But this now leads to more philosophical questions 😉

5. shellorz Says:

The part about a ship with newer technologies arriving before one launched decades earlier rings a bell to me. Alien, or Philip K. Dick (Lies, Inc.? I can't tell).

6. shellorz Says:

… and the Silver Surfer is not such a super-villain. He got "redeemed", right? He saved the Earth and ditched Galactus.

7. Mino Says:

Hey Steve. I had a nice discussion with my dad a week ago. He said that some guys started to send some kind of Morse-code messages into space many years ago, and today people send messages with better technologies. I asked myself if we are still listening for Morse-code styles of messages. I hope you understand my question. I mean, maybe they answer the way we sent it, but we don't listen to that kind of message anymore and it just disappears in the chaos.

• shellorz Says:

I'm not Steve, but I can reply: intelligent messages, Morse or other, would stand out from the rest of the signals out there. Morse is the way a message is coded, not the way it is sent. If any message is sent, then deciphering the code would be another issue, but rest assured we'd get the signal… provided we're looking in the right direction. Hell, we're not monitoring the whole sky. And not necessarily all the frequencies. Bets have been made that an intelligent species would try to use something "commonplace" as a reference, so we've been monitoring with SETI along some "particular" frequencies like hydrogen's oscillation frequency, because hydrogen is the most commonplace element in the universe. Well, so we think. Maybe until we know more about dark matter. The signals we sent only travel at the speed of light, so you wouldn't get an answer from anywhere further than 25 light years away (which is nothing). We actually don't need to send that many messages. All our satellite, TV and other communications are partly sent out to deep space. This is actually related to Fermi's paradox (which is to me as ungrounded as Drake's equation). We think as human beings, with our own biased ways of understanding the world. Maybe other intelligent species have been trying to communicate with us through other dimensions (for instance, one of the 7 other dimensions in superstring theory). Maybe they use wormholes to do that. And we're here wondering why the hell they're not responding. But this "are we alone" inquiry is to me mostly a psychological quest. Beyond the anthropic principle, people have realized in the 60s how isolated and alone we are.
Finding something else, someone else, even if it might lead us to our demise, has become a key to our psychological stability.

8. Erich Landstrom Says:

"Finding such nearby stars is one of the key missions of the newly launched WISE satellite." That reminds me of when I was teaching Astronomy. I learned of Georgia State University's RECONS (Research Consortium on Nearby Stars, http://www.recons.org/). RECONS' mission purpose is to understand the nature of the Sun's nearest stellar neighbors, both individually and as a population. Our primary goals are to discover "missing" members of the stellar sample within 10 parsecs (32.6 light years), and to characterize all stars and their environments within that distance limit. As of January 1, 2009, the complete RECONS Census of objects known within 10 parsecs included 354 objects in 249 systems:

Systems: singles 171, doubles 58, triples 14, quadruples 5, quintuples 1.

Objects by type: white dwarfs (WDs) 18; O stars 0; B stars 0; A stars 4; F stars 6; G stars 21; K stars 44; M (red dwarf) stars 239; L dwarfs 4; T dwarfs 8; planets 10 -- 1 planet around GJ 144 (epsilon Eri), 1 planet around GJ 176, 3 planets around GJ 581, 1 planet around GJ 674, 1 planet around GJ 849, 3 planets around GJ 876 -- and of course 8 planets around Sol.

9. Chris Shabsin Says:

Wait a sec… if Jack Kirby invented the Silver Surfer, then why was Sheldon going to ask Stan Lee about the surfboard? Sheldon should have simply taken the opportunity to ask Kirby himself, as he stood before The King's own bench.

10. Translation: "S03E16: The Excelsior Acquisition (A Aquisição do Excelsior)" « The Big Blog Theory (em Português!) Says:

[…] made by Hitomi from text taken from The Big Blog Theory, by David Saltzberg, originally published on March 1st of […]

Comments are closed.
The Nature of Things

Presidential Address given to the British Society for the Philosophy of Science by J.R. Lucas on June 7th, 1993

It would be improper for a President to play safe. After two years of curbing my tongue and not making all sorts of observations that have sprung to my mind, in order to let you have an opportunity of having your say, I am now off the leash. And whereas mostly in academic life it is appropriate to adopt a prudential strategy, and not say anything that might be wrong, I owe it to you on this occasion to play a maximax strategy, to speak out and say what I really think, being willing to run the risk of being wrong in order not to forgo the chance of actually being right in an area of the philosophy of science which must, I think for ever, be largely a matter of metaphysical speculation.

I stand before you a failed materialist. Like Lucretius, from whom I have borrowed my title, I should have liked to be able to explain the nature of things in terms of ultimate thing-like entities of which everything was made, and in terms of which all phenomena could be explained. I have always been a would-be materialist. I remember, when I was six, telling my brother, who was only two, in the corner of a garden in Guildford, that everything was made of electricity and believing that electrons were hard knobbly sparks, and later pondering whether position was a sort of quality, and deciding that it was, absolutely considered, but that relative positions, that is to say patterns, as seen in the constellations in the sky, were only in the eye of the beholder. I am still impelled to a very thing-like view of reality, and would like to explain electricity in terms of whirring wheels, and subatomic entities as absolutely indivisible point-particles, each always remaining its individual self, and possessed of all the qualities that really signified. I find it painful to be dragged into the twentieth century, and though my rational self is forced to acknowledge that things aren't what they used to be, I find it hard to come to terms with their not being what I instinctively feel they have got to be, and am still liable to scream that the world-view we are being forced to adopt cannot be true, and that somehow it must be fitted back into the Procrustean bed of our inherited prejudices.

But I am not going to ask you to listen to my screams. Rather, I shall share with you my attempts to overcome them, and work out new categories for thinking about the nature of the world, and a correspondingly less rigid paradigm of possible explanation. It has taken me in two different directions. On the one hand reality is much softer and squodgier than I used to think. It is not only that the knobbliness is less impenetrable, as quantum tunnelling takes over, nor that it is fuzzier, without the sharp outlines of yestercentury, but, more difficult to comprehend, the very concept of haecceitas, as Duns Scotus called it, this-i-ness, or transcendental individuality, in Michael Redhead's terminology, 1 has disappeared from the categories of ultimate reality. On the other hand, reason has become much wider and more hospitable to new insights from various disciplines. The two changes are connected.
Our concept of a thing, in order to be more truly a thing, has been developed into that of a substance, and substances have come to need to have more and more perfections, and we have therefore come to identify as substances more sophisticated combinations of more recherché features; and with this change in what we regard as a thing has come also a corresponding change in our canons of explanation. It will be my chief aim this evening to show how our changed apprehension of reality has opened up new vistas of rationality, and how the wider concept of rationality we have been led to adopt has in turn altered our view of what constitute real substances. The corpuscularian philosophy posited the ultimate constituents of the universe as qualitatively identical but numerically distinct, possessing only the properties of spatial position and its time-derivatives, and developing according to some deterministic law. In the beginning, on this view, God created atoms and the void. The atoms, or corpuscles, or point-particles, were thing-like entities persisting over time, each for ever distinct from every other one, each always remaining the same, each capable of changing its position in space while retaining its individual identity. Spatial position constituted the changeable parameter which explained change without altering the corpuscle's own identity. Space was the realm of mostly unactualised possibility, of changes that might, but mostly did not, occur. But space also performed the logical function of both distinguishing between qualitatively identical corpuscles---two thing-like entities cannot be in the same place at the same time---and providing, in spatio-temporal continuity, a criterion of identity over time. It was thus possible for each point-particle to be like every other one, but to be a different particular individual, and this particularity affected the corpuscularians' ideal of explanation, articulated by Laplace, and much refined in our own time by Hempel. Scientists seek generality, and eschew the contingent and the coincidental. In the Hempelian paradigm, the focus of interest is on the covering law, which is general, and not on the initial conditions, which just happen to be what they are, and can only themselves be explained by the way earlier states of the universe happened to be. Boundary conditions, being the particular positions and velocities of particular point-particles, are too particular to constitute the sort of causes that scientists, in their search for generality, are willing to take seriously as genuinely explanatory. The corpuscularian philosophy had many merits. It reflected our experience of things: stable objects that persist over time, clearly individuated by exclusive space-occupancy, capable of change without losing their identity. As a metaphysical system it had great economy and power. All macroscopic things, all events and phenomena, were to be explained in terms of the positions and movements of these ultimate entities. There was a clear ontology, a clear canon of explanation, and a clear demarcation between physically necessary laws and purely contingent initial conditions. Of course, there were also grave demerits. 
From my own point of view---though I have failed to persuade Robin Le Poidevin of this 2---time is essentially tensed, and it counts against the corpuscularian scheme that it did not account for the direction of time or the uniqueness of the present: more influential in the history of science was the account of space, and the difficulty in formulating a plausible account of how corpuscles could interact with one another, which in due course led us to replace corpuscularian by field theories, as being better able to account for the propagation of causal influence. The vacuum, though adequate for giving things room to exist and move in, was too thin to let them interact with one another, and Voltaire has had to return from London to Paris. But it was not only space that proved too thin to do its job. The ultimate thing-like entities not only failed to accommodate the things of our actual experience, but have turned out not to be thing-like at all. Although the atoms of modern chemistry and physics are moderately thing-like, subatomic entities are not. We do not obtain predictions borne out by observation if we count as different the case of this electron being here and that there and the case of that being here and this there. Instead of thinking of the word `electron' as being a substantive referring to a substantial, identifiable thing, we do better to think of it as an adjective, with some sense of `negatively charged hereabouts'. We do not feel tempted to distinguish two pictures, one of which is red here and red there, and the other of which is red there and red here; the qualities referred to by adjectives lack haecceitas, this-i-ness, and are real only in so far as they are instantiated. We are forced to deny this-i-ness to electrons and other sub-atomic entities in order to accommodate empirical observations, but it is not just a brute fact, but rather the reflection of the probabilistic structure of quantum mechanics. The loss of determinateness in our ultimate ontology is the concomitant of our abandoning determinism in our basic scheme of explanation. Probabilities attach naturally not to specific singular propositions, but to general propositional functions, or, as Colin Howson puts it, 3 generic events, or, in Donald Gillies' terminology, 4 repeatable situations. Although you can intelligibly ask what the probability is of my dying in the next twelve months, the answer is nearly always only an estimate, extrapolated from the probabilities of Englishmen, males, oldie academics, non-smokers, non-diabetics, and other relevant general types, not dying within the subsequent year. Calculations of probabilities depend on the law of large numbers, assumptions of equiprobability, or Bayes' Theorem, which all ascribe probabilities to propositional functions dealing with general properties rather than to singular propositions asserting ultimate particularities. If we accept the probabilistic view of the world, we can no longer picture the universe as made up of particular thing-like entities that Newton could have asked Adam to name, but as a featured something, whose underlying propensities could be characterized in quantum-mechanical terms, and whose features can be calculated up to a point, and found to be borne out in experience. The loss of particularity legitimises a paradigm shift in our canon of explanation.
In his Presidential Address, Professor Redhead noted the shift from a Nineteenth Century ideal, in which we could deduce the occurrence of events granted a unified theory together with certain boundary conditions, to a Twentieth Century schema, which, although less demanding, in as much as it is not deterministic, is more demanding, in that it seeks to explain the boundary conditions too. 5 Outside physics that has always been the case---and often within physics too. It is one of the chief objections to the Hempelian canon, an objection expressed by many of those present here tonight---Nancy Cartwright, John Worrall, Peter Lipton---that it fails to accommodate the types of explanation scientists actually put forward. 6 It depends on the science concerned what patterns of law-like association, to use a phrase of David Papineau's, count as causes. 7 Different sciences count different patterns of law-like associations as causes because they ask different questions and therefore need to have different answers explaining differently with different becauses. The fact that different sciences ask different questions is of crucial importance. Once we distinguish questions from answers, we can resolve ancient quarrels between different disciplines. 8 The biologists have long felt threatened by reductionism, and felt that there was something amiss with the claim that it was all in the Schrödinger equation, or as Francis Crick put it, ``the ultimate aim of the modern movement in biology is in fact to explain all biology in terms of physics and chemistry''. 9 But their claim that there was something else, not in the realm understood by physicists, smacked of vitalism, and was rejected out of hand by all practising physicists. Vitalism made out that answers were in principle unavailable, whereas what is really at issue is not a shortage of answers but an abundance of questions. It was not a case of biologists asking straightforward physicists' questions and claiming to get non-physicists' answers, but of their asking non-physicists' questions, to which the physicists' answers were germane, but could not, in the nature of the enquiry, constitute an exhaustive answer to what was being asked. Biologists differ from physicists in what they are interested in---no hint of vitalism in pointing out that the life sciences investigate the phenomenon of life---and in pursuing their enquiries pick on features which are significant according to their canons of interest, not the physicists'. What is at issue is not whether there is some physical causal process of which the physicists know nothing, but whether there are principles of classification outside the purview of physics. It is a question of concepts rather than causality. My favourite, excessively simpliste example is that of the series of bagatelle balls running down through a set of evenly spaced pins and being collected in separate slots at the bottom: we cannot predict into which slot any particular ball will go, but we can say that after a fair number have run down through the pins, the number of balls in each slot will approximate to a Gaussian distribution. There is nothing vitalist about a Gaussian distribution, but it is a probabilistic concept, unknown to Newtonian mechanics. In order to recognise it, we have to move from strict corpuscularian individualism to a set, an ensemble, or a Kollectiv of similar instances, and consider the properties of the whole lot. 
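The bagatelle example can be made concrete with a minimal simulation sketch, here in Python; the twelve rows of pins and the ten thousand balls are arbitrary illustrative choices, not anything argued for in the text. No individual ball's slot is predictable, yet the ensemble reliably settles into the binomial, approximately Gaussian, histogram.

    import random
    from collections import Counter

    ROWS = 12       # rows of pins; each deflects a ball left (0) or right (1)
    BALLS = 10_000  # number of balls run down the board

    # A ball's final slot is simply the number of rightward deflections it suffered.
    slots = Counter(sum(random.randint(0, 1) for _ in range(ROWS)) for _ in range(BALLS))

    # Individually unpredictable, collectively lawful: print the ensemble's histogram.
    for slot in range(ROWS + 1):
        print(f"slot {slot:2d}: {'#' * (slots[slot] // 50)}")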
More professionally, all the insights of thermodynamics depend on not following through the position and momentum of each molecule, but viewing the ensemble in a more coarse-grained way, and considering only the mean momentum of those molecules impinging on a wall, or the mean kinetic energy of all the molecules in the vessel. Equally the chemist and the biologist are not concerned with the life histories of any particular atoms or molecules, and reckon one hydrogen ion as good as another, and one molecule of oxygen absorbed in the lungs of a blackbird as good as another. 10 The chemist is concerned with the reaction as a whole, the biologist with the organism in relation to its environment and other members of its species. A biologist is not interested in the precise accounting for the exact position and momentum of every atom, even if that were feasible. Such a wealth of information would only be noise, drowning the signal he was anxious to discern, namely the activities and functioning of organisms, and their interactions with one another and with their ecological environment. It is the song of Mr Blackbird as he tries to attract the attention of Mrs Blackbird that concerns the ethologist. He is not concerned with exactly which oxygen molecules are in the blackbird's lungs or blood stream, but with the notes that he trills as dawn breaks, and their significance for his potential mate. If he were presented with a complete Laplacian picture, his first task would be to try and discern the relevant patterns of interacting carbon, oxygen, hydrogen and nitrogen atoms that constituted continuing organisms, and to pick out the wood from the trees. In this change of focus the precise detail becomes irrelevant. He is not, in Professor Watkins' terminology, a methodological individualist. What interests him is not the life history of particular molecules of oxygen, but the metabolic state of the organism, which will be the same in either case. Different disciplines, because they concentrate on different questions, abstract from irrelevant detail, in order to adduce the information that is relevant to their concerns. In practice scientists have long recognised that in order to see the wood they must often turn their attention away from the trees. But whereas that shift was to be defended simply as a matter of choice on their part, now it is legitimised by our new understanding of the logical status of the boundary conditions we are interested in. If our ultimate theory of everything can talk only in general terms, and cannot assign positions and velocities to particular atoms, it follows that it is no criticism of other theories that they can talk only in general terms too. Hitherto there has been a sense of information being thrown away, information which was there and ultimately important, so that we were, in some profound way, being given less than the whole truth. There was a Laplacian Theory of Everything which was in principle knowable and in principle held the key to all ologies. Every other discipline was only a partial apprehension of ultimate truth, useful perhaps because more accessible for our imperfect minds, but conveying only imperfect information none the less. Just as we rely on journalists to reduce the welter of information about the Balkans or South America to manageable size, so chemists and biologists seemed to select and distil from total truth to tell us things in a form we were capable of taking in. Compared with the high priests of total truth, they were mere popularisers.
I may discern Gaussian patterns in long runs of bagatelle balls, but they are patterns only in the eye of an ill-informed beholder: better informed, I should see why each ball went into the slot that it did, and be aware of the occasions when a non-Gaussian distribution emerged. My Gaussian discernment would seem a rough and ready approximation, like describing France as hexagonal, which is fair enough for some purposes, but falls far short of being fully true. Even though the things we pick on as worthy of note and in need of explanation---the shape of the Gaussian curve, the significance of bird-song---lie outside the compass of the limited concepts and explanation of a Theory of Everything, the possession of perfect information trumps curiosity. The case is altered if there is no fully particularised ultimate reality, and no complete theory of it. We cannot claim that ultimately there are trees which exist in their own right, whereas the woods are only convenient shorthand indications of there being trees there: we cannot trump the different, admittedly partial, explanations put forward by different disciplines by a paradigm one that claims to be complete, nor can we suppose that there is some bottom line that establishes a final reckoning to which all other explanations must be held accountable. All natural sciences concern themselves with general features of the universe, and there is no reason to discountenance any science because it selects some general features rather than others. Questions about boundary conditions cannot, then, be faulted on grounds of their being general, and not ultimately particular. The answers, too, are to be assessed differently, once the mirage of a complete Laplacian explanation is dispelled. Not only is it irrelevant to the ethologist's purposes, which particular mate the blackbird seeks to attract, or which oxygen molecules are in the blackbird's lungs or blood stream, it is, in its precise detail, causally irrelevant too. The blackbird's song is not addressed to a particular Mrs Blackbird in all her individuality, but to potential Mrs Blackbirds in general, and if one mate proves hard to win, another will do. Much more so at lower levels of existence: if one worm escapes the early bird, another will be equally succulent; if one molecule of oxygen is not absorbed by his haemoglobin, another will. Explanations are inherently universalisable, and if the physical universe is one of qualitatively identical features that cannot, even in principle, be numerically distinguished, then the explanations offered by other disciplines are ones that cannot, even in principle, be improved upon by a fuller physical explanation. Indistinguishability and indeterminism imply a looseness of fit on the part of physical explanation which takes away its Procrustean character. The new world-view makes room for there being different sciences which are autonomous without invoking any mysterious causal powers beyond the reach of physical investigation. The autonomy I am arguing for is, in the words of Bechner, 11 theory autonomy rather than a process autonomy: we use new concepts to ask new questions, rather than find that old questions have suddenly acquired surprising new answers. But this distinction between questions and answers offers a solution to the problem of reductionism only if there is some further fundamental difference between the concepts involved in framing the questions asked by different sciences.
Otherwise, they might still be vulnerable to a take-over bid on the part of physics. A reductionist programme whereby every concept of chemistry and biology is exhaustively defined in terms of physical concepts alone might still be mounted. Thus far I have only cited examples---Gaussian curves, temperature, blackbird song---where reductive analysis seems out of the question. But the unavailability of reductive analyses is much wider than that. Tony Dale bowled me out recently, when I had overlooked the fact that the concept of a finite number cannot be expressed in first-order logic. The very concept of a set, and more generally of a relational structure, is a holistic one. But rather than multiply examples, let me cite an in-principle argument. Tarski's theorem shows that the concept of truth cannot be defined within a logistic calculus: roughly, although we can teach a computer many tricks, we cannot program it to use the term `true' in just the way we do. It therefore seems reasonable to hold that other concepts, too, are irreducible, and the failure of the reductionist programme is due not to some mysterious forms of causality but to our endless capacity to form new concepts and in terms of them to ask new questions and seek new types of explanation. The new world-view we are being forced to adopt not only permits us to concern ourselves, qua scientists, with general features, but impels us to do so. Even the corpuscularian philosophy gave somewhat short shrift to the things of ordinary experience. Most configurations of atoms were transitory. Even rocks were subject to the attrition of time, and the mountains, far from being eternal, were being eroded by the wind and the rain. Processes could in principle withstand the ravages of time, and at first glance Liouville's theorem seemed to suggest that point-particles whose initial conditions were close to one another would end up close still. But although, indeed, there was a one-one correlation between initial and final conditions, the correlation was much less stable than at first sight appeared. True, the volume in phase-space remains constant, but its shape does not, and may become spreadeagled with the elapse of time, so that the very smallest difference in initial conditions can lead to a wide difference in outcome. Poincaré pointed out the logic of the roulette wheel, 12 and we now regularly hear of the damage done by irresponsible butterflies on the other side of the universe destroying the reliability of Met Office forecasts. No longer can Newton number the ultimate things among the (kumaton anerithmon gelasma), the innumerable laughter of quantum waves, but if he wants atoms, must raise his sights to those stable solutions of the Schrödinger time-independent equation, which, one way or another, will be realised. And although some solid objects are likely to remain substantially the same over time, most collocations of atoms are evanescent. If we seek stability amid the flux of molecular movement, we are likely to find it at a higher level of generality where chaos theory can indicate the recurrence of relatively stable patterns. In the Heraclitean swirl eddies may last long enough to be identified. Flames are processes, but possess the thingly property of subsisting and sometimes of being identified and individuated. So if we want permanence, we shall be led to focus on certain general features, certain types of boundary condition, which can persist over reasonable stretches of time.
Just as chemists look to the time-independent Schrödinger equation to show them what stable atomic configurations there are, and would like to be able to work out in detail what molecules are stable too, so at a much higher level, biologists take note of organisms and species of organisms, which are the basic things of their discipline. Organisms are homeostatic, self-perpetuating and self-replicating. They are processes, like flames, but longer lasting and with greater adaptability in the face of adventitious change. They react to adverse changes in the environment so as to keep some variables the same, which together constitute the same organism that survives over time in the face of alterations in the environment. There is thus an essential difference between organism and environment which differentiates all the life sciences from the physical ones. Thinghood has become modal as well as diachronic. It is not enough to continue to be the same over time: organisms need to be able to change in some respects in order to remain the same in other, more important, respects. Even if I were to alter the environment by watering the garden, moving the bird table, replacing the coconut with peanuts, the flora and fauna, though responding in various ways to the altered situation, would mostly persist as the self-same organisms as if I had made no alterations. This invariance under a limited range of altered circumstance is more like the invariance of operation of natural laws than the continuance of atomic matter, but goes further; laws of nature would operate even if initial conditions were different, but do not characteristically alter their mode of operation so as to restore some antecedent condition, whereas biological organisms typically do, provided the alteration of initial conditions is not too drastic. Homeostasis is a familiar concept in science---but logically a treacherous one. A homeostatic system tends to maintain the same state, and sameness can easily shift without our noticing it. The simple negative feedback of a flame or an eddy or a thunderstorm results in the process not being interrupted by every adventitious alteration of circumstance, but the persistence is short-lived none the less. Living organisms last longer, and are better able to withstand the attrition of time, because they react to counter the effect of a wider variety of circumstances. The requirement of persistence alters what we count as the substance that persists, and per contra as the concept of substance develops, so also does our idea of what counts as survival, and more generally what goals the substance seeks to secure and maintain. We begin to recognise as important explanatory schemata not only the survival of the organism, but the survival of the species, and now, even, the survival of the biosphere. And we begin to see not only the individual's maximising its own advantage as a rational goal, but the value of co-operative action, if we are to escape from the Prisoners' Dilemma and not be driven by individual selfishness into collective sub-optimality. Beyond that, I find it difficult to peer, but still hope dimly to discern the lineaments of what, if I may borrow a suggestive phrase from Nicholas Maxwell, 13 we might describe as an aim-oriented rationality. The concept of homeostasis is borrowed from control engineering. It leads on naturally into information theory, and information theory provides the key concepts for understanding genetics.
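Homeostatic negative feedback of the kind just described can be illustrated with a minimal sketch, here in Python; the set point, the coefficients, and the sinusoidal environment are arbitrary illustrative assumptions, not part of the argument. The regulated variable stays close to its set point even as the surroundings swing, which is the sense in which an organism stays the same by changing.

    import math

    SET_POINT = 37.0  # internal value to be maintained (illustrative)
    GAIN = 0.5        # strength of the homeostatic correction (illustrative)
    LEAK = 0.1        # how strongly the environment pulls on the internal state (illustrative)

    internal = SET_POINT
    for t in range(60):
        environment = 20.0 + 15.0 * math.sin(t / 5.0)  # swinging outside conditions
        internal += LEAK * (environment - internal)     # disturbance from outside
        internal += GAIN * (SET_POINT - internal)       # corrective, negative feedback
        if t % 10 == 0:
            print(f"t={t:2d}  environment={environment:5.1f}  internal={internal:5.2f}")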
As self-perpetuation gives rise to self-replication, there is a greater need for the exact specification of the self, and the chromosome needs to be understood not only biochemically as a complicated molecule of DNA, but as a genetic code specifying what the new organism is to be like. Once again, the change of emphasis from the particular physical configuration to the general boundary condition, and the looseness of fit between the probabilistic explanations of the underlying physics and the quite different explanations of the emergent discipline allow us to accommodate the new insights without falling into obscurantist obfuscation. 14 Homeostasis also implies sensitivity. If an organism is to be independent of its environment, it must respond to it so as to counteract the changes which the changes in the environment would otherwise bring about within the organism itself: if I am to maintain a constant body temperature, I must sweat when it is hot outside and shiver when it is cold. Even plants must respond to light and to the earth's gravitational field. The greater the independence and the more marked the distinction between the self and the non-self, the greater the awareness the self needs to have of the non-self, and the more it needs to register, so as to be able to offset, untoward changes in the world around it. We are still in the dark as to what exactly consciousness is or how it evolved, but can see in outline why it is needed. A windowless monad cannot survive the changes and chances of this fleeting life---sensitivity to clouds on the horizon no bigger than a man's hand is the price of not being destroyed by unanticipated storms. My interest lies in the end of this line of development. We can give a general characterization of what it is for a system to be able to represent within itself some other system, and so can think of organisms in terms not of biochemistry or evolutionary biology but of information theory and formal logic. And from this point of view we can consider not only consciousness but self-consciousness, and a system that can represent within itself not just some other system but itself as well. There are a whole series of self-reflexive arguments. Popper, a former President of our Society, has devoted much energy to arguing from them to an open universe; in particular, he argues from the impossibility of self-prediction. MacKay argues similarly---other people may predict what I am going to do, but I cannot. 15 Many people, Haldane, Joseph, Malcolm, Mascall, Popper, Price, Wick and others, have been concerned about rationality, and have argued that if determinism or materialism were true, we could not be rationally convinced of it. 16 Reductive metaphysics, which reduces rationality to something else---the movement of physical particles, for example---cannot leave room for the rational arguments which alone could establish its truth. I myself found these arguments intriguing, and indeed, compelling, but extraordinarily difficult to formulate in a cast-iron way. Eventually I came up with an argument based on Gödel's theorem, which is indeed a version of these arguments, and is intended to show in one swoop the failure of any reductionist programme as regards reason. 
I have received much stick for using Gödel's theorem to show that the mind is not a Turing machine, but I am quite impenitent on that score, and believe that the argument goes much further, and shows not only the impossibility of reducing reason to the mere following of rules, but the essential creativity of reason. We can never formalise reason completely or tie it down to any set of canonical forms, for we can always step outside and view all that has been thus far settled from a fresh standpoint. In particular we can find fresh features that seem significant, and seek fresh sorts of explanation of them. It does, I believe, establish the essential openness of the universe, granted only that there is at least one rational agent. If there be rational agents, since we are rational agents, it follows that the course of events in the universe cannot be reduced to a system of things evolving according to a determinate algorithm, but that there are always new opportunities and further possible exercises of rationality. The interplay between things and explanations is illuminating. Instead of starting with things, we are able to identify things only at higher levels of organization, and the higher we go the more thingly properties we find. Atoms have stability (usually), but are qualitatively identical with many others. Organisms have more individuality, and are less commonly clones, but still view their environment if not in terms of chemical similarity nevertheless in terms of fungibility, readily replacing one food supply by another. Nor is it only the environment that organisms regard fungibly: although some birds are faithfully monogamous, many are not, and if one Mrs Blackbird fails to respond to the musical blandishments of her would-be mate, another will serve his reproductive purposes just as well. Human love likewise is not uniformly faithful to the individual ideal, but with human beings we can see this as a derogation from humanity, and can construct a coherent concept of unique individuality, according to which this person is irreducibly himself, and essentially different from anybody else. 17 Our idea of thinghood leads us from the utterly simple and essentially similar atoms of the corpuscularians to infinitely complex and unique persons, each necessarily different from every other. The different ideals of thinghood support different paradigms of explanation. Since different sorts of feature characterize things at different levels, and the features that characterize at the higher levels cannot be completely defined in terms of those that play a part in lower-level explanations, the higher-level explanations cannot be reduced to lower-level ones. As we have seen, a Gaussian curve cannot be defined in terms of a Laplacian explanation, for it essentially involves the notion of an ensemble or Kollectiv. Higher-level systems are not derivable from some fundamental system, but are, instead, autonomous. We cannot predict the exact position or velocity of a sub-atomic entity, but by means of the time-independent Schrödinger equation we can say what properties a hydrogen atom would have if it existed, and we can have good reason for supposing that many such atoms will exist, since they are stable configurations of quantum-mechanical systems. The explanations sought by a chemist are in terms of energy levels and the valency bonds they generate: those sought by the biologist are in terms of the maintenance of life and the continuation of the species. 
And as these explanations differ, so also do the things they are explanations about. Explanations influence what is to count as a thing, and ideas of what it truly is to be a thing influence what questions we ask, and what explanations we seek to discover. 18 We can see this, if we like, as a form of emergent evolutionary development: new levels of being evolve from lower, chemical elements from the flux after the Big Bang, molecules, organisms, consciousness, and self-consciousness, in the fullness of time; but we can also see it in terms of a hierarchy of Platonic forms and explanations, each going beyond the limits of its predecessors, and at the higher levels reaching out to ever new kinds of creative rationality. To summarise, then. The new scientific world-view differs from traditional corpuscularianism in not postulating some ultimate thing-like entities whose motions determine completely the state of the world not only at that time but at all subsequent ones too. Instead of there being particular point-particles, there are only general features, and instead of a rigid determinist law, there are only probabilities, which are, indeed, enough to enable us to make reliable predictions about many aspects of the world, but do not foreclose the possibility of other types of explanation being the best available. Other types of explanation are answers to other types of questions, and it is because we ask different questions that the different sciences are different. These different questions pick on different general features, often different types of boundary condition; and once we acknowledge that there is no metaphysical reason to reduce the generic characterization of boundary conditions typical of other sciences to the paradigm physical terms of Laplacian corpuscularianism, we can accept these other sciences as sciences in their own right, since, metaphysics apart, we have good reason to resist reductionism as applied to questions rather than answers. The abolition of ultimate things thus opens the way to our acknowledging the autonomy of the various sciences. At the same time, the notion of a thing leads us to pick out various types of boundary condition as instantiating, to a greater and greater degree, certain characteristic features of being a thing---permanence, stability, ability to survive adventitious alterations in the environment, and the like. As we follow these through, we find a natural hierarchy of the sciences in which we ask questions about more and more complicated entities, possessing more and more thing-like perfections. Things have gone up market. By an almost Hegelian dialectic our notion of a thing becomes transmuted into that of a substance, and in so far as we remain pluralists at all, we move from the minimal qualitatively identical, though numerically distinct, atoms of the corpuscularians to the infinitely complex, though windowed, monads of a latter-day Leibniz. Whether Lucretius would have been pleased at this outcome of the complex interplay between ontological intimations of existence and rationalist requirements of explicability, I do not know. 
But he could hardly complain at my taking this as my theme, here at an address to the British Society for the Philosophy of Science taking place in the London School of Economics, whose motto is taken from Virgil's description of him, and also expresses the common sentiment of all our members:
Felix qui potuit rerum cognoscere causas
Happy he who understands the explanations of things
1. Michael Redhead, ``A Philosopher Looks at Quantum Field Theory'', in Harvey Brown and Rom Harré, eds., Philosophical Foundations of Quantum Mechanics, Oxford, 1988, p.10.
2. Robin Le Poidevin, Change, Cause and Contradiction, London, 1991, esp. ch.8.
3. C.A.Howson and P.Urbach, Scientific Reasoning: the Bayesian Approach, La Salle, Illinois, USA, 1989, p.19.
4. D.A.Gillies, Objective Probability, Cambridge, 1973, esp. ch.5.
5. Reprinted in S.French and H.Kamminga, eds., Correspondence, Invariance and Heuristics, (Kluwer Academic Publishers, Holland), 1993, p.329.
6. Nancy Cartwright, How the Laws of Physics Lie, Oxford, 1983, ch.2, esp. pp.44-46; Peter Lipton, Inference to the Best Explanation, London, 1991, esp. ch.3; John Worrall, ``The Value of a Fixed Methodology'', British Journal for the Philosophy of Science, 39, 1988.
7. David Papineau, British Journal for the Philosophy of Science, 47, 1991, p.399.
8. I owe this point to H.C.Longuet-Higgins, The Nature of Mind, Edinburgh, 1972, ch.2, pp.16-21, esp. p.19; reprinted in H.C.Longuet-Higgins, Mental Processes, Cambridge, Mass., 1987, ch.2, pp.13-18, esp. p.16. I am also particularly indebted to C.F.A.Pantin, The Relations between the Sciences, Cambridge, 1968; and to A.R.Peacocke, God and the New Biology, London, 1986, and Theology for a Scientific Age, Oxford, 1990. Michael Polanyi emphasized the importance of boundary conditions and their relevance to the different sorts of explanation sought by different disciplines. In his ``Tacit Knowing'', Reviews of Modern Physics, October, 1962, pp.257-259, he cites the example of a steam engine, which, although entirely subject to the laws of chemistry and physics, cannot be explained in terms of those disciplines alone, but must be explained in terms of the function it is capable, in view of its construction, of performing. What is interesting about the steam engine is not the laws of chemistry and physics, but the boundary conditions, which, in view of those laws, make it capable of transforming heat into mechanical energy; it is the province of engineering science, not physics. The example of the steam engine is illuminating in that no question of vitalism arises. See also Michael Polanyi, ``Life Transcending Physics and Chemistry'', Chemical and Engineering News, August 21, 1967, pp.54-66; and ``Life's Irreducible Structure'', Science, 160, 1968, pp.1308-1312.
9. F.H.C.Crick, Of Molecules and Man, University of Washington Press, Seattle and London, 1966, p.10.
10. That the biologist is primarily concerned with boundary conditions of a special type is pointed out by Bernd-Olaf Küppers, Information and the Origin of Life, M.I.T. Press, Cambridge, Mass., U.S.A., 1990, p.163.
11. Compare the distinction drawn by M.Beckner between theory autonomy and process autonomy in his ``Reduction, Hierarchies and Organicism'', in F.J.Ayala and T.Dobzhansky, eds., Studies in the Philosophy of Biology: Reduction and Related Problems, London, 1974, p.170; cited by A.R.Peacocke, God and the New Biology, London, 1986, p.9.
12. Henri Poincaré, Science and Method, tr. F.Maitland, London, 1914, p.68.
13. Nicholas Maxwell, From Knowledge to Wisdom, Oxford, 1984, esp. ch.4.
15. D.M.MacKay, ``On the Logical Indeterminacy of a Free Choice'', Mind, LXIX, 1960, pp.31-40.
16. See K.R.Popper, The Open Universe, ed. W.W.Bartley, III, London, 1982, ch.III, §§23, 24. Popper traces the argument back to Descartes and St Augustine. A further list is given in J.R.Lucas, The Freedom of the Will, Oxford, 1970, p.174. Further arguments and fuller references may be found in Behavioral Sciences, 1990, 13, 4.
17. I argue this in my ``A Mind of One's Own'', Philosophy, October, 1993.
18. Compare A.R.Peacocke, Theology for a Scientific Age, Oxford, 1990, p.41: Because of widely pervasive reductionist presuppositions, there has been a tendency to regard the level of atoms and molecules as alone `real'. However, there are good grounds for not affirming any special priority to the physical and chemical levels of description and for believing that what is real is what the various levels of description actually refer to. There is no sense in which subatomic particles are to be graded as `more real' than, say, a bacterial cell, a human person, or a social fact. Each level has to be regarded as a slice through the totality of reality, in the sense that we have to take account of its mode of operation at that level.
Chemical Principles/Quantum Theory and Atomic Structure
The continuity of all dynamical effects was formerly taken for granted as the basis of all physical theories and, in close correspondence with Aristotle, was condensed in the well-known dogma, Natura non facit saltus: nature makes no leaps. However, present-day investigation has made a considerable breach even in this venerable stronghold of physical science. This time it is the principle of thermodynamics with which that theorem has been brought into collision by new facts, and unless all signs are misleading, the days of its validity are numbered. Nature does indeed seem to make jumps, and very extraordinary ones.
Max Planck (1914)
Physics seemed to be settling down quite satisfactorily in the late nineteenth century. A clerk in the U.S. Patent Office wrote a now-famous letter of resignation in which he expressed a desire to leave a dying agency, an agency that would have less and less to do in the future since most inventions had already been made. In 1894, at the dedication of a physics laboratory in Chicago, the famous physicist A. A. Michelson suggested that the more important physical laws all had been discovered, and "Our future discoveries must be looked for in the sixth decimal place." Thermodynamics, statistical mechanics, and electromagnetic theory had been brilliantly successful in explaining the behavior of matter. Atoms themselves had been found to be electrical, and undoubtedly would follow Maxwell's electromagnetic laws. Then came x rays and radioactivity.
In 1895, Wilhelm Röntgen (1845–1923) evacuated a Crookes tube (Figure 1-11) so the cathode rays struck the anode without being blocked by gas molecules. Röntgen discovered that a new and penetrating form of radiation was emitted by the anode. This radiation, which he called x rays, traveled with ease through paper, wood, and flesh but was absorbed by heavier substances such as bone and metal. Röntgen demonstrated that x rays were not deflected by electric or magnetic fields and therefore were not beams of charged particles. Other scientists suggested that the rays might be electromagnetic radiation like light, but of a shorter wavelength. The German physicist Max von Laue proved this hypothesis 18 years later when he diffracted x rays with crystals. In 1896, Henri Becquerel (1852–1908) observed that uranium salts emitted radiation that penetrated the black paper coverings of photographic plates and exposed the photographic emulsion. He named this behavior radioactivity. In the next few years, Pierre and Marie Curie isolated two entirely new, and radioactive, elements from uranium ore and named them polonium and radium.
Radioactivity, even more than x rays, was a shock to physicists of the time. They gradually realized that radiation occurred during the breakdown of atoms, and that atoms were not indestructible but could decompose and decay into other kinds of atoms. The old certainties, and the hopes for impending certainties, began to fall away. The radiation most commonly observed was of three kinds, designated alpha (α), beta (β), and gamma (γ). Gamma radiation proved to be electromagnetic radiation of even higher frequency (and shorter wavelength) than x rays. Beta rays, like cathode rays, were found to be beams of electrons. Electric and magnetic deflection experiments showed the mass of α radiation to be 4 amu and its charge to be +2; α particles were simply nuclei of helium, He.
The next certainty to slip away was the quite satisfying model of the atom that had been proposed by J. J. Thomson.
Rutherford and The Nuclear Atom
In Thomson's model of the atom all the mass and all the positive charge were distributed uniformly throughout the atom, with electrons embedded in the atom like raisins in a pudding. Mutual repulsion of electrons separated them uniformly. The resulting close association of positive and negative charges was reasonable. Ionization could be explained as a stripping away of some of the electrons from the pudding, thereby leaving a massive, solid atom with a positive charge. In 1910, Ernest Rutherford (1871–1937) disproved the Thomson model, more or less by accident, while measuring the scattering of a beam of α particles by extremely thin sheets of gold and other heavy metals. (His experimental arrangement is shown in Figure 8-1.) He expected to find a relatively small deflection of particles, as would occur if the positive charge and mass of the atoms were distributed throughout a large volume in a uniform way (Figure 8-2a). What he observed was quite different, and wholly unexpected. In his own words:
"In the early days I had observed the scattering of α particles, and Dr. Geiger in my laboratory had examined it in detail. He found in thin pieces of heavy metal that the scattering was usually small, of the order of one degree. One day Geiger came to me and said, 'Don't you think that young Marsden, whom I am training in radioactive methods, ought to begin a small research?' Now I had thought that too, so I said, 'Why not let him see if any α particles can be scattered through a large angle?' I may tell you in confidence that I did not believe they would be, since we knew that the α particle was a very fast massive particle, with a great deal of energy, and you could show that if the scattering was due to the accumulated effect of a number of small scatterings, the chance of an α particle's being scattered backwards was very small. Then I remember two or three days later Geiger coming to me in great excitement and saying, 'We have been able to get some of the α particles coming backwards.' ... It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you."
Rutherford, Geiger, and Marsden calculated that this observed backscattering was precisely what would be expected if virtually all the mass and positive charge of the atom were concentrated in a dense nucleus at the center of the atom (Figure 8-2b). They also calculated the charge on the gold nucleus as 100 ± 20 (actually 79), and the radius of the gold nucleus as something less than 10⁻¹² cm (actually nearer to 10⁻¹³ cm). The picture of the atom that emerged from these scattering experiments was of an extremely dense, positively charged nucleus surrounded by negative charges (electrons). These electrons inhabited a region with a radius 100,000 times that of the nucleus. The majority of the α particles passing through the metal foil were not deflected because they never encountered the nucleus. However, particles passing close to such a great concentration of charge would be deflected; and those few particles that happened to collide with the small target would be bounced back in the direction from which they had come. The validity of Rutherford's model has been borne out by later investigations.
An atom's nucleus is composed of protons and neutrons (Figure 8-3). Just enough electrons are around this nucleus to balance the nuclear charge. But this model of an atom cannot be explained by classical physics. What keeps the positive and negative charges apart? If the electrons were stationary, electrostatic attraction would pull them toward the nucleus to form a miniature version of Thomson's atom. Conversely, if the electrons were moving in orbits around the nucleus, things would be no better. An electron moving in a circle around a positive nucleus is an oscillating dipole when the atom is viewed in the plane of the orbit; the negative charge appears to oscillate up and down relative to the positive charge. By all the laws of classical electromagnetic theory, such an oscillator should broadcast energy as electromagnetic waves. But if this happened, the atom would lose energy, and the electron would spiral into the nucleus. By the laws of classical physics, the Rutherford model of the atom could not be valid. Where was the flaw?
The Quantization of Energy
Other flaws that were just as disturbing as Rutherford's impossibly stable atoms were appearing in physics at this time. By the turn of the century scientists realized that radio waves, infrared, visible light, and ultraviolet radiation (and x rays and γ rays a few years later) were electromagnetic waves with different wavelengths. These waves all travel at the same speed, c, which is 2.9979 × 10⁸ m sec⁻¹ or 186,000 miles sec⁻¹. (This speed seems almost instantaneous until you recall that the slowness of light is responsible for the 1.3-sec delay each way in radio messages between the earth and the moon.) Waves such as these are described by their wavelength (designated by the Greek letter lambda, λ), amplitude, and frequency (designated by the Greek letter nu, ν), which is the number of cycles of a moving wave that passes a given point per unit of time (Figure 8-4). The speed of the wave, c, which is constant for all electromagnetic radiation, is the product of the frequency (the number of cycles per second or hertz, Hz) and the length of each cycle (the wavelength):
c = νλ (8-1)
The reciprocal of the wavelength is called the wave number, ν̄:
ν̄ = 1/λ
Its units are commonly waves per centimeter, or cm⁻¹. The electromagnetic spectrum as we know it is shown in Figure 8-5a. The scale is logarithmic rather than linear in wavelength; that is, it is in increasing powers of 10. On this logarithmic scale, the portion of the electromagnetic radiation that our eyes can see is only a small sector halfway between radio waves and gamma rays. The visible part of the spectrum is shown in Figure 8-5b.
Example 1
Light of wavelength 5000 Å (or 5 × 10⁻⁵ cm) falls in the green region of the visible spectrum. Calculate the wave number, ν̄, corresponding to this wavelength. The wave number is equal to the reciprocal of the wavelength, so
ν̄ = 1/λ = 1/(5 × 10⁻⁵ cm) = 0.2 × 10⁵ cm⁻¹ = 2 × 10⁴ cm⁻¹
The Ultraviolet Catastrophe
Classical physics gave physicists serious trouble even when they used it to try to explain why a red-hot iron bar is red. Solids emit radiation when they are heated. The ideal radiation from a perfect absorber and emitter of radiation is called blackbody radiation. The spectrum, or plot of relative intensity against frequency, of radiation from a red-hot solid is shown in Figure 8-6a. Since most of the radiation is in the red and infrared frequency regions, we see the color of the object as red.
As the temperature is increased, the peak of the spectrum moves to higher frequencies, and we see the hot object as orange, then yellow, and finally white when enough energy is radiated through the entire visible spectrum. The difficulty in this observation is that classical physics predicts that the curve will keep rising to the right rather than falling after a maximum. Thus there should be much more blue and ultraviolet radiation emitted than is actually observed, and all heated objects should appear blue to our eyes. This complete contradiction of theory by facts was called the ultraviolet catastrophe by physicists of the time.
In 1900, Max Planck provided an explanation for this paradox. To do this he had to discard a hallowed tenet of science: that variables in nature change in a continuous way (nature does not make jumps). According to classical physics, light of a certain frequency is emitted because charged objects (atoms or groups of atoms) in a solid vibrate or oscillate with that frequency. We could thus theoretically calculate the intensity curve of the spectrum if we knew the relative number of oscillators that vibrate with each frequency. All frequencies are thought to be possible, and the energy associated with a particular frequency depends only on how many oscillators are vibrating with that frequency. There should be no lack of high-frequency oscillators in the blue and ultraviolet regions. Planck made the revolutionary suggestion that the energy of electromagnetic radiation comes in packages, or quanta. The energy of one package of radiation is proportional to the frequency of the radiation:
E = hν (8-2)
The proportionality constant, h, is known as Planck's constant and has the value 6.6262 × 10⁻³⁴ J sec. By Planck's theory, a group of atoms cannot emit a small amount of energy at a high frequency; high frequencies can be emitted only by oscillators with a large amount of energy, as given by E = hν. The probability of finding oscillators with high frequencies is therefore slight because the probability of finding groups of atoms with such unusually large vibrational energies is low. Instead of rising, the spectral curve falls at high frequencies, as in Figure 8-6. Was Planck's theory correct, or was it only an ad hoc explanation to account for one isolated phenomenon? Science is plagued with theories that explain the phenomenon for which they were invented, and thereafter never explain another phenomenon correctly. Was the idea that electromagnetic energy comes in bundles of fixed energy that is proportional to frequency only another one-shot explanation?
The Photoelectric Effect
Albert Einstein (1879–1955) provided another example of the quantization of energy, in 1905, when he successfully explained the photoelectric effect, in which light striking a metal surface can cause electrons to be given off. (Photocells in automatic doors use the photoelectric effect to generate the electrons that operate the door-opening circuits.) For a given metal there is a minimum frequency of light below which no electrons are emitted, no matter how intense the beam of light. To classical physicists it seemed nonsensical that for some metals the most intense beam of red light could not drive off electrons that could be ejected by a faint beam of blue light. Einstein showed that Planck's hypothesis explained such phenomena beautifully. The energy of the quanta of light striking the metal, he said, is greater for blue light than for red.
As an analogy, imagine that the low-frequency red light is a beam of Ping-Pong balls and that the high-frequency blue light is a beam of steel balls with the same velocity. Each impact of a quantum of energy of red light is too small to dislodge an electron; in our analogy, a steady stream of Ping-Pong balls cannot do what one rapidly moving steel ball can. These quanta of light were named photons. Because of the successful explanation of both the blackbody and photoelectric effects, physicists began recognizing that light behaves like particles as well as like waves.
Example 2
Consider once again the green light in Example 1. The relationship E = hν allows us to calculate the energy of one green photon. What is this energy in kilojoules? In kilojoules per mole of green photons? Let us assume that we know the wavelength to two significant digits, 5.0 × 10⁻⁵ cm. The frequency, ν, of this green light is
c = λν
3.0 × 10¹⁰ cm sec⁻¹ = (5.0 × 10⁻⁵ cm) ν
ν = (3.0 × 10¹⁰ cm sec⁻¹)/(5.0 × 10⁻⁵ cm) = 0.60 × 10¹⁵ sec⁻¹ (or 0.60 × 10¹⁵ Hz)
The energy of one green photon, then, is
E = hν = (6.63 × 10⁻³⁴ J sec)(0.60 × 10¹⁵ sec⁻¹) = 4.0 × 10⁻¹⁹ J, or 4.0 × 10⁻²² kJ
This is the energy of one green photon. To obtain the energy of a mole of green photons, we must multiply by Avogadro's number:
E = (4.0 × 10⁻²² kJ photon⁻¹)(6.02 × 10²³ photons mole⁻¹) = 2.4 × 10² kJ mole⁻¹
The Spectrum of the Hydrogen Atom
The most striking example of the quantization of light, to a chemist, appears in the search for an explanation of atomic spectra. Isaac Newton (1642–1727) was one of the first scientists to demonstrate with a prism that white light is a spectrum of many colors, from red at one end to violet at the other. We know now that the electromagnetic spectrum continues on both sides of the small region to which our eyes are sensitive; it includes the infrared at low frequencies and the ultraviolet at high frequencies. All atoms and molecules absorb light of certain characteristic frequencies. The pattern of absorption frequencies is called an absorption spectrum and is an identifying property of any particular atom or molecule. The absorption spectrum of hydrogen atoms is shown in Figure 8-7. The lowest-energy absorption corresponds to the line at 82,259 cm⁻¹. Notice that the absorption lines are crowded closer together as the limit of 109,678 cm⁻¹ is approached. Above this limit absorption is continuous.
If atoms and molecules are heated to high temperatures, they emit light of certain frequencies. For example, hydrogen atoms emit red light when they are heated. An atom that possesses excess energy (e.g., an atom that has been heated) emits light in a pattern known as its emission spectrum. A portion of the emission spectrum of atomic hydrogen is shown in Figure 8-8. Note that the lines occur at the same wave numbers in the two types of spectra. If we look more closely at the emission spectrum in Figure 8-8, we see that there are three distinct groups of lines. These three groups or series are named after the scientists who discovered them. The series that starts at 82,259 cm⁻¹ and continues to 109,678 cm⁻¹ is called the Lyman series and is in the ultraviolet portion of the spectrum. The series that starts at 15,233 cm⁻¹ and continues to 27,420 cm⁻¹ is called the Balmer series and covers a large portion of the visible and a small part of the ultraviolet spectrum. The lines between 5332 cm⁻¹ and 12,186 cm⁻¹ are called the Paschen series and fall in the near-infrared region. The Balmer spectra of hydrogen from several stars are shown in Figure 8-9.
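The arithmetic in Examples 1 and 2 is easy to check numerically. A minimal Python sketch, using only the wavelength and constants quoted above (the variable names are mine, not the text's):

    # Wave number, frequency, and photon energy for 5000 Å (5.0e-5 cm) green light,
    # mirroring Examples 1 and 2.
    h = 6.6262e-34        # Planck's constant, J sec
    c_cm = 2.9979e10      # speed of light, cm/sec
    N_A = 6.022e23        # Avogadro's number, photons/mole

    wavelength_cm = 5.0e-5
    wavenumber = 1.0 / wavelength_cm            # 2.0e4 cm^-1
    frequency = c_cm / wavelength_cm            # ~6.0e14 sec^-1
    E_photon_J = h * frequency                  # ~4.0e-19 J per photon
    E_mole_kJ = E_photon_J * N_A / 1000.0       # ~2.4e2 kJ per mole of photons

    print(wavenumber, frequency, E_photon_J, E_mole_kJ)

Run as written, this reproduces, to within rounding, the values worked out by hand in the two examples.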
J. J. Balmer proved, in 1885, that the wave numbers of the lines in the Balmer spectrum of the hydrogen atom are given by the empirical relationship
ν̄ = RH (1/2² − 1/n²)    n = 3, 4, 5, . . . (8-3)
Later, Johannes Rydberg formulated a general expression that gives all of the line positions. This expression, called the Rydberg equation, is
ν̄ = RH (1/n1² − 1/n2²) (8-4)
In the Rydberg equation n1 and n2 are integers, with n2 greater than n1; RH is called the Rydberg constant and is known accurately from experiment to be 109,677.581 cm⁻¹.
Example 3
Calculate ν̄ for the lines with n1 = 1 and n2 = 2, 3, and 4.
n1 = 1, n2 = 2 line: ν̄ = 109,677.581 cm⁻¹ × (1/1² − 1/2²) = 82,258.2 cm⁻¹
n1 = 1, n2 = 3 line: ν̄ = 109,677.581 cm⁻¹ × (1/1² − 1/3²) = 97,491.2 cm⁻¹
n1 = 1, n2 = 4 line: ν̄ = 109,677.581 cm⁻¹ × (1/1² − 1/4²) = 102,822.7 cm⁻¹
We see that the wave numbers obtained in Example 3 correspond to the first three lines in the Lyman series. Thus we expect that the Lyman series corresponds to lines calculated with n1 = 1 and n2 = 2, 3, 4, 5, . . . . We can check this by calculating the wave number for the line with n1 = 1 and n2 = ∞.
n1 = 1, n2 = ∞ line: ν̄ = 109,677.581 cm⁻¹ × (1/1² − 0) = 109,678 cm⁻¹
The wave number 109,678 cm⁻¹ corresponds to the highest emission line in the Lyman series. The wave number for n1 = 2 and n2 = 3 is
ν̄ = 109,677.581 cm⁻¹ × (1/2² − 1/3²) = 15,233 cm⁻¹
This corresponds to the first line in the Balmer series. Thus, the Balmer series corresponds to the n1 = 2, n2 = 3, 4, 5, 6, . . . lines. You probably would expect the lines in the Paschen series to correspond to n1 = 3, n2 = 4, 5, 6, 7, . . . . They do. Now you should wonder where the lines are with n1 = 4, n2 = 5, 6, 7, 8, . . . , and n1 = 5, n2 = 6, 7, 8, 9, . . . . They are exactly where the Rydberg equation predicts they should be. The n = 4 series was discovered by Brackett and the n = 5 series was discovered by Pfund. The series with n = 6 and higher are located at very low frequencies and are not given special names.
The Rydberg formula, equation 8-4, is a summary of observed facts about hydrogen atomic spectra. It states that the wave number of a spectral line is the difference between two numbers, each inversely proportional to the square of an integer. If we draw a set of horizontal lines at a distance RH/n² down from a baseline, with n = 1, 2, 3, 4, . . ., then each spectral line in any of the hydrogen series is observed to correspond to the distance between two such horizontal lines in the diagram (Figure 8-10). The Lyman series occurs between line n = 1 and those above it; the Balmer series occurs between line n = 2 and those above it; the Paschen series occurs between line n = 3 and those above it; and the higher series are based on lines n = 4, 5, and so on. Is the agreement between this simple diagram and the observed wave numbers of spectral lines only a coincidence? Does the idea of a wave number of an emitted line being the difference between two "wave-number levels" have any physical significance, or is this just a convenient graphical representation of the Rydberg equation?
Bohr's Theory of the Hydrogen Atom
In 1913, Niels Bohr (1885–1962) proposed a theory of the hydrogen atom that, in one blow, did away with the problem of Rutherford's unstable atom and gave a perfect explanation of the spectra we have just discussed. There are two ways of proposing a new theory in science, and Bohr's work illustrates the less obvious one. One way is to amass such an amount of data that the new theory becomes obvious and self-evident to any observer. The theory then is almost a summary of the data. This is essentially the way Dalton reasoned from combining weights to atoms.
The other way is to make a bold new assertion that initially does not seem to follow from the data, and then to demonstrate that the consequences of this assertion, when worked out, explain many observations. With this method, a theorist says, "You may not see why, yet, but please suspend judgment on my hypothesis until I show you what I can do with it." Bohr's theory is of this type. Bohr answered the question of why the electron does not spiral into the nucleus by simply postulating that it does not. In effect, he said to classical physicists: "You have been misled by your physics to expect that the electron would radiate energy and spiral into the nucleus. Let us assume that it does not, and see if we can account for more observations than by assuming that it does." The observations that he explained so well are the wavelengths of lines in the atomic spectrum of hydrogen.
Bohr's model of the hydrogen atom is illustrated in Figure 8-11: an electron of mass me moving in a circular orbit at a distance r from a nucleus. If the electron has a velocity of v, it will have an angular momentum of mevr. (To appreciate what angular momentum is, think of an ice skater spinning on one blade like a top. The skater begins spinning with his arms extended. As he brings his arms to his sides, he spins faster and faster. This is because, in the absence of any external forces, angular momentum is conserved. As the mass of the skater's arms comes closer to the axis of rotation, or as r decreases, the velocity of his arms must increase in order that the product mvr remain constant.) Bohr postulated, as the first basic assumption of his theory, that in a hydrogen atom there could only be orbits for which the angular momentum is an integral multiple of Planck's constant divided by 2π:
mevr = n(h/2π)
There is no obvious justification for such an assumption; it will be accepted only if it leads to the successful explanation of other phenomena. Bohr then showed that, with no more new assumptions, and with the laws of classical mechanics and electrostatics, his principle leads to the restriction of the energy of an electron in a hydrogen atom to the values
E = −k/n²    n = 1, 2, 3, 4, . . . (8-5)
The integer n is the same integer as in the angular momentum assumption, mevr = n(h/2π); k is a constant that depends only on Planck's constant, h, the mass of an electron, me, and the charge on an electron, e:
k = 13.595 electron volts (eV)* atom⁻¹ = 1312 kJ mole⁻¹
The radius of the electron's orbit also is determined by the integer n:
r = n²a0 (8-6)
The constant, a0, is called the first Bohr radius and is given in Bohr's theory by
a0 = 0.529 Å
The first Bohr radius is often used as a measure of length called the atomic unit, a.u. The energy that an electron in a hydrogen atom can have is quantized, or limited to certain values, by equation 8-5. The integer, n, that determines these energy values is called the quantum number. When an electron is removed (ionized) from an atom, that electron is described as excited to the quantum state n = ∞. From equation 8-5, we see that as n approaches ∞, E approaches zero. Thus, the energy of a completely ionized electron has been chosen as the zero energy level. Because energy is required to remove an electron from an atom, an electron that is bound to an atom must have less energy than this, and hence a negative energy. The relative sizes of the first five hydrogen-atom orbits are compared in Figure 8-12.
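The energy and radius formulas just given are easy to tabulate. A minimal Python sketch, using the k = 1312 kJ mole⁻¹ and a0 = 0.529 Å values from the text; Example 4 below works the n = 1 and n = 2 cases by hand:

    # Bohr-model energies (relative to the ionized atom) and orbit radii
    # for the first few quantum numbers.
    k_kJ_per_mol = 1312.0     # from eq. 8-5
    a0_angstrom = 0.529       # first Bohr radius, eq. 8-6

    for n in range(1, 6):
        E_n = -k_kJ_per_mol / n**2   # kJ/mole
        r_n = n**2 * a0_angstrom     # Å
        print(n, round(E_n, 1), round(r_n, 2))
    # n = 1: E = -1312.0 kJ/mole, r = 0.53 Å
    # n = 2: E = -328.0 kJ/mole,  r = 2.12 Å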
Example 4
For a hydrogen atom, what is the energy, relative to the ionized atom, of the ground state, for which n = 1? How far is the electron from the nucleus in this state? What are the energy and radius of orbit of an electron in the first excited state, for which n = 2? The answers are
E1 = −1312 kJ mole⁻¹/1² = −1312 kJ mole⁻¹
E2 = −1312 kJ mole⁻¹/2² = −328.0 kJ mole⁻¹
r1 = 1² × 0.529 Å = 0.529 Å
r2 = 2² × 0.529 Å = 2.12 Å
* An electron volt is equal to the amount of energy an electron gains as it passes from a point of low potential to a point one volt higher in potential (1 eV = 1.6022 × 10⁻¹⁹ J).
Example 5
Using the Bohr theory, calculate the ionization energy of the hydrogen atom. The ionization energy, IE, is that energy required to remove the electron, or to go from quantum state n = 1 to n = ∞. This energy is
IE = E∞ − E1 = 0.00 − (−1312 kJ mole⁻¹) = +1312 kJ mole⁻¹
Example 6
Diagram the energies available to the hydrogen atom as a series of horizontal lines. Plot the energies in units of k for simplicity. Include at least the first eight quantum levels and the ionization limit. Compare your result with Figures 8-10 and 8-13. Try this one yourself.
In the second part of his theory, Bohr postulated that absorption and emission of energy occur when an electron moves from one quantum state to another. The energy emitted when an electron drops from state n2 to a lower quantum state n1 is the difference between the energies of the two states:
ΔE = E1 − E2 = −k(1/n1² − 1/n2²) (8-7)
The light emitted is assumed to be quantized in exactly the way predicted from the blackbody and photoelectric experiments of Planck and Einstein:
|ΔE| = hν = hcν̄ (8-8)
If we divide equation 8-7 by hc to convert from energy to wave number units, we obtain the Rydberg equation, with RH = k/hc. Recall that the experimental value of RH is 109,677.581 cm⁻¹. The graphic representation of the Rydberg equation, Figure 8-10, now is seen to be an energy-level diagram of the possible quantum states of the hydrogen atom. We can see why light is absorbed or emitted only at specific wave numbers. The absorption of light, or the heating of a gas, provides the energy for an electron to move to a higher orbit. Then the excited hydrogen atom can emit energy in the form of light quanta when the electron falls back to a lower-energy orbit. From this emission come the different series of spectral lines:
1. The Lyman series of lines arises from transitions from the n = 2, 3, 4, . . . levels to the ground state (n = 1).
2. The Balmer series arises from transitions from the n = 3, 4, 5, . . . levels to the n = 2 level.
3. The Paschen series arises from transitions from the n = 4, 5, 6, . . . levels to the n = 3 level.
An excited hydrogen atom in quantum state n = 8 may drop directly to the ground state and emit a photon in the Lyman series, or it may drop first to n = 3, emit a photon in the Paschen series, and then drop to n = 1 and emit a photon in the Lyman series. The frequency of each photon depends on the energy difference between levels:
ΔE = Ea − Eb = hν
By cascading down the energy levels, the electron in one excited hydrogen atom can successively emit photons in several series. Therefore, all series are present in the emission spectrum from hot hydrogen. However, when measuring the absorption spectrum of hydrogen gas at lower temperatures we find virtually all the hydrogen atoms in the ground state. Therefore, almost all the absorption will involve transitions from n = 1 to higher states, and only the Lyman series will be observed.
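Equation 8-4 (equivalently, the Bohr energy differences divided by hc) reproduces the series positions quoted earlier in the chapter. A short Python sketch, assuming only the RH value given in the text; the function name is mine:

    # First line and series limit (n2 -> infinity) of the Lyman, Balmer,
    # and Paschen series from the Rydberg equation, in cm^-1.
    R_H = 109677.581

    def rydberg_wavenumber(n1, n2):
        return R_H * (1.0 / n1**2 - 1.0 / n2**2)

    for name, n1 in (("Lyman", 1), ("Balmer", 2), ("Paschen", 3)):
        first_line = rydberg_wavenumber(n1, n1 + 1)   # lowest-energy line
        series_limit = R_H / n1**2
        print(f"{name}: {first_line:.0f} to {series_limit:.0f} cm^-1")
    # Lyman: 82258 to 109678 cm^-1
    # Balmer: 15233 to 27419 cm^-1
    # Paschen: 5332 to 12186 cm^-1

These agree, to within a wave number, with the 82,259 to 109,678, 15,233 to 27,420, and 5332 to 12,186 cm⁻¹ ranges quoted for the three series.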
Energy Levels of a General One-Electron Atom
Bohr's theory can also be used to calculate the ionization energy and spectral lines of any atomic species possessing only one electron (e.g., He⁺, Li²⁺, Be³⁺). The energy of a Bohr orbit depends on the square of the charge on the atomic nucleus (Z is the atomic number):
E = −(Z²/n²)k    k = 13.595 eV or 1312 kJ mole⁻¹
The equation reduces to equation 8-5 in the case of atomic hydrogen (Z = 1).
Example 7
Calculate the third ionization energy of a lithium atom. A lithium atom is composed of a nucleus of charge +3 (Z = 3) and three electrons. The first ionization energy, IE1, of an atom with more than one electron is the energy required to remove one electron. For lithium,
Li(g) → Li⁺(g) + e⁻    ΔE = IE1
The energy needed to remove an electron from the unipositive ion, Li⁺, is defined as the second ionization energy, IE2, of lithium,
Li⁺(g) → Li²⁺(g) + e⁻    ΔE = IE2
and the third ionization energy, IE3, of lithium is the energy required to remove the one remaining electron from Li²⁺. For lithium, Z = 3 and IE3 = (3)²(13.595 eV) = 122.36 eV. (The experimental value is 122.45 eV.)
The Need for a Better Theory
The Bohr theory of the hydrogen atom suffered from a fatal weakness: It explained nothing except the hydrogen atom and any other combination of a nucleus and one electron. For example, it could account for the spectra of He⁺ and Li²⁺, but it did not provide a general explanation for atomic spectra. Even the alkali metals (Li, Na, K, Rb, Cs), which have a single valence electron outside a closed shell of inner electrons, produce spectra that are at variance with the Bohr theory. The lines observed in the spectrum of Li could be accounted for only by assuming that each of the Bohr levels beyond the first was really a collection of levels of different energies, as in Figure 8-13: two levels for n = 2, three levels for n = 3, four for n = 4, and so on. The levels for a specific n were given letter symbols based on the appearance of the spectra involving these levels: s for "sharp," p for "principal," d for "diffuse," and f for "fundamental." Arnold Sommerfeld (1868–1951) proposed an ingenious way of saving the Bohr theory. He suggested that orbits might be elliptical as well as circular. Furthermore, he explained the differences in stability of levels with the same principal quantum number, n, in terms of the ability of the highly elliptical orbits to bring the electron closer to the nucleus (Figure 8-14). For a point nucleus of charge +1, as in hydrogen, the energies of all levels with the same n would be identical. But for a nucleus of charge +3 screened by an inner shell of two electrons, as in Li, an electron in an outer circular orbit would experience a net attraction of +1, whereas one in a highly elliptical orbit would penetrate the screening shell and feel a charge approaching +3 for part of its traverse. Thus, the highly elliptical orbits would have the greatest additional stability, as illustrated in Figure 8-13. The s orbits, being the most elliptical of all in Sommerfeld's model, would be much more stable than the others in the set with a common n. The Sommerfeld scheme led no further than the alkali metals. Again an impasse was reached, and an entirely fresh approach was needed.
Particles of Light and Waves of Matter
At the beginning of the twentieth century, scientists generally believed that all physical phenomena could be divided into two distinct and exclusive classes.
The first class included all phenomena that could be described by laws of classical, or Newtonian, mechanics of motion of discrete particles. The second class included all phenomena showing the continuous properties of waves. One outstanding property of matter, apparent since the time of Dalton, is that it is built of discrete particles. Most material substances appear to be continuous: water, mercury, salt crystals, gases. But if our eyes could see the nuclei and electrons that constitute atoms, and the fundamental particles that make up nuclei, we would discover quickly that every material substance in the universe is composed of a certain number of these basic units and therefore is quantized. Objects appear continuous only because of the minuteness of the individual units. In contrast, light was considered to be a collection of waves traveling through space at a constant speed; any combination of energies and frequencies was possible. However, Planck, Einstein, and Bohr showed that light, when observed under the right conditions, also behaves as though it occurs in particles, or quanta.
In 1924, the French physicist Louis de Broglie (b. 1892) advanced the complementary hypothesis that all matter possesses wave properties. De Broglie pondered the Bohr atom, and asked himself where, in nature, quantization of energy occurs most naturally. An obvious answer is in the vibration of a string with fixed ends. A violin string can vibrate with only a selected set of frequencies: a fundamental tone with the entire string vibrating as a unit, and overtones of shorter wavelengths. A wavelength in which the vibration fails to come to a node (a place of zero amplitude) at both ends of the string would be an impossible mode of vibration (Figure 8-15). The vibration of a string with fixed ends is quantized by the boundary conditions that the ends cannot move.
Can the idea of standing waves be carried over to the theory of the Bohr atom? Standing waves in a circular orbit can exist only if the circumference of the orbit is an integral number of wavelengths (Figure 8-15c, d). If it is not, waves from successive turns around the orbit will be out of phase and will cancel. The value of the wave amplitude at 10° around the orbit from a chosen point will not be the same as at 370° or 730°, yet all these represent the same point in the orbit. Such ill-behaved waves are not single-valued at any point on the orbit: Single-valuedness is a boundary condition on acceptable waves. For single-valued standing waves around the orbit, the circumference is an integer, n, times the wavelength:
2πr = nλ
But from Bohr's original assumption about angular momentum, mevr = n(h/2π), so
2πr = nh/(mev)
Therefore, the idea of standing waves leads to the following relationship between the mass of the electron, me, its velocity, v, and its wavelength, λ:
λ = h/(mev) (8-10)
De Broglie proposed this relationship as a general one. With every particle, he said, there is associated a wave. The wavelength depends on the mass of the particle and how fast it is moving. If this is so, the same sort of diffraction from crystals that von Laue observed with x rays should be produced with electrons. In 1927, C. Davisson and L. H. Germer demonstrated that metal foils diffract a beam of electrons exactly as they diffract an x-ray beam, and that the wavelength of a beam of electrons is given correctly by de Broglie's relationship (Figure 8-16). Electron diffraction is now a standard technique for determining molecular structure.
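The de Broglie relationship (equation 8-10) is a one-line computation. A minimal Python sketch, with the electron speed taken from Example 8 below and the baseball mass and speed from Examples 9 and 10 (the function name is mine):

    # de Broglie wavelength, lambda = h / (m v), in metres.
    h = 6.6262e-34   # Planck's constant, J sec

    def de_broglie_wavelength(mass_kg, speed_m_per_s):
        return h / (mass_kg * speed_m_per_s)

    m_electron = 9.110e-31                                      # kg
    lam_electron = de_broglie_wavelength(m_electron, 1.186e8)   # 40-kV electron
    lam_baseball = de_broglie_wavelength(0.200, 30.0)           # 200-g baseball

    print(lam_electron * 1e10, "Å")   # ~0.061 Å
    print(lam_baseball * 1e10, "Å")   # ~1.1e-24 Å

The more than twenty orders of magnitude between the two wavelengths are why diffraction is routine for electrons and undetectable for baseballs.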
Example 8
A typical electron diffraction experiment is conducted with electrons accelerated through a potential drop of 40,000 volts, or with 40,000 eV of energy. What is the wavelength of the electrons? First convert the energy, E, from electron volts to joules:
E = 40,000 eV = 6.409 × 10⁻¹⁵ J
(This and several other useful conversion factors, plus a table of the values of frequently used physical constants, are in Appendix 2.) Since the energy is E = ½mev², the velocity of the electrons is
v = √(2E/me) = √(1.407 × 10¹⁶ m² sec⁻²) = 1.186 × 10⁸ m sec⁻¹
(In the expression E = ½mev², if the mass is in kilograms and the velocity is in m sec⁻¹, then the energy is in joules: 1 J equals 1 kg m² sec⁻² of energy. We used this conversion of units in the preceding step. The mass of the electron, me = 9.110 × 10⁻³¹ kg, is found in Appendix 2.) The momentum of the electron, mev, is
mev = 9.110 × 10⁻³¹ kg × 1.186 × 10⁸ m sec⁻¹ = 10.80 × 10⁻²³ kg m sec⁻¹
Finally, the wavelength of the electron is obtained from the de Broglie relationship:
λ = h/(mev) = (6.6262 × 10⁻³⁴ J sec)/(10.80 × 10⁻²³ kg m sec⁻¹) = 0.06130 × 10⁻¹⁰ m = 0.06130 Å
So 40-kilovolt (kV) electrons produce the diffraction effects expected from waves with a wavelength of six-hundredths of an angstrom.
Such calculations are all very well, but the question remains: Are electrons waves or are they particles? Are light rays waves or particles? Scientists worried about these questions for years, until they gradually realized that they were arguing about language and not about science. Most things in our everyday experience behave either as what we would call waves or as what we would call particles, and we have created idealized categories and used the words wave and particle to identify them. The behavior of matter as small as electrons cannot be described accurately by these large-scale categories. Electrons, protons, neutrons, and photons are not waves, and they are not particles. Sometimes they act as if they were what we commonly call waves, and in other circumstances they act as if they were what we call particles. But to demand, "Is an electron a wave or a particle?" is pointless. This wave-particle duality is present in all objects; it is only because of the scale of certain objects that one behavior predominates and the other is negligible. For example, a thrown baseball has wave properties, but a wavelength so short we cannot detect it.
Example 9
A 200-g baseball is thrown with a speed of 30 m sec⁻¹. Calculate its de Broglie wavelength. The answer is λ = 1.1 × 10⁻³⁴ m = 1.1 × 10⁻²⁴ Å.
Example 10
How fast (or rather, how slowly) must a 200-g baseball travel to have the same de Broglie wavelength as a 40-kV electron? The wavelength of a 40-kV electron is 0.0613 Å.
v = h/(mλ) = (6.6262 × 10⁻³⁴ J sec)/(0.200 kg × 0.0613 × 10⁻¹⁰ m) = 0.540 × 10⁻²¹ m sec⁻¹ = 1.70 × 10⁻⁴ Å year⁻¹
Such a baseball would take over 10,000 years to travel the length of a carbon-carbon bond, 1.54 Å. This sort of motion is completely outside our experience with baseballs; thus we never regard baseballs as having wave properties.
The Uncertainty Principle
One of the most important consequences of the dual nature of matter is the uncertainty principle, proposed in 1927 by Werner Heisenberg (1901–1976). This principle states that you cannot know simultaneously both the position and the momentum of any particle with absolute accuracy.
The product of the uncertainty in position, Δx, and in momentum, Δ(mvx), will be equal to or greater than Planck's constant divided by 4π:
[Δx][Δ(mvx)] ≥ h/4π (8-11)
We can understand this principle by considering how we determine the position of a particle. If the particle is large, we can touch it without disturbing it seriously. If the particle is small, a more delicate means of locating it is to shine a beam of light on it and observe the scattered rays. Yet light acts as if it were made of particles (photons) with energy proportional to frequency: E = hν. When we illuminate the object, we are pouring energy on it. If the object is large, it will become warmer; if the object is small enough, it will be pushed away and its momentum will become uncertain. The least interference that we can cause is to bounce a single photon off the object and watch where the photon goes. Now we are caught in a dilemma. The detail in an image of an object depends on the fineness of the wavelength of the light used to observe the object. (The shorter the wavelength, the more detailed the image.) But if we want to avoid altering the momentum of the atom, we have to use a low-energy photon. However, the wavelength of the low-energy photon would be so long that the position of the atom would be unclear. Conversely, if we try to locate the atom accurately by using a short-wavelength photon, the energy of the photon sends the atom ricocheting away with an uncertain momentum (Figure 8-17). We can design an experiment to obtain an accurate value of either an atom's momentum or its position, but the product of the errors in these quantities is limited by equation 8-11.
Example 11
Suppose that we want to locate an electron whose velocity is 1.00 × 10⁶ m sec⁻¹ by using a beam of green light whose frequency is 0.600 × 10¹⁵ sec⁻¹. How does the energy of one photon of such light compare with the energy of the electron to be located? The energy of the electron is
E = ½mev² = ½(9.110 × 10⁻³¹ kg)(1.00 × 10⁶ m sec⁻¹)² = 4.56 × 10⁻¹⁹ J
But the energy of the photon is almost as large:
Ep = hν = 6.6262 × 10⁻³⁴ J sec × 0.600 × 10¹⁵ sec⁻¹ = 3.97 × 10⁻¹⁹ J
Finding the position and momentum of such an electron with green light is as questionable a procedure as finding the position and momentum of one billiard ball by striking it with another. In either case, you detect the particle at the price of disturbing its momentum. As a final difficulty, green light is a hopelessly coarse yardstick for finding objects of atomic dimensions. An atom is about 1 Å in radius, whereas the wavelength of green light is around 5000 Å. Shorter wavelengths make the energy quandary worse.
We do not see the uncertainty limitations in large objects because of the sizes of the masses and velocities involved. Compare the following two problems.
Example 12
An electron is moving with a velocity of 10⁶ m sec⁻¹. Assume that we can measure its position to 0.01 Å, or 1% of a typical atomic radius. Compare the uncertainty in its momentum, p, with the momentum of the electron itself. The uncertainty in position is Δx ≅ 0.01 Å = 0.01 × 10⁻¹⁰ m. The momentum of the electron is approximately
p = mev ≅ 10⁻³⁰ kg × 10⁶ m sec⁻¹ = 10⁻²⁴ kg m sec⁻¹
By the Heisenberg uncertainty principle, the uncertainty in the knowledge of the momentum is
Δp = h/(4πΔx) ≅ 0.5 × 10⁻²² kg m sec⁻¹
The uncertainty in the momentum of the electron is 50 times as great as the momentum itself!
Example 13
A baseball of mass 200 g is moving with a velocity of 30 m sec⁻¹.
If we can locate the baseball with an error equal in magnitude to the wavelength of light used (e.g., 5000 Å), how will the uncertainty in momentum compare with the total momentum of the baseball? The momentum, p, of the baseball is 6 kg m sec⁻¹, and Δp = 1 × 10⁻²⁸ kg m sec⁻¹. The intrinsic uncertainty in the momentum is only one part in 10²⁸, far below any possibility of detection in an experiment.
Wave Equations
In 1926, Erwin Schrödinger (1887–1961) proposed a general wave equation for a particle. The mathematics of the Schrödinger equation is beyond us, but the mode of attack, or the strategy of finding its solution, is not. If you can see how physicists go about solving the Schrödinger equation, even though you cannot solve it yourself, then quantization and quantum numbers may be a little less mysterious. This section is an attempt to explain the method of solving a differential equation of motion* of the type that we encounter in quantum mechanics. We shall explain the strategy with the simpler analogy of the equation of a vibrating string. The de Broglie wave relationship and the Heisenberg uncertainty principle should prepare you for the two main features of quantum mechanics that contrast it with classical mechanics:
1. Information about a particle is obtained by solving an equation for a wave.
2. The information obtained about the particle is not its position; rather, it is the probability of finding the particle in a given region of space.
We can't say whether an electron is in a certain place around an atom, but we can measure the probability that it is there rather than somewhere else. Wave equations are familiar in mechanics. For instance, the problem of the vibration of a violin string is solved in three steps:
1. Set up the equation of motion of a vibrating string. This equation will involve the displacement or amplitude of vibration, A(x), as a function of position along the string, x.
2. Solve the differential equation to obtain a general expression for amplitude. For a vibrating string with fixed ends, this general expression is a sine wave. As yet, there are no restrictions on wavelength or frequency of vibration.
3. Eliminate all solutions to the equation except those that leave the ends of the string stationary. This restriction on acceptable solutions of the wave equation is a boundary condition. Figure 8-15a shows solutions that fit this boundary condition of fixed ends of the string; Figure 8-15b shows solutions that fail. The only acceptable vibrations are those with λ = 2a/n, or ν̄ = n/2a, in which n = 1, 2, 3, 4, ....
The boundary conditions and not the wave equation are responsible for the quantization of the wavelengths of string vibration.
*Equations of motion are always differential equations because they relate the change in one quantity to the change in another, such as change in position to change in time.
Exactly the same procedure is followed in quantum mechanics:
1. Set up a general wave equation for a particle. The Schrödinger equation is written in terms of the function ψ(x, y, z) (where ψ is the Greek letter psi), which is analogous to the amplitude, A(x), in our violin-string analogy. The square of this amplitude, |ψ|², is the relative probability density of the particle at position (x, y, z). That is, if a small element of volume, dv, is located at (x, y, z), the probability of finding an electron within that element of volume is |ψ|² dv.
2. Solve the Schrödinger equation to obtain the most general expression for ψ(x, y, z).
3. Apply the boundary conditions of the particular physical situation. If the particle is an electron in an atom, the boundary conditions are that |ψ|² must be continuous, single-valued, and finite everywhere.
All these conditions are only common sense. First, probability functions do not fluctuate radically from one place to another; the probability of finding an electron a few thousandths of an angstrom from a given position will not be radically different from the probability at the original position. Second, the probability of finding an electron in a given place cannot have two different values simultaneously. Third, since the probability of finding an electron somewhere must be 100%, or 1.000, if the electron really exists, the probability at any one point cannot be infinite.
We now shall compare the wave equation for a vibrating string and the Schrödinger wave equation for a particle. In this text you will not be expected to do anything with either equation, but you should note the similarities between them.
Vibrating string. The amplitude of vibration at a distance x along the string is A(x). The differential equation of motion is
d²A(x)/dx² + 4π²ν̄²A(x) = 0 (8-12)
The general solution to this equation is a sine function, and the only acceptable solutions (Figure 8-15a) are those for which ν̄ = n/2a, where n = 1, 2, 3, 4, ..., and for which the phase shift, α, is zero:
An(x) = sin(2πν̄x) = sin(nπx/a)
Schrödinger equation. The square of the amplitude, |ψ(x, y, z)|², is the probability density of the particle at (x, y, z). The differential equation is
−(h²/8π²me)(∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z²) + Vψ = Eψ (8-13)
V is the potential energy function at (x, y, z), and me is the mass of the electron. Although solving equation 8-13 is not a simple process, it is purely a mathematical operation; there is nothing in the least mysterious about it. The energy, E, is the variable that is restricted or quantized by the boundary conditions on |ψ|². Our next task is to determine what the possible energy states are.
The Hydrogen Atom
The sine function that is the solution of the equation for the vibrating string is characterized by one integral quantum number: n = 1, 2, 3, 4, . . . . The first few acceptable sine functions are
A1(x) = sin(πx/a), A2(x) = sin(2πx/a), A3(x) = sin(3πx/a), A4(x) = sin(4πx/a)
These are the first four curves in Figure 8-15a. An atom is three-dimensional, whereas the string has only length. The solutions of the Schrödinger equation for the hydrogen atom are characterized by three integer quantum numbers: n, l, and m. These arise when solving the equation for the wave function, Ψ, which is analogous to the function An(x) in the vibrating string analogy. In solving the Schrödinger equation, we divide it into three parts. The solution of the radial part describes how the wave function, Ψ, varies with distance from the center of the atom. If we borrow the customary coordinate system of the earth, an azimuthal part produces a function that reveals how Ψ varies with north or south latitude, or distance up or down from the equator of the atom. Finally, an angular part is a third function that suggests how the wave function varies with east-west longitude around the atom. The total wave function, Ψ, is the product of these three functions. The wave functions that are solutions to the Schrödinger equation for the hydrogen atom are called orbitals.
In the process of separating the parts of the wave function, a constant, n, appears in the radial expression, another constant, l, occurs in the radial and azimuthal expressions, and m appears in the azimuthal and angular expressions. The boundary conditions that give physically sensible solutions to these three equations are that each function (radial, azimuthal, and angular) be continuous, single-valued, and finite at all points. These conditions will not be met unless n, l, and m are integers, l is zero or a positive integer less than n, and m has a value from −l to +l. From a one-dimensional problem (the vibrating string) we obtained one quantum number. With a three-dimensional problem, we obtain three quantum numbers. The principal quantum number, n, can be any positive integer: n = 1, 2, 3, 4, 5, . . . . The azimuthal quantum number, l, can have any integral value from 0 to n − 1. The magnetic quantum number, m, can have any integral value from −l to +l. The different quantum states that the electron can have are listed in Table 8-1.
For one electron around an atomic nucleus, the energy depends only on n. Moreover, the energy expression is exactly the same as in the Bohr theory:
En = −(Z²/n²)k
For Z = 1 (the hydrogen atom), we have simply
En = −k/n²
where k = 13.595 eV or 1312 kJ mole⁻¹. Quantum states, with l = 0, 1, 2, 3, 4, 5, . . ., are called the s, p, d, f, g, h, . . . states, in an extension of the old spectroscopic notation (Figure 8-13). The wave functions corresponding to s, p, d, . . . states are called s, p, d, . . . orbitals. All of the l states for the same n have the same energy in the hydrogen atom; the energy-level diagram is as in Figure 8-10.
Example 14
An electron in atomic hydrogen has a principal quantum number of 5. What are the possible values of l for this electron? When l = 3, what are the possible values of m? What is the ionization energy (in electron volts) of this electron? What would it be in the same n state in He⁺? With n = 5, l may have a value of 4, 3, 2, 1, or 0. For l = 3, there are seven possible values of m: 3, 2, 1, 0, −1, −2, −3. The ionization energy of the electron depends only on n, according to
IE = −En = k/n²
Since k = 13.6 eV, the IE of an electron with n = 5 is
IE = 13.6 eV/5² = 0.544 eV
In general, for one-electron atomic species,
IE = −En = (Z²/n²)k
For He⁺, Z = 2:
IE = (2²/n²)k
For a He⁺ electron with n = 5, we have IE = 4 × 0.544 eV = 2.18 eV.
Each of the orbitals for the quantum states differentiated by n, l, and m in Table 8-1 corresponds to a different probability distribution function for the electron in space. The simplest such probability functions, for s orbitals (l = 0), are spherically symmetrical. The probability of finding the electron is the same in all directions but varies with distance from the nucleus. The dependence of Ψ and of the probability density |Ψ|² on the distance of the electron from the nucleus in the 1s orbital is plotted in Figure 8-18. You can see the spherical symmetry of this orbital more clearly in Figure 8-19. The quantity |Ψ|² dv can be thought of either as the probability of finding an electron in the volume element dv in one atom, or as the average electron density within the corresponding volume element in a great many different hydrogen atoms. The electron is no longer in orbit in the Bohr-Sommerfeld sense; rather, it is an electron probability cloud. Such probability density clouds are commonly used as pictorial representations of hydrogenlike atomic orbitals.
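Before going on to the shapes of the individual orbitals, the counting rules and energy expression above can be spot-checked with a short Python sketch. It assumes nothing beyond the rules just stated (l from 0 to n − 1, m from −l to +l) and the k = 13.595 eV constant; the function names are mine:

    # Allowed (l, m) combinations for a given n, and one-electron ionization
    # energies IE = (Z^2/n^2) k, mirroring Example 14.
    k_eV = 13.595

    def allowed_l_m(n):
        return [(l, m) for l in range(n) for m in range(-l, l + 1)]

    def ionization_energy_eV(Z, n):
        return Z**2 * k_eV / n**2

    print(len(allowed_l_m(5)))               # 25 (l, m) combinations for n = 5
    print(ionization_energy_eV(1, 5))        # ~0.544 eV for hydrogen, n = 5
    print(ionization_energy_eV(2, 5))        # ~2.18 eV for He+, n = 5

Allowing for the two spin states introduced just below doubles the 25 combinations to 50 distinct quantum states for n = 5.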
The 2s orbital is also spherically symmetrical, but its radial distribution function has a node, that is, zero probability, at r = 2 atomic units (1 atomic unit is a0 = 0.529 Å). The probability density has a crest at 4 atomic units, which is the radius of the Bohr orbit for n = 2. There is a high probability of finding an electron in the 2s orbital closer to or farther from the nucleus than r = 2, but there is no probability of ever finding it in the spherical shell at a distance r = 2 from the nucleus (Figure 8-20). The 3s orbital has two such spherical nodes, and the 4s has three. However, these details are not as important in explaining bonding as are the general observations that s orbitals are spherically symmetrical and that they increase in size as n increases.

There are three 2p orbitals: 2px, 2py, and 2pz. Each orbital is cylindrically symmetrical with respect to rotation around one of the three principal axes x, y, z, as identified by the subscript. Each 2p orbital has two lobes of high electron density separated by a nodal plane of zero density (Figures 8-21 and 8-22). The sign of the wave function, Ψ, is positive in one lobe and negative in the other. The 3p, 4p, and higher p orbitals have one, two, or more additional nodal shells around the nucleus (Figure 8-23); again, these details are of secondary importance. The significant facts are that the three p orbitals are mutually perpendicular, strongly directional, and increase in size as n increases.

The five d orbitals first appear for n = 3. For n = 3, l can be 0, 1, or 2, thus s, p, and d orbitals are possible. The 3d orbitals are shown in Figure 8-24. Three of them, dxy, dyz, and dxz, are identical in shape but different in orientation. Each has four lobes of electron density bisecting the angles between principal axes. The remaining two are somewhat unusual: the dx2-y2 orbital has lobes of density along the x and y axes, and the dz2 orbital has lobes along the z axis, with a small doughnut or ring in the xy plane. However, there is nothing sacrosanct about the z axis. The proper combination of the wave functions of these five d orbitals will give us another set of five d orbitals in which the dz2-like orbital points along the x axis, or the y axis. We could even combine the wave functions to produce a set of orbitals, all of which were alike but differently oriented. However, the set that we have described, dxy, dyz, dxz, dx2-y2, and dz2, is convenient and is used conventionally in chemistry. The sign of the wave function, Ψ, changes from lobe to lobe, as indicated in Figure 8-24.

The azimuthal quantum number, l, is related to the shape of the orbital, and is referred to as the orbital-shape quantum number: s orbitals with l = 0 are spherically symmetrical, p orbitals with l = 1 have plus and minus extensions along one axis, and d orbitals with l = 2 have extensions along two mutually perpendicular directions (Figure 8-25). The third quantum number, m, describes the orientation of the orbital in space. It is sometimes called the magnetic quantum number because the usual way of distinguishing between orbitals with different spatial orientations is to place the atoms in a magnetic field and to note the differences in energy produced in the orbitals. We will use the more descriptive term, orbital-orientation quantum number. There is a fourth quantum number that has not been mentioned.
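The node at r = 2 a0 and the crest at r = 4 a0 quoted above can be checked with the standard hydrogenic 2s radial function, R_2s ∝ (2 − r/a0)e^{−r/2a0}. That function is not given in the text, so treat the following Python check as an illustration rather than a quotation:

```python
# Sketch: locate the node and the outer crest of the hydrogen 2s probability density |psi_2s|^2.
# Uses the standard (unnormalized) hydrogenic form R_2s ~ (2 - r) e^{-r/2} in units of a0;
# the overall constant does not affect the positions of the node and the crest.
import numpy as np

r = np.linspace(0.01, 12.0, 120_000)       # radial grid in units of a0
R2s = (2.0 - r) * np.exp(-r / 2.0)         # unnormalized 2s radial function
density = R2s**2                           # probability density along a radius (s orbital)

node = r[np.argmin(np.abs(R2s))]           # where R_2s passes through zero
outer = r[r > node]
crest = outer[np.argmax(density[r > node])]  # outer maximum of |psi|^2

print(f"node  ~ {node:.2f} a0")   # ~2.00 a0, as stated in the text
print(f"crest ~ {crest:.2f} a0")  # ~4.00 a0, the n = 2 Bohr radius
```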
Atomic spectra, and more direct experiments as well, indicate that an electron behaves as if it were spinning around an axis. Each electron has a choice of two spin states, with spin quantum numbers s = +1/2 or -1/2. A complete description of the state of an electron in a hydrogen atom requires the specification of all four quantum numbers: n, l, m, and s.

Many-Electron Atoms

It is possible to set up the Schrödinger wave equation for lithium, which has a nucleus and three electrons, or uranium, which has a nucleus and 92 electrons. Unfortunately, we cannot solve the differential equations. There is little comfort in knowing that the structure of the uranium atom is calculable in principle, and that the fault lies with mathematics and not with physics. Physicists and physical chemists have developed many approximate methods that involve guesses and successive approximations to solutions of the Schrödinger equation. Electronic computers have been of immense value in such successive approximations. But the advantage of Schrödinger's theory of the hydrogen atom is that it gives us a clear qualitative picture of the electronic structure of many-electron atoms without such additional calculations. Bohr's theory was too simple and could not do this, even with Sommerfeld's help. The extension of the hydrogen-atom picture to many-electron atoms is one of the most important steps in understanding chemistry, and we shall reserve it for the next chapter. We shall begin by assuming that electronic orbitals for other atoms are similar to the orbitals for hydrogen and that they can be described by the same four quantum numbers and have analogous probability distributions. If the energy levels deviate from the ones for hydrogen (which they do), then we shall have to provide a persuasive argument, in terms of the hydrogenlike orbitals, for these changes.

Rutherford's scattering experiments showed the atom to be composed of an extremely dense, positively charged nucleus surrounded by electrons. The nucleus is composed of protons and neutrons. A proton has one positive charge and a mass of 1.67 × 10−27 kg. A neutron is uncharged and has a mass of 1.67 × 10−27 kg. Radio waves, infrared, visible, and ultraviolet light, x rays, and γ rays are electromagnetic waves with different wavelengths. The speed of light, c, equal to 2.9979 × 10¹⁰ cm sec−1, is related to its wavelength (λ) and frequency (ν) by c = νλ. The wave number, ν̄, is the reciprocal of the wavelength: ν̄ = 1/λ. Hot objects radiate energy (blackbody radiation). Planck proposed that the energy of electromagnetic radiation is quantized. The energy of a quantum of electromagnetic radiation is proportional to its frequency, E = hν, in which h is Planck's constant, 6.6262 × 10−34 J sec. Electron ejection caused by light striking a metal surface is called the photoelectric effect. Photon is the name given to a quantum of light. The energy of a photon is equal to hν, in which ν is the frequency of the electromagnetic wave. The pattern of light absorption by an atom or molecule as a function of wavelength, frequency, or wave number is called an absorption spectrum. The related pattern of light emission from an atom or molecule is called an emission spectrum. The emission spectrum of atomic hydrogen is composed of several series of lines. The positions of these lines are given accurately by a single equation, the Rydberg equation,

\bar{\nu} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)

in which ν̄ is the wave number of a given line, R_H is the Rydberg constant, 109,677.581 cm−1, and n1 and n2 are integers (n2 is greater than n1).
The Lyman series is that group of lines with n1 = 1 and n2 = 2, 3, 4, .... The Balmer series has n1 = 2 and n2 = 3, 4, 5, ..., and the Paschen series has n1 = 3 and n2 = 4, 5, 6, ....

Bohr pictured the hydrogen atom as containing an electron moving in a circular orbit around a central proton. He proposed that only certain orbits were allowed, corresponding to the following energies:

E_n = -\frac{k}{n^2}

in which E is the energy of an electron in the atom (relative to an ionized state, H+ + e-), k is a constant equal to 13.595 eV atom−1 or 1312 kJ mole−1, and n is a quantum number that can take only integer values from 1 to ∞. The radius of a Bohr orbit is r = n²a0, where a0 is called the first Bohr radius; a0 = 0.529 Å. One atomic unit of length equals a0. The ground state of atomic hydrogen is the lowest-energy state, with n = 1. Excited states correspond to n = 2, 3, 4, .... The energy levels in a general one-electron atomic species, such as He+ and Li2+, with atomic number Z, are given by

E_n = -\frac{kZ^2}{n^2}

The wave nature of electrons was established when Davisson and Germer showed that metal foils diffract electrons in the same way that they diffract a beam of x rays. The wave-particle duality exhibited by electrons is present in all objects. For large objects (such as baseballs), particle behavior predominates to such an extent that wave properties are unimportant. Heisenberg proposed that we cannot know both the position and the momentum of a particle with absolute accuracy. The product of the uncertainty in position, Δx, and the uncertainty in momentum, Δ(mvx), must be at least as large as h/4π:

[\Delta x]\,[\Delta(mv_x)] \ge \frac{h}{4\pi}

The wave equation for a particle is called the Schrödinger equation. The solution to the Schrödinger equation is a wave function, Ψ(x,y,z); |Ψ(x,y,z)|² is the relative probability density of the particle at position (x,y,z). A place where the amplitude of a wave is zero is called a node. Solution of the Schrödinger equation for the hydrogen atom yields wave functions Ψ(x,y,z) and discrete energy levels for the electron. The wave functions Ψ(x,y,z) are called orbitals. An orbital is commonly represented as a probability density cloud, that is, a three-dimensional picture of |Ψ(x,y,z)|². Three quantum numbers are obtained from solving the Schrödinger equation: the principal quantum number, n, can be any positive integer (n = 1, 2, 3, 4, ...); the azimuthal (or orbital-shape) quantum number, l, can have any integral value from 0 to n - 1; the magnetic (or orbital-orientation) quantum number, m, takes integral values from -l to +l. The energy levels depend only on n. Wave functions with l = 0 are called s orbitals; those with l = 1 are called p orbitals; those with l = 2 are called d orbitals; those with l = 3, 4, 5, ... are called f, g, h, ... orbitals. A fourth quantum number is needed to interpret atomic spectra. It is the spin quantum number, s, which can be +1/2 or -1/2.
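As a closing check on this summary, the Rydberg equation and the relations c = νλ and E = hν can be combined to generate the visible Balmer lines. The constants below are the ones quoted in the text; the joule-to-electron-volt conversion factor is assumed, not quoted:

```python
# Sketch: Balmer-series wavelengths and photon energies from the Rydberg equation
# nu_bar = R_H (1/n1^2 - 1/n2^2), with lambda = 1/nu_bar and E = h*c*nu_bar.
R_H = 109677.581        # Rydberg constant, cm^-1 (from the text)
C = 2.9979e10           # speed of light, cm/s (from the text)
H = 6.6262e-34          # Planck's constant, J*s (from the text)
EV = 1.602e-19          # joules per electron volt (assumed conversion factor)

for n2 in (3, 4, 5):                       # Balmer series: n1 = 2
    nu_bar = R_H * (1 / 2**2 - 1 / n2**2)  # wavenumber in cm^-1
    wavelength_nm = 1e7 / nu_bar           # 1 cm = 1e7 nm
    energy_ev = H * C * nu_bar / EV        # E = h * nu = h * c * nu_bar
    print(f"n2 = {n2}: {wavelength_nm:.1f} nm, {energy_ev:.2f} eV")

# Expected output: ~656.5 nm (1.89 eV), ~486.3 nm (2.55 eV), ~434.2 nm (2.86 eV)
```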
Why do things make sense?

Things pretty much make sense. If they don't, we feel that there is a reason that they don't. We laughingly make up goblins and poltergeists to explain how the keys came to be in the location in which they are finally found, but we, mostly, have an underlying belief that there are good, physical reasons why they ended up there. Things appear to get a little murkier at the level of the quantum, the incredibly small, but even there, I believe that scientists are looking for an explanation of the behaviour of things, no matter how bizarre. One of the concepts that appears to have to be abandoned is that of everyday causality, although scientists appear to be replacing that concept with a more probabilistic version of the concept of causality. But I'm not going to go there, as quantum physics has to be spelled out in mathematics or explained inaccurately using analogies. I note that there is still discussion about what quantum physics means.

We strive for meaning when we consider why things happen. When a stone is dropped it accelerates towards the earth. This is observation. We also observe the way in which it accelerates, and Sir Isaac Newton, who would have known from his mathematics the equation which governed this acceleration, had the genius to realise that the mutual attraction of the earth and the stone followed an inverse square law and, even more importantly, that this applied to any two objects which have mass in the entire universe.

So, that's done. We know why stones fall and why the earth unmeasurably and unnoticeably jumps to meet it. It is all explained, or is it? Why should any two massy objects experience this attraction? Let's call it 'gravity', shall we? How can we explain gravity? Well, we could say that it is a consequence of the object having mass, or in other words, it is an intrinsic property of massy objects, which, if you think about it, explains nothing, or we can talk about curvature of space, which is interesting, but again explains nothing.

Can you see where I am going with this? Every concept that we consider is either 'just the way things are' or requires explanation. Every explanation that we can think up either has to be taken as axiomatic or has to be explained further. Nevertheless most people act as if they believe that there is a logical explanation for things and that things ultimately make sense. It is possible that there is no logical explanation of things, and that the apparent relationships between things are an illusion. I once read a science fiction story where someone invented a time machine. Everywhere the machine stopped there was chaos, because there were no laws of nature and our little sliver of time was a mere statistical fluke. When they tried to return to the present they could not find it.
This little story demonstrates that although we appear to live in a universe that is logical and there appears to be a structure to it, this may just be an illusion.

If we do live in a logical universe, we may not be able to access and understand its basis and structure. We may see things "through a glass darkly". We may be like the inhabitants of Plato's Cave. Everything we experience, we experience through our senses, so our experience of the world is already second-hand, and for many purposes we use tools and instruments to view the world around us. Also, our sense impressions are filtered, modified and processed by our brains in the process of experiencing something. We can take prescribed or non-prescribed drugs which alter our view of the world. So how can we know anything about the universe?

Alternatively there may be order to the universe. There may be 'laws of nature' and we may be slowly discovering them. I like the analogy of the blanket: a blanket is held between us and the universe but we are able to poke holes in it. Each hole reveals a metaphoric pixel of information about what lies behind the blanket. Over the years, decades, centuries and millennia we have poked an astronomical number of holes in the blanket, so we have a good idea of the shape of what lies behind it.

So why do things make sense? Is it because there is a structure to the universe that we are either discovering or fooling ourselves into believing that we are discovering, or is there no structure whatsoever, and any belief that there is one is an illusion? Maybe there's another possibility. Maybe the universe does have a structure, but it is an 'ad hoc' structure with no inherent logic to it at all!
Consciousness Studies/Measurement In Quantum Physics And The Preferred Basis Problem

The Measurement Problem

In quantum physics the probability of an event is deduced by taking the square of the amplitude for an event to happen. The term "amplitude for an event" arises because of the way that the Schrödinger equation is derived using the mathematics of ordinary, classical waves, where the amplitude over a small area is related to the number of photons hitting the area. In the case of light, the probability of a photon hitting that area will be related to the ratio of the number of photons hitting the area divided by the total number of photons released. The number of photons hitting an area per second is the intensity or amplitude of the light on the area; hence the probability of finding a photon is related to "amplitude". However, the Schrödinger equation is not a classical wave equation. It does not determine events; it simply tells us the probability of an event. In fact the Schrödinger equation in itself does not tell us that an event occurs at all; it is only when a measurement is made that an event occurs. The measurement is said to cause state vector reduction. This role of measurement in quantum theory is known as the measurement problem. The measurement problem asks how a definite event can arise out of a theory that only predicts a continuous probability for events.

Two broad classes of theory have been advanced to explain the measurement problem. In the first it is proposed that observation produces a sudden change in the quantum system so that a particle becomes localised or has a definite momentum. This type of explanation is known as collapse of the wavefunction. In the second it is proposed that the probabilistic Schrödinger equation is always correct and that, for some reason, the observer only observes one particular outcome for an event. This type of explanation is known as the relative state interpretation. In the past thirty years relative state interpretations, especially Everett's relative state interpretation, have become favoured amongst quantum physicists.

The quantum probability problem

The measurement problem is particularly problematical when a single particle is considered. Quantum theory differs from classical theory because it is found that a single photon seems to be able to interfere with itself. If there are many photons then probabilities can be expressed in terms of the ratio of the number hitting a particular place to the total number released, but if there is only one photon then this does not make sense. When only one photon is released from a light source, quantum theory still gives us a probability for a photon to hit a particular area, but what does this mean at any instant if there is indeed only one photon? If the Everettian interpretation of quantum mechanics is invoked then it might seem that the probability of the photon hitting an area in your particular universe is related to the occurrences of the photon in all the other universes. But in the Everettian interpretation even the improbable universes occur. This leads to a problem known as the quantum probability problem: if the universe splits after a measurement, with every possible measurement outcome realised in some branch, then how can it make sense to talk about the probabilities of each outcome? Each outcome occurs.
This means that if our phenomenal consciousness is a set of events then there would be endless copies of these sets of events, almost all of which are almost entirely improbable to an observer outside the brain, but all of which exist according to an Everettian interpretation. Which set is you? Why should 'you' conform to what happens in the environment around you?

The preferred basis problem

It could be held that you assess probabilities in terms of the branch of the universe in which you find yourself, but then why do you find yourself in a particular branch? Decoherence theory is one approach to these questions. In decoherence theory the environment is a complex form that can only interact with particles in particular ways. As a result, quantum phenomena are rapidly smoothed out in a series of micro-measurements so that the macro-scale universe appears quasi-classical. The form of the environment is known as the preferred basis for quantum decoherence. This then leads to the preferred basis problem, in which it is asked how the environment occurs or whether the state of the environment depends on any other system. According to most forms of decoherence theory 'you' are a part of the environment and hence determined by the preferred basis. From the viewpoint of phenomenal consciousness this does not seem unreasonable, because it has always been understood that the conscious observer does not observe things as quantum superpositions. The conscious observation is a classical observation.

However, the arguments that are used to derive this idea of the classical, conscious observer contain dubious assumptions that may be hindering the progress of quantum physics. The assumption that the conscious observer is simply an information system is particularly dubious:

"Here we are using aware in a down-to-earth sense: Quite simply, observers know what they know. Their information processing machinery (that must underlie higher functions of the mind such as "consciousness") can readily consult the content of their memory." (Zurek 2003)

This assumption is the same as assuming that the conscious observer is a set of measurements rather than an observation. It makes the rest of Zurek's argument about decoherence and the observer into a tautology: given that observations are measurements, then observations will be like measurements. However, conscious observation is not simply a change of state in a neuron, a "measurement"; it is the entire manifold of conscious experience. In his 2003 review of this topic Zurek makes clear an important feature of information theory when he states that there is no information without representation. So the contents of conscious observation are states that correspond to states of the environment in the brain (i.e., measurements). But how do these states in the brain arise?

The issue that arises here is whether the representation, the contents of consciousness, is entirely due to the environment or due, to some degree, to the form of conscious observation. Suppose we make the reasonable assumption that conscious observation is due to some physical field in the dendrites of neurons rather than in the action potentials that transmit the state of the neurons from place to place. This field would not necessarily be constrained by decoherence; there are many possibilities for the field: for instance, it could be a radio frequency field due to impulses or some other electromagnetic field (cf. Anglin & Zurek (1996)) or some quantum state of macromolecules, etc.
Such a field might contain many superposed possibilities for the state of the underlying neurons, and although these would not affect sensations, they could affect the firing patterns of neurons and create actions in the world that are not determined by the environmental "preferred basis". Zeh (2000) provides a mature review of the problem of conscious observation. For example, he realises that memory is not the same as consciousness:

"The genuine carriers of consciousness ... must not in general be expected to represent memory states, as there do not seem to be permanent contents of consciousness."

and notes of memory states that they must enter some other system to become part of observation:

"To most of these states, however, the true physical carrier of consciousness somewhere in the brain may still represent an external observer system, with whom they have to interact in order to be perceived. Regardless of whether the ultimate observer systems are quasi-classical or possess essential quantum aspects, consciousness can only be related to factor states (of systems assumed to be localized in the brain) that appear in branches (robust components) of the global wave function — provided the Schrödinger equation is exact. Environmental decoherence represents entanglement (but not any "distortion" — of the brain, in this case), while ensembles of wave functions, representing various potential (unpredictable) outcomes, would require a dynamical collapse (that has never been observed)."

However, Zeh (2003) points out that events may be irreversibly determined by decoherence before information from them reaches the observer. This might give rise to a multiple-worlds and multiple-minds mixture for the universe, the multiple minds being superposed states of the part of the world that is the mind. Such an interpretation would be consistent with the apparently epiphenomenal nature of mind. A mind that interacts only weakly with the consensus physical world, perhaps only approving or rejecting passing actions, would be an ideal candidate for a QM multiple-minds hypothesis.

Further reading and references

• Pearle, P. (1997). True collapse and false collapse. In Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Philadelphia, PA, USA, September 8-11, 1994, pp. 51-68. Edited by Da Hsuan Feng and Bei Lok Hu. Cambridge, MA: International Press, 1997.
• Zeh, H. D. (1979). Quantum Theory and Time Asymmetry. Foundations of Physics, Vol. 9, pp. 803-818.
• Zeh, H. D. (2000). The Problem of Conscious Observation in Quantum Mechanical Description. Epistemological Letters of the Ferdinand-Gonseth Association in Biel (Switzerland), Letter No. 63.0.1981, updated 2000.
• Zeh, H. D. (2003). Basic Concepts and their Interpretation. Chapter 2 of: E. Joos, H. D. Zeh, C. Kiefer, D. Giulini, J. Kupsch, and I.-O. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory, second edition.
Interpretations of quantum mechanics

An interpretation of quantum mechanics is an attempt to answer the question: what exactly is quantum mechanics talking about? Although quantum mechanics is widely considered "the most precisely tested and most successful theory in the history of science" (Jackiw and Kleppner, 2000), many feel that in spite of this the fundamentals of the theory have yet to be fully understood. There are a number of contending schools of thought, differing over whether quantum mechanics can be understood to be deterministic, what elements of quantum mechanics can be considered real, and other matters.

Historical background

The operational meaning of the technical terms used by researchers in quantum theory (such as wavefunctions and matrix mechanics) progressed through various intermediate stages. For instance, Schrödinger originally viewed the wavefunction associated to the electron as the charge density of an object smeared out over an extended, possibly infinite, volume of space. Max Born later proposed its interpretation as the probability distribution in the space of the electron's position. Other leading scientists, such as Albert Einstein, had great difficulty in accepting some of the more radical consequences of the theory, such as quantum indeterminacy. Even if these matters could be treated as 'teething troubles', they have lent importance to the activity of interpretation. It should not, however, be assumed that most physicists consider quantum mechanics as requiring interpretation, other than very minimal instrumentalist interpretations, which are discussed below. The Copenhagen interpretation, as of 2006, appears to be the most popular one among scientists, followed by the many worlds and consistent histories interpretations. But it is also true that most physicists consider non-instrumental questions (in particular ontological questions) to be irrelevant to physics. They fall back on Paul Dirac's point of view, later expressed in the famous dictum "Shut up and calculate", often (perhaps erroneously) attributed to Richard Feynman (see [1]).

Obstructions to direct interpretation

The perceived difficulties of interpretation reflect a number of points about the orthodox description of quantum mechanics, including:
1. The abstract, mathematical nature of the description of quantum mechanics.
2. The existence of what appear to be non-deterministic and irreversible processes in quantum mechanics.
3. The phenomenon of entanglement, and in particular, the higher correlations between remote events than would be expected in classical theory.
4. The complementarity of possible descriptions of reality.

First, the accepted mathematical structure of quantum mechanics is based on fairly abstract mathematics, such as Hilbert spaces and operators on those Hilbert spaces. In classical mechanics and electromagnetism, on the other hand, properties of a point mass or properties of a field are described by real numbers or functions defined on two- or three-dimensional sets. These have direct, spatial meaning, and in these theories there seems to be less need to provide a special interpretation for those numbers or functions. Further, the process of measurement plays an apparently essential role in the theory. It relates the abstract elements of the theory, such as the wavefunction, to operationally definable values, such as probabilities.
Measurement interacts with the system state in somewhat peculiar ways, as is illustrated by the double-slit experiment. The formalism in fact involves two kinds of state transformation: reversible, deterministic time evolution described by unitary operators (the Schrödinger evolution), and non-reversible and unpredictable transformations described by mathematically more complicated transformations (see quantum operations). Examples of the latter are the transformations undergone by a system as a result of measurement. A restricted version of the problem of interpretation in quantum mechanics consists in providing some sort of plausible picture just for the second kind of transformation. This problem may be addressed by purely mathematical reductions, for example by the many-worlds or the consistent histories interpretations.

In addition to the unpredictable and irreversible character of measurement processes, there are other elements of quantum physics that distinguish it sharply from classical physics and which cannot be represented by any classical picture. One of these is the phenomenon of entanglement, as illustrated in the EPR paradox, which seemingly violates principles of local causality. Another obstruction to direct interpretation is the phenomenon of complementarity, which seems to violate basic principles of propositional logic. Complementarity says there is no logical picture (obeying classical propositional logic) that can simultaneously describe and be used to reason about all properties of a quantum system S. This is often phrased by saying that there are "complementary" sets A and B of propositions that can describe S, but not at the same time. Examples of A and B are propositions involving a wave description of S and a corpuscular description of S. The latter statement is one part of Niels Bohr's original formulation, which is often equated to the principle of complementarity itself. Complementarity is not usually taken to mean that classical logic fails, although Hilary Putnam did take that view in his paper Is logic empirical?. Instead complementarity means that composition of physical properties for S (such as position and momentum both having values in certain ranges) using propositional connectives does not obey rules of classical propositional logic. As is now well known (Omnès, 1999), the "origin of complementarity lies in the noncommutativity of operators" describing observables in quantum mechanics.

Problematic status of pictures and interpretations

The precise ontological status of each one of the interpreting pictures remains a matter of philosophical argument. In other words, if we interpret the formal structure X of quantum mechanics by means of a structure Y (via a mathematical equivalence of the two structures), what is the status of Y? This is the old question of saving the phenomena, in a new guise. Some physicists, for example Asher Peres and Chris Fuchs, seem to argue that an interpretation is nothing more than a formal equivalence between sets of rules for operating on experimental data. This would suggest that the whole exercise of interpretation is unnecessary.

Instrumentalist interpretation

Any modern scientific theory requires at the very least an instrumentalist description which relates the mathematical formalism to experimental practice and prediction. In the case of quantum mechanics, the most common instrumentalist description is an assertion of statistical regularity between state preparation processes and measurement processes.
That is, if a measurement of a real-valued quantity is performed many times, each time starting with the same initial conditions, the outcome is a well-defined probability distribution over the real numbers; moreover, quantum mechanics provides a computational instrument to determine statistical properties of this distribution, such as its expectation value. Calculations for measurements performed on a system S postulate a Hilbert space H over the complex numbers. When the system S is prepared in a pure state, it is associated with a vector in H. Measurable quantities are associated with Hermitian matrices acting on H: these are referred to as observables. Repeated measurement of an observable A for S prepared in state ψ yields a distribution of values. The expectation value of this distribution is given by the expression \langle \psi \vert A \vert \psi \rangle. This mathematical machinery gives a simple, direct way to compute a statistical property of the outcome of an experiment, once it is understood how to associate the initial state with a vector, and the measured quantity with an observable (that is, a specific Hermitian matrix). As an example of such a computation, the probability of finding the system in a given state \vert\phi\rangle is given by computing the expectation value of the (rank-1) projection operator

\Pi = \vert\phi\rangle \langle \phi \vert

The probability is then the non-negative real number given by

P = \langle \psi \vert \Pi \vert \psi \rangle = \vert \langle \psi \vert \phi \rangle \vert ^2

By abuse of language, the bare instrumentalist description can be referred to as an interpretation, although this usage is somewhat misleading since instrumentalism explicitly avoids any explanatory role; that is, it does not attempt to answer the question of what quantum mechanics is talking about.

Summary of common interpretations of QM

Properties of interpretations

An interpretation can be regarded as a correspondence between the mathematical formalism of quantum mechanics and an interpreting structure that is taken to describe the physical world, and it can be characterized by whether it satisfies certain properties. Here:
• The mathematical formalism consists of the Hilbert space machinery of ket-vectors, self-adjoint operators acting on the space of ket-vectors, unitary time dependence of ket-vectors, and measurement operations. In this context a measurement operation can be regarded as a transformation which carries a ket-vector into a probability distribution on ket-vectors. See also quantum operations for a formalization of this concept.
• The interpreting structure includes states, transitions between states, measurement operations, and possibly information about spatial extension of these elements. A measurement operation here refers to an operation which returns a value and results in a possible system state change. Spatial information, for instance, would be exhibited by states represented as functions on configuration space. The transitions may be non-deterministic or probabilistic, or there may be infinitely many states.

The critical assumption of an interpretation is that the elements of the interpreting structure are regarded as physically real. In this sense, an interpretation can be regarded as a semantics for the mathematical formalism. In particular, the bare instrumentalist view of quantum mechanics outlined in the previous section is not an interpretation at all, since it makes no claims about elements of physical reality.

The current use in physics of "completeness" and "realism" is often considered to have originated in the paper (Einstein et al., 1935) which proposed the EPR paradox.
In that paper the authors proposed the concepts of "element of reality" and "completeness" of a physical theory. Though they did not define "element of reality", they did provide a sufficient characterization for it, namely a quantity whose value can be predicted with certainty before measuring it or disturbing it in any way. EPR define a "complete physical theory" as one in which every element of physical reality is accounted for by the theory. In the semantic view of interpretation, an interpretation of a theory is complete if every element of the interpreting structure is accounted for by the mathematical formalism. Realism is a property of each one of the elements of the mathematical formalism; any such element is real if it corresponds to something in the interpreting structure. For instance, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is assumed to correspond to an element of physical reality, while in others it does not.

Determinism is a property characterizing state changes due to the passage of time, namely that the state at an instant of time in the future is a function of the state at the present (see time evolution). It may not always be clear whether a particular interpreting structure is deterministic or not, precisely because there may not be a clear choice for a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic, and the other not.

Local realism has two parts:
• The value returned by a measurement corresponds to the value of some function on the state space. Stated in another way, this value is an element of reality;
• The effects of measurement have a propagation speed not exceeding some universal bound (e.g., the speed of light). In order for this to make sense, measurement operations must be spatially localized in the interpreting structure.

Bell's theorem and its experimental verification restrict the kinds of properties a quantum theory can have. For instance, Bell's theorem implies quantum mechanics cannot satisfy local realism.

Consistent histories

The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that then allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability while being consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict probabilities of various alternative histories.

Many worlds

The many-worlds interpretation (or MWI) is an interpretation of quantum mechanics that rejects the non-deterministic and irreversible wavefunction collapse associated with measurement in the Copenhagen interpretation in favor of a description in terms of quantum entanglement and reversible time evolution of states. The phenomena associated with measurement are explained by decoherence, which occurs when states interact with the environment. As a result of decoherence, the world-lines of macroscopic objects repeatedly split into mutually unobservable, branching histories: distinct universes within a greater multiverse.

The Copenhagen Interpretation

The Copenhagen interpretation is an interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927.
Bohr and Heisenberg extended the probabilistic interpretation of the wavefunction proposed by Max Born. The Copenhagen interpretation rejects questions like "where was the particle before I measured its position?" as meaningless. The act of measurement causes an instantaneous "collapse of the wave function". This means that the measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function, and the wave function instantaneously changes to reflect that pick.

Quantum Logic

The Bohm interpretation

The Bohm interpretation of quantum mechanics is an interpretation postulated by David Bohm in which the existence of a non-local universal wavefunction allows distant particles to interact instantaneously. The interpretation generalizes Louis de Broglie's pilot wave theory from 1927, which posits that both wave and particle are real. The wave function 'guides' the motion of the particle, and evolves according to the Schrödinger equation. The interpretation assumes a single, nonsplitting universe (unlike the Everett many-worlds interpretation) and is deterministic (unlike the Copenhagen interpretation). It says the state of the universe evolves smoothly through time, without the collapsing of wavefunctions when a measurement occurs, as in the Copenhagen interpretation. However, it does this by assuming a number of hidden variables, namely the positions of all the particles in the universe, which, like probability amplitudes in other interpretations, can never be measured directly.

Transactional interpretation

The transactional interpretation of quantum mechanics (TIQM) by John Cramer is an unusual interpretation of quantum mechanics that describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. The author argues that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes.

Consciousness causes collapse

Consciousness causes collapse is the speculative theory that observation by a conscious observer is responsible for the wavefunction collapse. It is an attempt to solve the Wigner's friend paradox by simply stating that collapse occurs at the first "conscious" observer. Supporters claim this is not a revival of substance dualism, since (in a ramification of this view) consciousness and objects are entangled and cannot be considered as distinct. The consciousness causes collapse theory can be considered as a speculative appendage to almost any interpretation of quantum mechanics, and most physicists reject it as unverifiable and introducing unnecessary elements into physics.

Relational Quantum Mechanics

The essential idea behind relational quantum mechanics, following the precedent of Special Relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, Relational Quantum Mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s).
The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by Relational Quantum Mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but with the relations between them [2]. For more information, see Rovelli (1996).

Modal Interpretations of Quantum Theory

Modal interpretations of quantum mechanics were first conceived of in 1972 by B. van Fraassen, in his paper "A formal approach to the philosophy of science." However, this term is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including:
• The Copenhagen Variant
• Kochen-Dieks-Healey Interpretations

Comparison of interpretations

At the moment, there is no experimental evidence that would allow us to distinguish between the various interpretations listed below. To that extent, the physical theory stands, and is consistent with, itself and with reality; troubles come only when one attempts to "interpret" it. Nevertheless, there is active research in attempting to come up with experimental tests which would allow differences between the interpretations to be experimentally tested. Some of the most common interpretations are summarized here (however, the assignment of values in this table is not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, the subject of the very controversy itself):

Interpretation | Deterministic? | Waveform real? | Unique history? | Avoids hidden variables? | Collapsing wavefunctions?
Copenhagen interpretation (waveform not real) | No | No | Yes | Yes | No
Copenhagen interpretation (waveform real) | No | Yes | Yes | Yes | No
Consistent histories (decoherent approach) | Agnostic (1) | Agnostic (1) | No | Yes | Yes
Many-worlds interpretation (decoherent approach) | Yes | Yes | No | Yes | Yes
Bohm-de Broglie interpretation ("pilot-wave" approach) | Yes | Yes (2) | Yes (3) | No | Yes
Transactional interpretation | No | Yes | Yes | Yes | No
Consciousness causes collapse | No | Yes | Yes | Yes | No
Relational Quantum Mechanics | No | Yes | Agnostic (4) | Yes | No (5)

(1) If the wavefunction is real then this becomes the Many-Worlds Interpretation. If the wavefunction is less than real, but more than just information, then Zurek calls this the Existential Interpretation.
(2) Both particle AND guiding wavefunction are real.
(3) Unique particle history, but multiple wave histories.
(4) Comparing histories between systems in this interpretation has no well-defined meaning.

Each interpretation has many variants. It is difficult to get a precise definition of the Copenhagen Interpretation — in the table above, two variants are shown — one that regards the waveform as being a tool for calculating probabilities only, and the other regards the waveform as an "element of reality".

References

• Bub, J. and Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics," Studies in History and Philosophy of Modern Physics, 27B, 181-219.
• Carnap, R. (1939). "The interpretation of physics," Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science, University of Chicago Press.
• Deutsch, D. (1997). The Fabric of Reality, Allen Lane.
Though written for general audiences, in this book Deutsch argues forcefully against instrumentalism.
• Dickson, M. (1994). "Wavefunction tails in the modal interpretation," Proceedings of the PSA 1994, Hull, D., Forbes, M., and Burian, R. (eds), Vol. 1, pp. 366-376. East Lansing, Michigan: Philosophy of Science Association.
• Dickson, M. and Clifton, R. (1998). "Lorentz-invariance in modal interpretations," in The Modal Interpretation of Quantum Mechanics, Dieks, D. and Vermaas, P. (eds), pp. 9-48. Dordrecht: Kluwer Academic Publishers.
• Einstein, A., Podolsky, B. and Rosen, N. (1935). "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47, 777.
• Fuchs, C. and Peres, A. (2000). "Quantum theory needs no 'interpretation'," Physics Today, March 2000.
• Fuchs, Christopher (2002). "Quantum Mechanics as Quantum Information (and only a little more)," arXiv:quant-ph/0205039 v1.
• Herbert, N. (1985). Quantum Reality: Beyond the New Physics, New York: Doubleday, ISBN 0-385-23569-0, LoC QC174.12.H47 1985.
• Jackiw, R. and Kleppner, D. (2000). "One Hundred Years of Quantum Physics," Science, Vol. 289, Issue 5481, p. 893, August 2000.
• Jammer, M. (1966). The Conceptual Development of Quantum Mechanics. New York: McGraw-Hill.
• Jammer, M. (1974). The Philosophy of Quantum Mechanics. New York: Wiley.
• de Muynck, W. M. (2002). Foundations of Quantum Mechanics, an Empiricist Approach, Dordrecht: Kluwer Academic Publishers, ISBN 1-4020-0932-1.
• Omnès, R. (1999). Understanding Quantum Mechanics, Princeton.
• Popper, K. (1963). Conjectures and Refutations, Routledge and Kegan Paul. The chapter "Three Views Concerning Human Knowledge" addresses, among other things, the instrumentalist view in the physical sciences.
• Reichenbach, H. (1944). Philosophic Foundations of Quantum Mechanics, Berkeley: University of California Press.
• Rovelli, C. (1996). "Relational Quantum Mechanics," Int. J. of Theor. Phys. 35, 1637. arXiv:quant-ph/9609002 [3]
• Tegmark, M. and Wheeler, J. A. (2001). "100 Years of Quantum Mysteries," Scientific American 284, 68.
• van Fraassen, B. (1972). "A formal approach to the philosophy of science," in Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain, Colodny, R. (ed.), pp. 303-366. Pittsburgh: University of Pittsburgh Press.
Clusters of atoms and ions form a type of matter that is intermediate between single atoms and bulk matter. Metallic clusters are widely used as catalysts because they have a very high surface-to-volume ratio, which allows them to speed up chemical reactions. Researchers have, however, recently begun to see if magnetic clusters can be used in biomedicine - for example, to separate labelled biological cells, to improve drug delivery and to enhance contrast in magnetic resonance imaging.

The new study, carried out by Manuel Pereiro and colleagues at the University of Santiago de Compostela, involved performing "density functional" calculations using an off-the-shelf computer package. The calculations involved solving the Schrödinger equation for groups of atoms arranged into a cluster and searching for silver clusters with the lowest energy and hence the highest stability. To do this, the researchers analysed a huge sample of trial geometries, containing between 2 and 22 silver atoms (figure 1). Of these clusters, they then looked at those structures that had the highest magnetic moment.

Pereiro and co-workers found that the most stable cluster with the highest magnetic moment contained 13 silver atoms (figure 2). According to the team, this is because the cluster has a highly symmetric icosahedral structure. Symmetry allows the silver atomic orbitals to become degenerate, or have the same energy, which, in turn, produces magnetism. Clusters bigger than 13 atoms have a lower magnetic moment per atom because they have distorted icosahedral symmetry; smaller clusters have a lower magnetic moment due to their different, unstable, shapes.

According to the researchers, the silver-13 cluster has a high magnetism because atoms at the edge of the cluster transfer electrons to the atom in the middle, making this inner atom more energetically stable. The charge transfer reduces the inner atom's magnetism and boosts that of the outer atoms. This is because the number of outer atoms with partially filled orbitals (that is, unpaired spins) increases while the number of inner atoms with unpaired spins decreases. (Only atoms with unpaired spins can exhibit magnetism in the absence of an external magnetic field.) Overall, this leads to an increase in the average magnetic moment of the Ag13 cluster.

Such clusters could be used in medicine because they are more biocompatible and less toxic than conventional metallic clusters, which makes them ideal for therapeutic drug delivery applications. Confirming the magnetic properties of clusters in the lab would also be "a golden opportunity for experimentalists", says Pereiro.
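The search strategy described above can be summarised in a few lines of Python. This is only a schematic of the workflow, not the authors' actual code: `trial_geometries`, `total_energy` and `magnetic_moment` are placeholder callables standing in for whatever structure generator and density-functional package is used.

```python
# Schematic of a lowest-energy cluster search: for each cluster size, score a set of
# trial geometries and keep the most stable one, recording its magnetic moment per atom.
# All three callables are placeholders for a real structure generator and DFT package.

def search_stable_clusters(sizes, trial_geometries, total_energy, magnetic_moment):
    best = {}
    for n in sizes:                                    # e.g. range(2, 23) silver atoms
        candidates = trial_geometries(n)               # icosahedral, cuboctahedral, ...
        scored = [(total_energy(g), g) for g in candidates]
        e_min, g_min = min(scored, key=lambda pair: pair[0])
        best[n] = {
            "geometry": g_min,
            "energy": e_min,
            "moment_per_atom": magnetic_moment(g_min) / n,
        }
    return best
```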
45th IFF Spring School
Computing Solids: Models, Ab-initio Methods and Supercomputing
10-21 March 2014 in Jülich, Germany

Computational materials physics is concerned with the complex interplay of the myriad electrons and atoms in a solid, thereby producing a continuous stream of new and unexpected phenomena and forms of matter. An extreme range of length, time, energy and entropy scales gives rise to the complexity of an extremely broad range of solids and associated properties. There are literally hundreds of thousands of solids. Some solids exhibit useful or exotic phases, such as ferroelectricity, magnetism, superconductivity, or take on exotic states of matter such as the heavy fermion state. Other solids exhibit interesting metal-to-insulator transitions or show transversal, quantum and non-equilibrium transport processes, to mention but a few. Every day, new solids are synthesised or grown and novel properties are discovered. These solids find applications as present and emergent materials with specially designed functionalities on which technological advances in fields such as information technology, energy harvesting, storage and conversion, materials science, chemistry and even biology depend.

It sounds rather miraculous, but the formation and stability of all solids and their properties are encoded in the statistical physics and quantum theory of the many electrons in the solid interacting via the Coulomb potential. Therefore, the Schrödinger equation of many electrons provides a fundamental theoretical concept for the understanding of a large variety of quantum phenomena that could be exploited in future technological devices. The exact solution for this type of Schrödinger equation in a solid is not yet in sight. Instead, over the past decades, powerful theoretical concepts have been developed that allow effective approximations, aimed at reducing complexity while retaining those ingredients necessary for a reliable description of the physical effects in the system.

(Figure: CPU time distribution of parallel code on a supercomputer.)

The approximations of the quantum many-body problem may be roughly divided into three different classes: wave-function-based methods, ab-initio density functional approaches, and realistic model Hamiltonians, which are solved in part with sophisticated and highly specialized many-body methods such as renormalization techniques or quantum Monte Carlo. Due to the length and time scales of the systems investigated, the complexity of the interactions and the possible degree of non-equilibrium, this field has benefited tremendously from the exponential growth of computer resources, in part with new computer architectures. Adapting existing computer codes or developing new codes for these new infrastructures is an increasingly pressing and demanding issue in computation-based research.

For download:
Poster of IFF-Springschool 2014 (PDF, 1 MB)
Flyer of IFF-Springschool 2014 (PDF, 1 MB)
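As a toy illustration of what "computing" a Schrödinger equation means in practice, one can discretize a Hamiltonian on a grid and diagonalize it. This is not one of the School's production codes, just a minimal one-dimensional sketch in units where ħ = m = 1 (a particle in a box of length L with infinite walls):

```python
# Toy example: one-dimensional Schrödinger equation by finite differences + diagonalization.
import numpy as np

L, N = 1.0, 500
x = np.linspace(0, L, N + 2)[1:-1]          # interior grid points (walls at 0 and L)
h = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix
T = (np.diag(np.full(N, 2.0))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / (2 * h**2)
V = np.zeros(N)                             # free particle in the box; swap in a crystal potential here
H = T + np.diag(V)

E, psi = np.linalg.eigh(H)                  # eigenvalues (energies) and eigenvectors (wave functions)

exact = [(np.pi * n / L)**2 / 2 for n in (1, 2, 3)]
print(E[:3])      # ~ [4.93, 19.7, 44.4]
print(exact)      # particle-in-a-box levels n^2 pi^2 / (2 L^2), for comparison
```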
Study Programmes 2016-2017
Theoretical chemistry and physics applied to biomolecular structural analysis

Duration: 24h Th, 24h Pr
Number of credits: Bachelor in bioengineering: 4; Master in bioengineering: chemistry and bio-industries (120 ECTS): 4
Lecturers: Christian Damblon, Edwin De Pauw
Coordinator: Christian Damblon
Language(s) of instruction: French
Organisation and examination: Teaching in the second semester
Units courses prerequisite and corequisite: Prerequisite or corequisite units are presented within each programme

Learning unit contents:
Description of the relation between the different spectrometries. Quantum theory and the Schrödinger equation. Ultraviolet-visible spectrometry. Infrared and Raman spectrometry. Nuclear magnetic resonance spectrometry (1H, 13C, 2D). Mass spectrometry. Coupled techniques.

Learning outcomes of the learning unit:
Understand the theoretical basis of spectrometric methods. Apply these techniques to the structural analysis of biological molecules. After completing the course the student is expected to:
- understand the phenomena that govern the different spectrometries: ultraviolet, visible, infrared, Raman, nuclear magnetic resonance, mass spectrometry;
- read and explain spectra obtained by the different techniques;
- apply these techniques on the corresponding instruments;
- identify a molecule from its different spectra.

Prerequisite knowledge and skills:
CHIM9268-1 - General Chemistry
CHIM9255-3 - Organic Chemistry
CHIM9239-2 - Biological molecules chemistry
CHIM9267-1 - Equilibrium Chemistry

Planned learning activities and teaching methods:
Theoretical lectures. Interpretation exercises on spectra obtained with the different spectrometers. The exercises include a brief theoretical reminder with interpretation of the spectra of different chemical functions, spectrometry exercises done by the students with the help of the teacher, and summary exercises combining different spectra for the same unknown molecule. Practical work covers the techniques of IR, MALDI-TOF, LC-MS, GC-MS and NMR. Practical sessions are carried out by students working in groups. The experiments illustrate and complement the theoretical notions. A report must be written at the end of the sessions. These reports will be corrected to allow the students to evaluate their work. Presence in the laboratory is mandatory. Any absence must be justified by a medical certificate in proper form. Access to the chemistry exam will not be granted to students with more than a third of unexcused absences in labs. For safety reasons, access to the laboratory is authorized only for students with a lab coat and safety glasses, and in order of registration. Glasses should be worn when handling. There is no practical-work examination as such. However, questions involving laboratory situations and laboratory 'vocabulary' may appear in the evaluation of the exercises.

Lectures: 24h
Practical work: 24h (exercises 8h, practical work on instruments 16h)

Recommended or required readings:
The course notes include some of the literature, referring the student to books that can help towards a better understanding of the material.

Assessment methods and criteria:
Students must bring their ULg student card and their identity card to attend all examinations, under penalty of being denied access. The distribution of the evaluation is as follows:
- Written exam (theory + exercises): 60%
- Practicals: 40%
Attendance at practical work on the instruments is mandatory.

Work placement(s):
Organizational remarks:
Contacts:
Damblon Christian                                  Université de Liège Chimie Biologique                                     Département de Chimie                                         +32 4 3663788                      Prof. De Pauw Edwin Université de Liège Chimie Physique                                             Laboratoire de spectrométrie de masse        Département de Chimie +32 43663415 Items online : Sectroscopic methods, Mass Spectrometry Molecular structure analysis, mass spectrometry
Interpretations of quantum mechanics
From Wikipedia, the free encyclopedia

An interpretation of quantum mechanics is a set of statements which attempt to explain how quantum mechanics informs our understanding of nature. Although quantum mechanics has held up to rigorous and thorough experimental testing, many of these experiments are open to different interpretations. There exist a number of contending schools of thought, differing over whether quantum mechanics can be understood to be deterministic, which elements of quantum mechanics can be considered "real", and other matters. This question is of special interest to philosophers of physics, as physicists continue to show a strong interest in the subject. They usually consider an interpretation of quantum mechanics as an interpretation of the mathematical formalism of quantum mechanics, specifying the physical meaning of the mathematical entities of the theory.

History of interpretations
The definition of quantum theorists' terms, such as wave functions and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across the field, whereas Max Born reinterpreted it as the electron's probability density distributed across the field. Although the Copenhagen interpretation was originally most popular, quantum decoherence has gained popularity. Thus the many-worlds interpretation has been gaining acceptance.[1][2] Moreover, the strictly formalist position, shunning interpretation, has been challenged by proposals for falsifiable experiments that might one day distinguish among interpretations, as by measuring an AI consciousness[3] or via quantum computing.[4] As a rough guide to the development of the mainstream view during the 1990s and 2000s, consider the "snapshot" of opinions collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011.[5] The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the Quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll."

Nature of interpretation
More or less, all interpretations of quantum mechanics share two qualities:
1. They interpret a formalism—a set of equations and principles to generate predictions via input of initial conditions
2. They interpret a phenomenology—a set of observations, including those obtained by empirical research and those obtained informally, such as humans' experience of an unequivocal world
Two qualities vary among interpretations:
1. Ontology—claims about what things, such as categories and entities, exist in the world
2. Epistemology—claims about the possibility, scope, and means toward relevant knowledge of the world
In philosophy of science, the distinction of knowledge versus reality is termed epistemic versus ontic.
A general law is a regularity of outcomes (epistemic), whereas a causal mechanism may regulate the outcomes (ontic). A phenomenon can receive interpretation either ontic or epistemic. For instance, indeterminism may be attributed to limitations of human observation and perception (epistemic), or may be explained as a real, existing indeterminism encoded in the universe (ontic). Confusing the epistemic with the ontic, as if one were to presume that a general law actually "governs" outcomes and that the statement of a regularity has the role of a causal mechanism, is a category mistake. In a broad sense, scientific theory can be viewed as offering scientific realism—approximately true description or explanation of the natural world—or might be perceived with antirealism. A realist stance seeks the epistemic and the ontic, whereas an antirealist stance seeks the epistemic but not the ontic. In the 20th century's first half, antirealism was mainly logical positivism, which sought to exclude unobservable aspects of reality from scientific theory. Since the 1950s, antirealism has been more modest, usually instrumentalism, permitting talk of unobservable aspects, but ultimately discarding the very question of realism and posing scientific theory as a tool to help humans make predictions, not to attain metaphysical understanding of the world. The instrumentalist view is carried by the famous quote of David Mermin, "Shut up and calculate", often misattributed to Richard Feynman.[6] Other approaches to resolve conceptual problems introduce new mathematical formalism, and so propose alternative theories with their interpretations. An example is Bohmian mechanics, whose empirical equivalence with the three standard formalisms—Schrödinger's wave mechanics, Heisenberg's matrix mechanics, and Feynman's path integral formalism, which are all empirically equivalent to one another—is doubtful.[citation needed]

Challenges to interpretation
Difficulties reflect a number of points about quantum mechanics:
1. Abstract, mathematical nature of quantum field theories
2. Existence of apparently indeterministic and yet irreversible processes
3. Role of the observer in determining outcomes
4. Classically unexpected correlations between remote objects
5. Complementarity of proffered descriptions
6. Rapidly rising intricacy, far exceeding humans' present calculational capacity, as a system's size increases
7. Lack of interest in this subject by Dirac and other notables (including Feynman)
The mathematical structure of quantum mechanics is based on rather abstract mathematics, like Hilbert space. In classical field theory, a physical property at a given location in the field is readily derived. In Heisenberg's formalism, on the other hand, to derive physical information about a location in the field, one must apply a quantum operation to a quantum state, an elaborate mathematical process.[7] Schrödinger's formalism describes a waveform governing probability of outcomes across a field. Yet how do we find a particle at a specific location when its wavefunction is a mere probability distribution of existence spanning a vast region of space? The act of measurement can interact with the system state in peculiar ways, as found in double-slit experiments. The Copenhagen interpretation holds that the myriad probabilities across a quantum field are unreal, yet that the act of observation/measurement collapses the wavefunction and sets a single possibility to become real.
Yet quantum decoherence grants that all the possibilities can be real, and that the act of observation/measurement sets up new subsystems.[8] Quantum entanglement, as illustrated in the EPR paradox, seemingly violates principles of local causality.[9] Complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. Still, complementarity does not usually imply that classical logic is at fault (although Hilary Putnam took such a view in "Is Logic Empirical?"); rather, the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). As now well known, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects (Omnès 1999). Since the intricacy of a quantum system grows exponentially with its size, it is difficult to derive classical approximations.

Instrumentalist interpretation
Any modern scientific theory requires at the very least an instrumentalist description that relates the mathematical formalism to experimental practice and prediction. In the case of quantum mechanics, the most common instrumentalist description is an assertion of statistical regularity between state preparation processes and measurement processes. That is, if a measurement of a real-valued quantity is performed many times, each time starting with the same initial conditions, the outcome is a well-defined probability distribution over the real numbers; moreover, quantum mechanics provides a computational instrument to determine statistical properties of this distribution, such as its expectation value. Calculations for measurements performed on a system S postulate a Hilbert space H over the complex numbers. When the system S is prepared in a pure state, it is associated with a vector in H. Measurable quantities are associated with Hermitian operators acting on H: these are referred to as observables. Repeated measurement of an observable A where S is prepared in state $\psi$ yields a distribution of values. The expectation value of this distribution is given by the expression
$$\langle A \rangle = \langle \psi \mid A \mid \psi \rangle.$$
This mathematical machinery gives a simple, direct way to compute a statistical property of the outcome of an experiment, once it is understood how to associate the initial state with a Hilbert space vector, and the measured quantity with an observable (that is, a specific Hermitian operator). As an example of such a computation, the probability of finding the system in a given state $\lvert \phi \rangle$ is given by computing the expectation value of the (rank-1) projection operator
$$\Pi = \lvert \phi \rangle \langle \phi \rvert.$$
The probability is then the non-negative real number given by
$$P = \langle \psi \mid \Pi \mid \psi \rangle = \lvert \langle \phi \mid \psi \rangle \rvert^{2}.$$
By abuse of language, a bare instrumentalist description could be referred to as an interpretation, although this usage is somewhat misleading since instrumentalism explicitly avoids any explanatory role; that is, it does not attempt to answer the question why.

Summary of common interpretations of quantum mechanics
Classification adopted by Einstein
An interpretation (i.e. a semantic explanation of the formal mathematics of quantum mechanics) can be characterized by its treatment of certain matters addressed by Einstein, such as realism, completeness, local realism, and determinism. To explain these properties, we need to be more explicit about the kind of picture an interpretation provides.
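As a small, purely illustrative numerical sketch of the expectation-value and measurement-probability formulas in the instrumentalist description above (the two-dimensional Hilbert space, the particular state, and the particular observable are arbitrary choices, not anything prescribed by the article):

```python
import numpy as np

# State vector psi in a 2-dimensional Hilbert space, a Hermitian observable A,
# the expectation value <psi|A|psi>, and the probability |<phi|psi>|^2 of
# finding the system in another state phi via the rank-1 projector |phi><phi|.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)          # normalized state vector
A = np.array([[1.0, 0.0], [0.0, -1.0]])           # a Hermitian observable

expectation = np.vdot(psi, A @ psi).real          # <psi|A|psi>

phi = np.array([1.0, 0.0])                        # another normalized state
projector = np.outer(phi, phi.conj())             # rank-1 projector |phi><phi|
probability = np.vdot(psi, projector @ psi).real  # <psi|Pi|psi> = |<phi|psi>|^2

print(expectation)   # 0.0 for this choice of psi and A
print(probability)   # 0.5
```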
To that end we will regard an interpretation as a correspondence between the elements of the mathematical formalism M and the elements of an interpreting structure I, where: • The mathematical formalism M consists of the Hilbert space machinery of ket-vectors, self-adjoint operators acting on the space of ket-vectors, unitary time dependence of the ket-vectors, and measurement operations. In this context a measurement operation is a transformation which turns a ket-vector into a probability distribution (for a formalization of this concept see quantum operations). • The interpreting structure I includes states, transitions between states, measurement operations, and possibly information about spatial extension of these elements. A measurement operation refers to an operation which returns a value and might result in a system state change. Spatial information would be exhibited by states represented as functions on configuration space. The transitions may be non-deterministic or probabilistic or there may be infinitely many states. The crucial aspect of an interpretation is whether the elements of I are regarded as physically real. Hence the bare instrumentalist view of quantum mechanics outlined in the previous section is not an interpretation at all, for it makes no claims about elements of physical reality. The current usage of realism and completeness originated in the 1935 paper in which Einstein and others proposed the EPR paradox.[10] In that paper the authors proposed the concepts element of reality and the completeness of a physical theory. They characterised element of reality as a quantity whose value can be predicted with certainty before measuring or otherwise disturbing it, and defined a complete physical theory as one in which every element of physical reality is accounted for by the theory. In a semantic view of interpretation, an interpretation is complete if every element of the interpreting structure is present in the mathematics. Realism is also a property of each of the elements of the maths; an element is real if it corresponds to something in the interpreting structure. For example, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is said to correspond to an element of physical reality, while in other interpretations it is not. Determinism is a property characterizing state changes due to the passage of time, namely that the state at a future instant is a function of the state in the present (see time evolution). It may not always be clear whether a particular interpretation is deterministic or not, as there may not be a clear choice of a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic and the other not. Local realism has two aspects: • The value returned by a measurement corresponds to the value of some function in the state space. In other words, that value is an element of reality; • The effects of measurement have a propagation speed not exceeding some universal limit (e.g. the speed of light). In order for this to make sense, measurement operations in the interpreting structure must be localized. A precise formulation of local realism in terms of a local hidden variable theory was proposed by John Bell. 
Bell's theorem, combined with experimental testing, restricts the kinds of properties a quantum theory can have, the primary implication being that quantum mechanics cannot satisfy both the principle of locality and counterfactual definiteness. The Copenhagen interpretation[edit] The Copenhagen interpretation is the "standard" interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the probabilistic interpretation of the wavefunction proposed originally by Max Born. The Copenhagen interpretation rejects questions like "where was the particle before I measured its position?" as meaningless. The measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function in a manner consistent with the well-defined probabilities that are assigned to each possible state. According to the interpretation, the interaction of an observer or apparatus that is external to the quantum system is the cause of wave function collapse, thus according to Paul Davies, "reality is in the observations, not in the electron".[11] What collapses in this interpretation is the knowledge of the observer and not an "objective" wavefunction. Many worlds[edit] The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment producing entanglement, repeatedly "splitting" the universe into mutually unobservable alternate histories—effectively distinct universes within a greater multiverse. In this interpretation the wavefunction has objective reality. Consistent histories[edit] Main article: Consistent histories The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle). Ensemble interpretation, or statistical interpretation[edit] The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. 
Probably the most notable supporter of such an interpretation was Einstein: The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. — Einstein in Albert Einstein: Philosopher-Scientist, ed. P.A. Schilpp (Harper & Row, New York) The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University, author of the graduate level text book Quantum Mechanics, A Modern Development. An experiment illustrating the ensemble interpretation is provided in Akira Tonomura's Video clip 1.[12] It is evident from this double-slit experiment with an ensemble of individual electrons that, since the quantum mechanical wave function (absolutely squared) describes the completed interference pattern, it must describe an ensemble. A new version of the ensemble interpretation that relies on a reformulation of probability theory was introduced by Raed Shaiia.[13][14] De Broglie–Bohm theory[edit] The de Broglie–Bohm theory of quantum mechanics is a theory by Louis de Broglie and extended later by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single space-time, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden variable theory, and by embracing non-locality it satisfies Bell's inequality. The measurement problem is resolved, since the particles have definite positions at all times.[15] Collapse is explained as phenomenological.[16] Relational quantum mechanics[edit] The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but the relations between them.[17][18] An independent relational approach to quantum mechanics was developed in analogy with David Bohm's elucidation of special relativity,[19] in which a detection event is regarded as establishing a relationship between the quantized field and the detector. 
The inherent ambiguity associated with applying Heisenberg's uncertainty principle is subsequently avoided.[20] Transactional interpretation[edit] The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory.[21] It describes a quantum interaction in terms of a standing wave formed by the sum of a retarded (forward-in-time) and an advanced (backward-in-time) wave. The author argues that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes. Stochastic mechanics[edit] An entirely classical derivation and interpretation of Schrödinger's wave equation by analogy with Brownian motion was suggested by Princeton University professor Edward Nelson in 1966.[22] Similar considerations had previously been published, for example by R. Fürth (1933), I. Fényes (1952), and Walter Weizel (1953), and are referenced in Nelson's paper. More recent work on the stochastic interpretation has been done by M. Pavon.[23] An alternative stochastic interpretation was developed by Roumen Tsekov.[24] Objective collapse theories[edit] Objective collapse theories differ from the Copenhagen interpretation in regarding both the wavefunction and the process of collapse as ontologically objective. In objective theories, collapse occurs randomly ("spontaneous localization"), or when some physical threshold is reached, with observers having no special role. Thus, they are realistic, indeterministic, no-hidden-variables theories. The mechanism of collapse is not specified by standard quantum mechanics, which needs to be extended if this approach is correct, meaning that Objective Collapse is more of a theory than an interpretation. Examples include the Ghirardi-Rimini-Weber theory[25] and the Penrose interpretation.[26] von Neumann/Wigner interpretation: consciousness causes the collapse[edit] In his treatise The Mathematical Foundations of Quantum Mechanics, John von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the Schrödinger equation (the universal wave function). He also described how measurement could cause a collapse of the wave function.[27] This point of view was prominently expanded on by Eugene Wigner, who argued that human experimenter consciousness (or maybe even dog consciousness) was critical for the collapse, but he later abandoned this interpretation.[28][29] Variations of the von Neumann interpretation include: Subjective reduction research This principle, that consciousness causes the collapse, is the point of intersection between quantum mechanics and the mind/body problem; and researchers are working to detect conscious events correlated with physical events that, according to quantum theory, should involve a wave function collapse; but, thus far, results are inconclusive.[30][31] Participatory anthropic principle (PAP) Main article: Anthropic principle John Archibald Wheeler's participatory anthropic principle says that consciousness plays some role in bringing the universe into existence.[32] Other physicists have elaborated their own variations of the von Neumann interpretation; including: • Henry P. 
Stapp (Mindful Universe: Quantum Mechanics and the Participating Observer) • Bruce Rosenblum and Fred Kuttner (Quantum Enigma: Physics Encounters Consciousness) • Amit Goswami (The Self-Aware Universe) Many minds[edit] The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. Quantum logic[edit] Main article: Quantum logic Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical boolean logic with the facts related to measurement and observation in quantum mechanics. Quantum information theories[edit] Quantum informational approaches[33] have attracted growing support.[34][35] They subdivide into two kinds[36] • Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism[37] • Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking.[38] Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism. The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ...A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector...becomes problematical only if it is believed that the state vector is an objective property of the system...The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system[39] Modal interpretations of quantum theory[edit] Modal interpretations of quantum mechanics were first conceived of in 1972 by B. van Fraassen, in his paper "A formal approach to the philosophy of science." However, this term now is used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions:[40] • The Copenhagen variant • Kochen-Dieks-Healey Interpretations • Motivating Early Modal Interpretations, based on the work of R. Clifton, M. Dickson and J. Bub. Time-symmetric theories[edit] Several theories have been proposed which modify the equations of quantum mechanics to be symmetric with respect to time reversal.[41][42][43][44][45][46] (E.g. see Wheeler-Feynman time-symmetric theory). This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. 
In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, highlights how well the two-state vector formalism dovetails with Hugh Everett's many-worlds interpretation.[47]

Branching space-time theories
BST theories resemble the many-worlds interpretation; however, "the main difference is that the BST interpretation takes the branching of history to be a feature of the topology of the set of events with their causal relationships... rather than a consequence of the separate evolution of different components of a state vector."[48] In MWI, it is the wave function that branches, whereas in BST, the space-time topology itself branches. BST has applications to Bell's theorem, quantum computation and quantum gravity. It also has some resemblance to hidden variable theories and the ensemble interpretation: particles in BST have multiple well-defined trajectories at the microscopic level. These can only be treated stochastically at a coarse-grained level, in line with the ensemble interpretation.[48]

Other interpretations
As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed which have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism.

Comparison of interpretations
The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality; difficulties arise only when one attempts to "interpret" the theory. Nevertheless, designing experiments which would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued about by many people.

Interpretation | Author(s) | Deterministic? | Wavefunction real? | Unique history? | Hidden variables? | Collapsing wavefunctions? | Observer role? | Local? | Counterfactual definiteness? | Universal wavefunction exists?
Ensemble interpretation | Max Born, 1926 | Agnostic | No | Yes | Agnostic | No | No | No | No | No
Copenhagen interpretation | Niels Bohr, Werner Heisenberg, 1927 | No | No (1) | Yes | No | Yes (2) | Causal | No | No | No
de Broglie–Bohm theory | Louis de Broglie, 1927, David Bohm, 1952 | Yes | Yes (3) | Yes (4) | Yes | No | No | No (17) | Yes | Yes
von Neumann interpretation | John von Neumann, 1932, John Archibald Wheeler, Eugene Wigner | No | Yes | Yes | No | Yes | Causal | No | No | Yes
Quantum logic | Garrett Birkhoff, 1936 | Agnostic | Agnostic | Yes (5) | No | No | Interpretational (6) | Agnostic | No | No
Many-worlds interpretation | Hugh Everett, 1957 | Yes | Yes | No | No | No | No | Yes | Ill-posed | Yes
Time-symmetric theories | Satosi Watanabe, 1955 | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes
Stochastic interpretation | Edward Nelson, 1966 | No | No | Yes | Yes (16) | No | No | No | Yes (16) | No
Many-minds interpretation | H. Dieter Zeh, 1970 | Yes | Yes | No | No | No | Interpretational (7) | Yes | Ill-posed | Yes
Consistent histories | Robert B. Griffiths, 1984 | No | No | No | No | No | No | Yes | No | Yes
Objective collapse theories | Ghirardi–Rimini–Weber, 1986; Penrose interpretation, 1989 | No | Yes | Yes | No | Yes | No | No | No | No
Transactional interpretation | John G. Cramer, 1986 | No | Yes | Yes | No | Yes (9) | No | No (14) | Yes | No
Relational interpretation | Carlo Rovelli, 1994 | Agnostic | No | Agnostic (10) | No | Yes (11) | Intrinsic (12) | No (18) | No | No

• 1 According to Bohr, the concept of a physical state independent of the conditions of its experimental observation does not have a well-defined meaning. According to Heisenberg the wavefunction represents a probability, but not an objective reality itself in space and time.
• 2 According to the Copenhagen interpretation, the wavefunction collapses when a measurement is performed.
• 3 Both particle AND guiding wavefunction are real.
• 4 Unique particle history, but multiple wave histories.
• 5 But quantum logic is more limited in applicability than Coherent Histories.
• 6 Quantum mechanics is regarded as a way of predicting observations, or a theory of measurement.
• 7 Observers separate the universal wavefunction into orthogonal sets of experiences.
• 9 In the TI the collapse of the state vector is interpreted as the completion of the transaction between emitter and absorber.
• 10 Comparing histories between systems in this interpretation has no well-defined meaning.
• 11 Any physical interaction is treated as a collapse event relative to the systems involved, not just macroscopic or conscious observers.
• 12 The state of the system is observer-dependent, i.e., the state is specific to the reference frame of the observer.
• 14 The transactional interpretation is explicitly non-local.
• 15 The assumption of intrinsic periodicity is an element of non-locality consistent with relativity, as the periodicity varies in a causal way.
• 16 In the stochastic interpretation it is not possible to define velocities for particles, i.e. the paths are not smooth. Moreover, to know the motion of the particles at any moment, you have to know what the Markov process is. However, once we know the exact initial conditions and the Markov process, the theory is in fact a realistic interpretation of quantum mechanics.
• 17 The kind of non-locality required by the theory, sufficient to violate the Bell inequalities, is weaker than that assumed in EPR. In particular, this kind of non-locality is compatible with the no-signaling theorem and Lorentz invariance.

See also

References
1. ^ Vaidman, L. (2002, March 24). Many-Worlds Interpretation of Quantum Mechanics. Retrieved March 19, 2010, from Stanford Encyclopedia of Philosophy:
2. ^ Frank J. Tipler (1994).
The Physics of Immortality: Modern Cosmology, God, and the Resurrection of the Dead. Anchor Books. ISBN 978-0-385-46799-5.  A controversial poll mentioned in found that of 72 "leading cosmologists and other quantum field theorists", 58% including Stephen Hawking, Murray Gell-Mann, and Richard Feynman supported a many-worlds interpretation ["Who believes in many-worlds?",, Accessed online: 24 Jan 2011]. 3. ^ Quantum theory as a universal physical theory, by David Deutsch, International Journal of Theoretical Physics, Vol 24 #1 (1985) 4. ^ Three connections between Everett's interpretation and experiment Quantum Concepts of Space and Time, by David Deutsch, Oxford University Press (1986) 5. ^ Schlosshauer, Maximilian; Kofler, Johannes; Zeilinger, Anton (2013-01-06). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222–230. arXiv:1301.1069Freely accessible. doi:10.1016/j.shpsb.2013.04.004.  6. ^ For a discussion of the provenance of the phrase "shut up and calculate", see Mermin, N. David (2004). "Could feynman have said this?". Physics Today. 57 (5): 10. doi:10.1063/1.1768652.  7. ^ Meinard Kuhlmann, "Physicists debate whether the world is made of particles or fields—or something else entirely", Scientific American, 24 Jul 2013. 8. ^ Guido Bacciagaluppi, "The role of decoherence in quantum mechanics", The Stanford Encyclopedia of Philosophy (Winter 2012), Edward N Zalta, ed. 9. ^ La nouvelle cuisine, by John S Bell, last article of Speakable and Unspeakable in Quantum Mechanics, second edition. 10. ^ Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can quantum-mechanical description of physical reality be considered complete?". Phys. Rev. 47: 777–780. doi:10.1103/physrev.47.777.  11. ^,Werner/Heisenberg,%20Werner%20-%20Physics%20and%20philosophy.pdf 12. ^ "An experiment illustrating the ensemble interpretation". Retrieved 2011-01-24.  13. ^ Shaiia, Raed M. (9 February 2015). "On the Measurement Problem". doi:10.5923/j.ijtmp.20140405.04.  14. ^ 15. ^ Maudlin, T. (1995). "Why Bohm's Theory Solves the Measurement Problem". Philosophy of Science. 62: 479–483. doi:10.1086/289879.  16. ^ Durr, D.; Zanghi, N.; Goldstein, S. (Nov 14, 1995). "Bohmian Mechanics as the Foundation of Quantum Mechanics ". arXiv:quant-ph/9511016Freely accessible.  Also published in J.T. Cushing; Arthur Fine; S. Goldstein (17 April 2013). Bohmian Mechanics and Quantum Theory: An Appraisal. Springer Science & Business Media. pp. 21–43. ISBN 978-94-015-8715-0.  17. ^ "Relational Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2011-01-24.  18. ^ For more information, see Carlo Rovelli (1996). "Relational Quantum Mechanics". International Journal of Theoretical Physics. 35 (8): 1637–1678. arXiv:quant-ph/9609002Freely accessible. Bibcode:1996IJTP...35.1637R. doi:10.1007/BF02302261.  19. ^ David Bohm, The Special Theory of Relativity, Benjamin, New York, 1965 20. ^ See relational approach to wave-particle duality. For a full account see Zheng, Qianbing; Kobayashi, Takayoshi (1996). "Quantum Optics as a Relativistic Theory of Light" (PDF). Physics Essays. 9 (3): 447. doi:10.4006/1.3029255.  Also, see Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240. 21. ^ "Quantum Nocality – Cramer". Retrieved 2011-01-24.  22. ^ Nelson, E (1966). "Derivation of the Schrödinger Equation from Newtonian Mechanics". Phys. Rev. 150: 1079–1085. 
doi:10.1103/physrev.150.1079.  23. ^ Pavon, M. (2000). "Stochastic mechanics and the Feynman integral". J. Math. Phys. 41: 6060–6078. doi:10.1063/1.1286880.  24. ^ Roumen Tsekov (2012). "Bohmian Mechanics versus Madelung Quantum Hydrodynamics". Ann. Univ. Sofia, Fac. Phys. SE: 112–119. arXiv:0904.0723Freely accessible. Bibcode:2012AUSFP..SE..112T.  25. ^ "Frigg, R. GRW theory" (PDF). Retrieved 2011-01-24.  26. ^ "Review of Penrose's Shadows of the Mind". 1999. Archived from the original on 2001-02-09. Retrieved 2011-01-24.  27. ^ von Neumann, John. (1932/1955). Mathematical Foundations of Quantum Mechanics. Princeton: Princeton University Press. Translated by Robert T. Beyer. 28. ^ [Michael Esfeld, (1999), "Essay Review: Wigner's View of Physical Reality", published in Studies in History and Philosophy of Modern Physics, 30B, pp. 145–154, Elsevier Science Ltd.] 29. ^ Zvi Schreiber (1995). "The Nine Lives of Schrödinger's Cat". arXiv:quant-ph/9501014Freely accessible.  30. ^ Dick J. Bierman and Stephen Whitmarsh. (2006). Consciousness and Quantum Physics: Empirical Research on the Subjective Reduction of the State Vector. in Jack A. Tuszynski (Ed). The Emerging Physics of Consciousness. p. 27-48. 31. ^ Nunn, C. M. H.; et al. (1994). "Collapse of a Quantum Field may Affect Brain Function. '". Journal of Consciousness Studies'. 1 (1): 127–139.  32. ^ "- The anthropic universe". 2006-02-18. Retrieved 2011-01-24.  33. ^ "In the beginning was the bit". New Scientist. 2001-02-17. Retrieved 2013-01-25.  34. ^ Kate Becker (2013-01-25). "Quantum physics has been rankling scientists for decades". Boulder Daily Camera. Retrieved 2013-01-25.  36. ^ Information, Immaterialism, Instrumentalism: Old and New in Quantum Information. Christopher G. Timpson 37. ^ Timpson,Op. Cit.: "Let us call the thought that information might be the basic category from which all else flows informational immaterialism." 38. ^ "Physics concerns what we can say about nature". (Niels Bohr, quoted in Petersen, A. (1963). The philosophy of Niels Bohr. Bulletin of the Atomic Scientists, 19(7):8–14.) 39. ^ Hartle, J. B. (1968). "Quantum mechanics of individual systems". Am. J. Phys. 36 (8): 704–712. doi:10.1119/1.1975096.  40. ^ "Modal Interpretations of Quantum Mechanics". Stanford Encyclopedia of Philosophy. Retrieved 2011-01-24.  41. ^ Watanabe, Satosi (1955). "Symmetry of physical laws. Part III. Prediction and retrodiction". Reviews of Modern Physics. 27 (2): 179–186. doi:10.1103/revmodphys.27.179.  42. ^ Aharonov, Y.; et al. (1964). "Time Symmetry in the Quantum Process of Measurement". Phys. Rev. 134: B1410–1416. doi:10.1103/physrev.134.b1410.  43. ^ Aharonov, Y. and Vaidman, L. "On the Two-State Vector Reformulation of Quantum Mechanics." Physica Scripta, Volume T76, pp. 85-92 (1998). 44. ^ Wharton, K. B. (2007). "Time-Symmetric Quantum Mechanics". Foundations of Physics. 37 (1): 159–168. doi:10.1007/s10701-006-9089-1.  45. ^ Wharton, K. B. (2010). "A Novel Interpretation of the Klein–Gordon Equation". Foundations of Physics. 40 (3): 313–332. doi:10.1007/s10701-009-9398-2.  46. ^ Heaney, M. B. (2013). "A Symmetrical Interpretation of the Klein–Gordon Equation". Foundations of Physics. 43: 733–746. doi:10.1007/s10701-013-9713-9.  47. ^ Yakir Aharonov, Lev Vaidman: The Two-State Vector Formalism of Quantum Mechanics: an Updated Review. In: Juan Gonzalo Muga, Rafael Sala Mayato, Íñigo Egusquiza (eds.): Time in Quantum Mechanics, Volume 1, Lecture Notes in Physics 734, pp. 
399–447, 2nd ed., Springer, 2008, ISBN 978-3-540-73472-7, DOI 10.1007/978-3-540-73473-4_13, arXiv:quant-ph/0105101v2 (submitted 21 May 2001, version of 10 June 2007), p. 443 48. ^ a b Sharlow, Mark; "What Branching Spacetime might do for Physics" p.2 • Bub, J.; Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics". Studies in History and Philosophy of Modern Physics. 27B: 181–219.  • Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. University of Chicago Press. • Dickson, M., 1994, "Wavefunction tails in the modal interpretation" in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA 1" 366–76. East Lansing, Michigan: Philosophy of Science Association. • --------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations" in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics. Dordrecht: Kluwer Academic Publishers: 9–48. • Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)." arXiv:quant-ph/0205039 • -------- and A. Peres, 2000, "Quantum theory needs no ‘interpretation’", Physics Today. • Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday. ISBN 0-385-23569-0. • Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge Univ. Press. ISBN 0-521-56457-3. • Jackiw, Roman; Kleppner, D. (2000). "One Hundred Years of Quantum Physics". Science. 289 (5481): 893.  • Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill. • --------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons. • Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicholson. • de Muynck, W. M., 2002. Foundations of quantum mechanics, an empiricist approach. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0932-1.[1] • Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton Univ. Press. • Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul. The chapter "Three views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences. • Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. Univ. of California Press. • Tegmark, Max; Wheeler, J. A. (2001). "100 Years of Quantum Mysteries". Scientific American. 284: 68–75. doi:10.1038/scientificamerican0201-68.  • Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303-66. • John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton: Princeton University Press, ISBN 0-691-08316-9, LoC QC174.125.Q38 1983. Further reading[edit] Almost all authors below are professional physicists. External links[edit] 1. ^ de Muynck, Willem M (2002). Foundations of quantum mechanics: an empiricist approach. Klower Academic Publishers. ISBN 1-4020-0932-1. Retrieved 2011-01-24.
The Schrödinger equation provides a probability density map of the atom. In light of that, are either of the following possible:
1. The orbital/electron cloud converges to a 2d surface without heat (absolute zero)?
2. Heat is responsible for the probability density variation from the above smooth surface?
I have taken two calculus-based physics courses, and Modern Physics with the Schrödinger equation, the Heisenberg uncertainty principle, etc.

1.) No. All the calculations one does in elementary quantum mechanics courses are at zero temperature. If they were at a finite temperature, you could never reliably say what quantum mechanical state your system is in; it would always be in an ensemble of different states. Since the ground-state wavefunction and ground-state density are not a 2d surface, you don't get one at $T = 0$.

2.) No. At zero temperature, the probability density of your electron is given by the ground state wavefunction: $$\varrho(x) = \psi_0^*(x) \psi_0(x)$$ At finite temperature, your system is best described by an ensemble of states. Basically, you get $$\varrho(x) = \sum_i p_i \psi_i^*(x) \psi_i(x)$$ where $p_i$ is the ensemble probability for your system to be in state $\psi_i(x)$. For a canonical ensemble, for example, you have $p_i \sim e^{-E_i/kT}$ if your $\psi_i(x)$ are the energy eigenstates with eigenenergies $E_i$. The same is true for any other expectation value: $$\langle \hat A \rangle = \sum_i p_i \langle \psi_i | \hat A | \psi_i \rangle$$ Note the two different expectation values here: one is $\langle \psi_i | \hat A | \psi_i \rangle$, the quantum mechanical expectation value of $\hat A$ when the system is in state $| \psi_i \rangle$. The sum over these, together with the $p_i$, then gives the thermodynamic expectation value. This framework is used everywhere in physics and has been proven to be mind-bogglingly exact.

+1. This is a very good statement of the state of affairs, according to standard quantum theory. For completeness, it's probably worth adding that this theory is incredibly well-tested experimentally. For instance, when people do atomic physics experiments, they do them at a very wide range of temperatures. The wavefunctions corresponding to the various atomic energy levels do not vary as functions of temperature. (I mention this only because it seems possible that the questioner is asking whether standard theory might be wrong, as opposed to asking what standard theory says. It isn't.) – Ted Bunn Apr 18 '11 at 22:11
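A minimal numerical sketch of the ensemble-averaged density described in the answer above, assuming purely for illustration a one-dimensional harmonic oscillator in natural units and truncating the Boltzmann sum to the lowest 20 levels:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x):
    """n-th harmonic-oscillator eigenfunction (units with hbar = m = omega = 1)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / sqrt(2**n * factorial(n) * sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

def thermal_density(x, kT, n_max=20):
    """rho(x) = sum_i p_i |psi_i(x)|^2 with Boltzmann weights p_i ~ exp(-E_i/kT)."""
    energies = np.arange(n_max) + 0.5      # E_n = n + 1/2
    weights = np.exp(-energies / kT)
    weights /= weights.sum()               # normalize the ensemble probabilities
    return sum(w * psi(n, x)**2 for n, w in enumerate(weights))

x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
rho_cold = psi(0, x)**2                    # T = 0: pure ground-state density
rho_warm = thermal_density(x, kT=2.0)      # finite temperature: ensemble average

print("spread at T=0   :", np.sqrt(np.sum(x**2 * rho_cold) * dx))
print("spread at kT=2.0:", np.sqrt(np.sum(x**2 * rho_warm) * dx))
```

Raising kT broadens the density, but at every temperature it remains a smooth probability cloud rather than collapsing to a lower-dimensional surface.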
The main application of Feynman path integrals (and the primary motivation behind them) is in Quantum Field Theory - currently this is something standard for physicists, even if the mathematical theory of functional integration is not (yet) rigorous. My question is: what are the applications of path integrals outside QFT? By "outside QFT" I mean non-QFT physics as well as various branches of mathematics. (A similar question is Doing geometry using Feynman Path Integral?, but it concerns only one possible application.)

Some expansions in deformation theory, Lie theory, the study of graph cohomology etc. are Feynman-integral-like expansions, and one can formally define "theories leading to them". See for example articles by Dror Bar-Natan for some such combinatorial and Lie-theoretic aspects. Kontsevich's own deformation quantization formula for usual quantum mechanics is governed by a theory called the Poisson sigma model (this was the intuition behind Kontsevich's formula, though he did not explicitly write it that way, but it was later rediscovered by Cattaneo and Felder). On the other hand, I find very fascinating Sasha Goncharov's "theory" giving a Feynman diagram expansion producing "correlators" formally like in physics, but in fact consisting of Hodge-theoretic information on Kähler manifolds:

Witten, I think, deserves much of the credit for getting mathematicians interested in the path integral, with his paper Quantum field theory and the Jones polynomial. In particular, path integrals are closely related to questions about (quantum) groups. For one direction, namely the perturbative Feynman path integral, you should check out Dror Bar-Natan's thesis and later work.

Also, wasn't it Atiyah who first asked whether there was a physics explanation of the Jones polynomial? – Kevin H. Lin Apr 6 '10 at 0:25 For a pretty narrow definition of mathematicians: analysts, operator algebraists, and integrable systems people had been thinking about path integrals in various contexts long before Witten got involved. I'm not saying Witten hasn't been influential, especially in geometry and topology, but path integrals and mathematics didn't meet for the first time in the early 80s. – userN Apr 6 '10 at 1:28 Kevin and AJ both make good points, and I apologize for misrepresenting the history. My only excuse is that the Witten paper is a nice place to start a history of the topics I've been most interested in. (Incidentally, I originally posted only Dror's thesis, and then decided that perhaps I should mention Witten's motivation for it.) – Theo Johnson-Freyd Apr 6 '10 at 2:08

The path integral has many applications: Mathematical Finance: In mathematical finance one is faced with the problem of finding the price for an "option." An option is a contract between a buyer and a seller that gives the buyer the right but not the obligation to buy or sell a specified asset, the underlying, on or before a specified future date, the option's expiration date, at a given price, the strike price. For example, an option may give the buyer the right but not the obligation to buy a stock at some future date at a price set when the contract is settled. One method of finding the price of such an option involves path integrals. The price of the underlying asset varies with time between when the contract is settled and the expiration date.
The set of all possible paths of the underlying in this time interval is the space over which the path integral is evaluated. The integral over all such paths is taken to determine the average payoff the seller will make to the buyer for the settled strike price. This average price is then discounted, adjusted for interest, to arrive at the current value of the option. Statistical Mechanics: In statistical mechanics the path integral is used in more-or-less the same manner as it is used in quantum field theory. The main difference is a factor of $i$. One has a given physical system at a given temperature $T$ with an internal energy $U(\phi)$ dependent upon the configuration $\phi$ of the system. The probability that the system is in a given configuration $\phi$ is proportional to $e^{-U(\phi)/k_B T}$, where $k_B$ is a constant called the Boltzmann constant. The path integral is then used to determine the average value of any quantity $A(\phi)$ of physical interest $\left< A \right> := Z^{-1} \int D \phi A(\phi) e^{-U(\phi)/k_B T}$, where the integral is taken over all configurations and $Z$, the partition function, is used to properly normalize the answer. Physically Correct Rendering: Rendering is a process of generating an image from a model through execution of a computer program. The model contains various lights and surfaces. The properties of a given surface are described by a material. A material describes how light interacts with the surface. The surface may be mirrored, matte, diffuse or any other number of things. To determine the color of a given pixel in the produced image one must trace all possible paths from the lights of the model to the surface point in question. The path integral is used to implement this process through various techniques such as path tracing, photon mapping, and Metropolis light transport. Topological Quantum Field Theory: In topological quantum field theory the path integral is used in the exact same manner as it is used in quantum field theory. Basically, anywhere one uses Monte Carlo methods one is using the path integral.

Some of your examples are about Wiener integrals (minus sign in the exponent rather than imaginary phase), which are mathematically well-defined, rather than Feynman path integrals, which are successfully defined only in some special cases. – Zoran Skoda Apr 6 '10 at 21:03 I agree. All examples I am aware of outside of QFT exchange the i for a -1. Are you aware of any non-QFT examples that have an i in the exponent? – Kelly Davis Apr 6 '10 at 22:26

One application is to computer graphics. When simulating the effect of lighting a translucent material (see my avatar!) you often need to integrate over all possible paths from the light source to the camera via the material. This is similar to the Feynman integral in quantum mechanics, but note that this is an integral in the domain of classical geometric optics, not quantum field theory. I believe it was Jerry Tessendorf who pioneered this approach in the graphics world. You may have watched movies with effects rendered using techniques derived from Tessendorf's! I should add that this is a particular case of what Steve Huntsman describes in his answer.

I usually think of a path integral as just a very glorified and specific version of a simple and general construction from probability. Namely, a path integral is basically an element of an ordered product of matrices belonging to some semigroup.
So under this interpretation, "path integrals" are ubiquitous when this sort of object is being considered--particularly in Markov processes. Every time you're computing a multi-step transition probability, you're doing a path integral, and vice versa. In discrete-time Markov processes you take a power of the transition matrix. Each element of it encodes all the ways in which you can get from the initial to the final state in the appropriate number of steps, along with their proper weights. In continuous time it's the same basic idea, but a bit more involved. The idea is covered here for inhomogeneous continuous-time processes in the course of demonstrating a fairly general form of the Dynkin formula. Here's the gist in physics: We can arrive at a formal solution to the Schrödinger equation via a time evolution operator, i.e. $\vert \psi(t) \rangle = U(t) \lvert \psi(0) \rangle$, $U(t) = e^{-itH}$. But equivalently, the quantum initial-value problem is solved once we have the propagator/transition amplitude/Green function $U(x,t,x_0,t_0) = \langle x \lvert U(t-t_0) \rvert x_0 \rangle$, since $\psi(x,t) = \int dx_0 U(x,t,x_0,t_0) \psi(x_0,t_0)$. The transition amplitudes enable us to obtain transition probabilities by the simple expedient of taking squared norms. The transfer matrix is an infinitesimal time evolution operator: i.e., $T = U(\Delta t) = \exp(-i \Delta t \cdot H) = I - i\Delta t \cdot H$, where these equalities are up to $o(\Delta t)$. Since time evolution operators belong to a semigroup, we have after a simple manipulation that $U(x_N, t_0 + N \Delta t, x_0, t_0) = \langle x_N \lvert T^N \rvert x_0 \rangle$. Following Feynman, we can also obtain the propagator from the Lagrangian point of view. But the idea is still basically the same.

The Feynman path integral is connected with the saddle point method and the stationary phase method. In fact, it is used as a generating function for certain factors in a perturbation series. So it can be used wherever this technique may be used, if the problem requires certain normalizations. If you are looking for a variational solution and you cannot find an exact solution, the path integral is always an option, especially if you know the zeroth-order configuration and you want to account for certain perturbations given by polynomial potentials (because then you may treat them via functional derivatives; see

Path integral is NOT in general "related" to stationary phase; rather the stationary phase is an asymptotic method for integrals with rapidly oscillating phase, whose infinite-dimensional version (that version is to a large extent non-rigorous and underdeveloped mathematically) can sometimes be meaningfully APPLIED to the path integral. This is a path integral version of the WKB approximation of the usual approach to QM (nlab). Approximating variational extrema by the path integral is equally OK in a certain asymptotic regime. – Zoran Skoda Apr 7 '10 at 0:34 Yes, you are right - my mistake and inconsistency - in general (from the point of view of some kind of definition, for example by means of a general propagator composed at time-ordered points). You are right. But please, could you give me an example of this approach without such a method? Possibly the only one is the Gaussian path integral for the quantum oscillator. Other ones are usually treated in perturbation theory via the saddle point method.
– kakaz Apr 7 '10 at 12:20

If we understand QFT as the framework that unites quantum mechanics and special relativity, then I'd refer to Hagen Kleinert: "Path Integrals in Quantum Mechanics, Statistics, Polymer Physics and Financial Markets" for non-QFT, non-pure-mathematical applications.
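To make the "ordered product of matrices" picture from the Markov-process answer above concrete, here is a minimal self-contained sketch (the 3-state chain is invented purely for illustration): it computes a multi-step transition probability once as an explicit sum over all intermediate paths, weighting each path by the product of its one-step probabilities, and once as an entry of the matrix power, and the two agree.

```python
import numpy as np
from itertools import product

# A made-up 3-state discrete-time Markov chain; each row sums to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

start, end, steps = 0, 2, 4

# "Path sum": add up the weight of every length-4 path from start to end.
path_sum = 0.0
for middle in product(range(3), repeat=steps - 1):
    states = (start, *middle, end)
    weight = 1.0
    for a, b in zip(states[:-1], states[1:]):
        weight *= P[a, b]          # product of one-step transition probabilities
    path_sum += weight

# The same number as an element of the 4th power of the transition matrix.
matrix_entry = np.linalg.matrix_power(P, steps)[start, end]

print(path_sum, matrix_entry)      # identical up to floating-point rounding
```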
Are there any major fundamental results in finite-dimensional linear algebra discovered after the early XX century? Fundamental in the sense of non-numerical (numerical results, of course, are still interesting and important); and major in the sense of something on the scale of SVD or Jordan normal form.

(EDIT) As several commenters observed, using Jordan normal form as a benchmark sets the bar way too high. Let's try lowering it to Weyl's inequality.

Does the computational complexity of matrix multiplication (for arbitrary fields) count as "numerical"? If not, the critical exponent has moved as recently as last year. – Felipe Voloch Feb 27 '13 at 20:13

@Felipe, does a small movement in the critical exponent count as "major"? Jordan normal form is setting the bar rather high.... – Gerry Myerson Feb 27 '13 at 23:25

@Gerry: major enough to get published in JAMS. – Abdelmalek Abdesselam Feb 27 '13 at 23:28

I'm fond of Weyl's question from 1912: given the spectra $\lambda,\mu$ of two Hermitian matrices, what can you say about the spectrum $\nu$ of the sum? which Weyl gave the first inequalities on. The full list of inequalities was conjectured in the 1960s, proven in the late 1990s, and only cut down to the minimal list this century. I won't put this as an "answer" because seriously, Jordan normal form! – Allen Knutson Feb 28 '13 at 2:06

Here's our survey article on that result: – Allen Knutson Feb 28 '13 at 4:08

Definitely, some items on the top of my list are:

1. Random matrix theory --- both asymptotic and non-asymptotic; including things like the semicircular law, the circular law, and so on. Check out Terry Tao's blog for very nice summaries.
2. The resolution of Horn's conjecture (see this nice summary article by R. Bhatia, which also mentions several other nice connections).
3. Randomised linear algebra and progress on fast solutions to linear systems (see e.g. the very readable summary in N. Vishnoi's web book).
4. Advances in quantum information theory? Though I don't know how much of that I would push into just linear algebra.
5. Not advances in linear algebra itself, but the gigantic success of basic linear algebra in new areas (machine learning, information retrieval, etc., e.g. Google's PageRank method).

While I admit that 'non-numerical' is a bit of a vague criterion, I would still think that an almost linear time solution is not 'non-numerical', in the sense it was I believe intended. – quid Feb 28 '13 at 14:40

I would say the theory of quivers and in particular Gabriel's theorem on finite representation type and its extensions to tame type. Representations of quivers are essentially linear algebra problems in a different language. For instance Jordan canonical form is the description of indecomposable reps of a quiver with one vertex and a loop. In general things like the classification of two endomorphisms of vector spaces, matrix pencils and the n-subspace problem are all problems in the rep theory of quivers. The intro to the book of Gabriel-Roiter says more.

Added. A quiver is a directed multigraph, often assumed finite in this context. A representation of a quiver Q is an assignment of a vector space to each vertex and a linear transformation to each edge from the vector space at its source to the vector space at its target.
Isomorphisms are isomorphisms of vertex spaces making commuting squares with the edge linear transformations. There is a fairly straightforward notion of direct sum and hence of indecomposable rep. Finite rep type means finitely many isoclasses of indecomposables; tame type essentially means indecomposables come in 1-parameter families (plus finitely many exceptions) if you fix the dimensions of the vertex spaces. Wild means its representation theory contains that of all finite dimensional (and hence all finitely generated) algebras. In particular the first order theory is undecidable. Only finite, tame and wild occur.

Hmm. Are there any applications of quivers in linear algebra other than Jordan's form? After brief googling, it seems to me that quivers are used in all branches of mathematics, except for LA. – Timur Feb 28 '13 at 3:39

Subspace problems of the form 'classify all ways to embed n subspaces in a vector space' can be studied using quivers. The four subspace problem is studied in a nice paper of Gelfand and Ponomarev, 'Problems of linear algebra and classification of quadruples of subspaces...'. – George Melvin Feb 28 '13 at 3:53

Timur, the representation of quivers is linear algebra. – Mariano Suárez-Alvarez Feb 28 '13 at 4:29

Just putting the references asked for by Timur:

Since you lowered the level to Weyl's inequalities (1912), it is worth mentioning the improvements of these inequalities made by Ky Fan, Lidskii and others. They culminated in a deeply involved conjecture by A. Horn (1961), eventually proved by Knutson & Tao at the turn of the century.

This is a borderline suggestion, both in terms of how "major" it is and timing (does 1931 count as "early" 20th century?), but there is the Gershgorin circle theorem.

Also a borderline suggestion since it is multilinear rather than just linear: recent progress on low-rank tensor approximation for all kinds of different applications within mathematics. A list of applications from this preprint includes
• approximation of multidimensional integrals
• electronic structure calculations
• solving stochastic or parameter dependent PDEs
• approximating Green's functions in high dimensions
• solving Boltzmann-type equations or high-dimensional Schrödinger equations
• rational approximation problems
• computational finance
• multivariate regression and machine learning.

From Wikipedia: "The linear-programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems."

Both the ellipsoid and the interior point method look to me (note: I'm by no means an expert on this) like analysis-flavored algorithms built specifically for $\mathbb R$ rather than the setting of a general ordered field (or even real closed field); I wouldn't necessarily call them linear algebra for these reasons... – darij grinberg Feb 28 '13 at 5:03

I would not say that the ellipsoid method is "built for $\mathbf R$". It has a definitely arithmetic flavor. Indeed, to get a lower bound for the volume of ellipsoids, one uses crucially the (trivial but overwhelmingly important) fact that the absolute value of a nonzero integer is at least $1$.
– ACL Feb 28 '13 at 13:24

While I admit that 'non-numerical' is a bit of a vague criterion, I would still think that this is not 'non-numerical', in the sense it was I believe intended. – quid Feb 28 '13 at 14:39
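Since the question takes Weyl's inequalities as its benchmark, here is a small numerical illustration (a sketch, not a proof, with randomly generated matrices): for Hermitian $A$ and $B$ it checks the classical bounds $\lambda_{i+j-1}(A+B) \le \lambda_i(A) + \lambda_j(B)$, with eigenvalues indexed in decreasing order.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_hermitian(size):
    # Random complex matrix, symmetrized into a Hermitian one.
    M = rng.normal(size=(size, size)) + 1j * rng.normal(size=(size, size))
    return (M + M.conj().T) / 2

def eigenvalues_desc(M):
    # Eigenvalues of a Hermitian matrix, sorted in decreasing order.
    return np.sort(np.linalg.eigvalsh(M))[::-1]

A, B = random_hermitian(n), random_hermitian(n)
a, b, c = eigenvalues_desc(A), eigenvalues_desc(B), eigenvalues_desc(A + B)

# Weyl, in 0-based indices i, j with i + j <= n - 1:
#   lambda_{i+j}(A+B) <= lambda_i(A) + lambda_j(B)
holds = all(
    c[i + j] <= a[i] + b[j] + 1e-10
    for i in range(n) for j in range(n) if i + j < n
)
print("Weyl inequalities satisfied:", holds)
```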
Lanthanide Luminescence Software Package - 稀土发光软件

Theoretical Foundations of the Models Implemented in LUMPAC

Contents: Process of Geometry Optimization; Excited States Calculation; Intensity Parameters Calculation; Emission Radiative Rate Calculation; Energy Transfer Rates Calculation; Emission Quantum Yield Calculation

Process of Geometry Optimization

The potential energy surface (PES) is a surface of the calculated energy (E) as a function of the geometric parameters of a molecule (q), $E = f(q_1, q_2, ..., q_n)$, where n is the number of geometric parameters. A stationary point on the PES is defined by a flat point on the surface, represented mathematically by a vanishing gradient, $\partial E/\partial q_i = 0$ for all $i$. If the Hessian matrix of second derivatives has a negative eigenvalue, the stationary point is a transition state (a first-order saddle point has exactly one). In contrast, if all eigenvalues of the Hessian are positive, the stationary point corresponds to a minimum of energy, usually a local minimum. The lowest of the local minima, the global minimum, usually defines the most stable ground-state geometry of the molecule.

In general, the geometry optimization procedure consists in supplying an input molecular structure with geometrical parameters q0 expected to be as close as possible to the desired stationary point. This reasonable geometry is then submitted to an algorithm which systematically alters the atomic positions until a stationary minimum is reached, defined by its geometric parameters qi.

The computational chemistry methods that are applied to perform geometry optimizations may be divided roughly into two groups: molecular mechanics methods and quantum methods. In molecular mechanics methods, roughly speaking, the molecule is treated as a set of points, each representing an atom, interconnected by springs representing the chemical bonds. Such methods are very fast because the potentials are classical and no electronic wave functions are present. In contrast, quantum methods attempt to solve, for the entire molecular system, the famous Schrödinger eigenvalue equation $\hat{H}\Psi = E\Psi$, where $\hat{H}$ is the Hamiltonian operator, $\Psi$ is the wave function (the eigenvector), and E is the total energy of the system (the eigenvalue). The quantum computational methods are usually divided into four groups: i) the Hartree-Fock methods, which solve the self-consistent field Schrödinger equation under the independent particle approximation, calculating all possible two-electron integrals; ii) the post-Hartree-Fock methods, which also take into consideration the contributions due to electron correlation; iii) the semiempirical quantum chemical methods, which are based on the Hartree-Fock method, though some integrals are replaced by parameters adjusted to reproduce experimental data during the development of the method; and, finally, iv) the methods based on density functional theory (DFT), which consider the electron density as the fundamental entity, and not the wave function as all other quantum methods do.

The method of choice for performing geometry optimizations depends mainly on the number of atoms present in the system of interest. Normally, the standard procedure for geometry optimization consists in building the chemical structure by adding atoms in arbitrary positions and subsequently connecting them according to their chemical bonds. The next step is to pre-optimize the geometry using less costly computational methods. Molecular mechanics methods are quite fast and some of them have parameters available for almost all elements.
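As a toy numerical illustration of the minimization step described above (this is not LUMPAC's or MOPAC's actual optimizer, and the three-atom pair potential below is invented purely for illustration), one can hand a rough starting geometry q0 and an energy function to a standard quasi-Newton routine and let it walk downhill until the gradient vanishes:

```python
import numpy as np
from scipy.optimize import minimize

def energy(flat_coords):
    # Invented toy PES: three atoms interacting pairwise through a
    # Morse-like term with equilibrium distance 1.5 (arbitrary units).
    pos = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += (1.0 - np.exp(-(r - 1.5))) ** 2
    return e

# Rough starting structure (the "q0" of the text), flattened to a vector.
q0 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.2, 0.0],
               [0.3, 1.1, 0.1]]).ravel()

result = minimize(energy, q0, method="BFGS")
print("converged:", result.success)
print("gradient norm at the stationary point:", np.linalg.norm(result.jac))
print("optimized coordinates:\n", result.x.reshape(-1, 3))
```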
As a result, although molecular mechanics methods do not provide accurate geometries, such methods are usually applied to transform the drawn geometries into reasonably good starting structures for refinement by computationally more accurate and usually more expensive calculations [1]. The ground state geometries of lanthanide complexes can be calculated by two different quantum chemical based approaches: (i) DFT or ab initio methods with effective core potentials (ECP) for treating the lanthanide ions, or (ii) semiempirical methods. In 2006 [2] and 2011 [3] two papers were published in order to compare geometries predicted by semiempirical methods with those predicted by ab initio and DFT methods, with crystallographic data as reference. In contrast to what would be expected, the results showed that by enlarging the size of the basis set, or by including electron correlation, or both, deviations of the predicted coordination polyhedrons with respect to the crystallographic ones consistently increased, reducing the quality of the results. And among all ab initio methods evaluated, the method RHF/STO-3G using the MWB core effective potential was the most efficient for predicting the coordination polyhedron of lanthanide complexes [2]. This result confirms that the Sparkle models, which demand a much lower computational effort, have higher accuracy in calculations and modeling, when compared to ab initio/ECP ones [3]. In LUMPAC, the Sparkle models will be used in the geometry optimization step because such methods have an excellent capability of geometry prediction and also a considerably low computational cost. As a result, we will now provide a detailed description of these models. The procedure of development of the Sparkle model consists in parameterizing a semiempirical Hamiltonian, such as AM1 or PM3, for example, in which the lanthanide ion is replaced by a +3e point charge. This point charge is subjected to a repulsive potential exp(-αr), where the parameter α quantifies the size of the ion. This mathematic entity is called sparkle”. As the bond between Ln3+ and atom ligands has high ionic character, the Sparkle model has been consistently proven to be adequate. The first Sparkle model, named SMLC (Sparkle Model for the Calculation of Lanthanide Complexes), was developed by Andrade and coworkers in 1994 [4]. This version was parameterized for the AM1 semiempirical model with only one experimental structure in the parameterization set: the tris(acetylacetonate)-(1,10-phenanthroline) of europium (III). When this Sparkle model version was evaluated for a representative test containing 96 europium complexes, the SMLC model lead to errors of approximately 0.68 Å for Ln–L, lanthanide–ligand atom, distances. However, the second parameterized version of the Sparkle model [5], SMLC II, published in 2004, included Gaussian functions in the core-core repulsion energy. The errors for Ln–L distances decreased from 0.68 to 0.28 Å when tested with the same europium structures set. A new and much more sophisticated parameterization scheme was then carried out within AM1 for the Sparkle model in 2005 and was  initially developed for Eu3+, Gd3+, and Tb3+[6]. This new version of the model was called Sparkle/AM1. The main changes consisted in the application of more sophisticated statistical techniques, both in the selection of the most representative training sets as well as in the validation of the parameters obtained. 
In the Sparkle/AM1 model development of the three parameterized ions, more than 200 different crystallographic structures were used together with a new response function for minimization in the parameterization procedure. These changes made it possible to decrease the errors for Ln–L distances from 0.28 to 0.09 Å in europium complexes. Test sets of gadolinium complexes (70 structures) and of terbium complexes indicated errors of approximately 0.07 Å. Then, the Sparkle/AM1 model was generalized for all types of ligands and parameterized for all 15 trivalent lanthanide ions [7-14]. Currently, the Sparkle models are also parameterized for the following semiempirical models: PM3 [15-21], PM6 [22], PM7 [23] and RM1 [24]. The geometry optimization of lanthanide complexes has a great importance for studying the luminescent properties of the system. All published Sparkle models (Sparkle/AM1, Sparkle/PM3, Sparkle/PM6, Sparkle/PM7  and Sparkle/RM1) are available in MOPAC2012 [25]. The choice of which of the available Sparkle models is to be used must be based mainly on the capability of the underlying semiempirical method, either AM1, PM3, PM6, RM1 or PM7 to correctly describe the specific ligands involved. Nevertheless, many tests performed by our group suggest Sparkle/RM1 to be the version that presents the best overall results. Excited States Calculation The singlet and triplet excited states of the organic part can be calculated by using methods based on time-dependent density functional theory (TD-DFT) [26] or by the semiempirical INDO/S method [27, 28]. In 2001, Gorelsky and Lever [36] compared these two methodologies for the ground and excited states calculations of Ru(II) complexes.  The electronic spectra obtained by these two different methods showed excellent agreement with each other. However, even today, the TD-DFT method is still inappropriate to treat complexes with more than 100 atoms, due to its high demand of computational resources. Santos and coworkers evaluated the accuracy of the semiempirical INDO/S method in comparison with TD-DFT ab initio results in studies of lanthanide complexes [29]. The results showed that triplet state energies calculated by the semiempirical method presented errors similar to those obtained by TD-DFT methodology, with the advantage of being hundreds of times faster. In this context, the geometries optimized by the Sparkle models are used to calculate the singlet and triplet excited states by using the configuration interaction simple (CIS) of INDO/S, which has an accuracy of about 1000 cm-1 [27, 28]. This method is implemented in ZINDO [30] and ORCA [31] programs. In this procedure, a point of charge +3e represents the lanthanide ion [32]. Intensity Parameters Calculation The intensity parameters, Ωλ (λ = 2, 4, and 6), are calculated by Judd-Ofelt theory [33, 34]. According to this theory, the central ion is affected by the nearest neighbor atoms, through a static electric field also referred as crystal or ligand field. Judd and Ofelt described, in independent works, the importance of the electric dipole mechanism for the 4f 4f transitions from the mixing of a ground state 4fN configuration with excited state configurations of opposite parity through the odd terms of the ligand field Hamiltonian. All 4f orbitals have the same parity, that is , where l = 3 for lanthanide ions. Then, the mixing are between the 4f orbitals plus higher-n orbitals, such as the 5d orbital, which presents l = 2, and has an opposite parity to that of the f orbital. 
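The need for this opposite-parity admixture follows from a one-line parity argument (standard textbook reasoning, added here only for clarity): the parity of a one-electron orbital with angular momentum $l$ is $(-1)^l$, and the electric dipole operator $e\mathbf{r}$ is itself odd, so a dipole matrix element between two pure 4f states integrates an odd function and vanishes,

$$(-1)^{3}\,(-1)^{1}\,(-1)^{3} = -1 \quad\Longrightarrow\quad \langle 4f\,|\,e\mathbf{r}\,|\,4f\rangle = 0 .$$

Intensity therefore has to be borrowed from admixed opposite-parity configurations such as $4f^{N-1}5d$ through the odd-rank terms of the ligand-field Hamiltonian.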
The intensity parameters describe the interaction between the lanthanide and ligand atoms, and are calculated by Eq. 1.                            Eq. (1) One aspect which is very important for the possible application of this theory is to know the values that each of the rank variables λ, t, and p may assume in relation to each other. As can been seen in Eq. (1), for example, when λ is equal to 2, t will be equal to 1 and 3, whereas the values of p will be equal to 0, 1, ..., t. The Bλtp parameters are calculated by:                              Eq. (2) The first term, , refers only to the forced electric dipole (ED) contribution, and are given by Eq. (3).                             Eq. (3) The term ΔE corresponds to the difference of energy between the ground state barycenters and the first excited state configuration of opposite parity. The radial integrals, , were taken from reference [35], with an extrapolation for the quantity . The values of radial integrals for Eu3+ ion are  = 0.9175 a.u.,  = 2.0200 a.u.,  = 9.0390 a.u., and  = 110.0323 a.u. The terms θ(t,p) are numeric factors associated with each lanthanide ion and are estimated from radial integrals of Hartree-Fock calculations [36]. The values of θ(t, λ) are θ(1,2) =  -0.17; θ(3,2) = 0.345; θ(3,4) = 0.18; θ(5,4) = -0.24; θ(5,6) = -0.24, and θ(7,6) = 0.24, for Eu3+ ion [36]. The second term of Eq. (2), , refers only to the dynamics coupling (DC) contribution and is given by Eq. (4). This contribution is complementary to the one from the Judd-Ofelt static electric field model, and was firstly considered by Mason and coworkers [37]. The dynamics coupling mechanism, which is more important than the electric dipole mechanism for some transitions, is due to the high gradient of the electromagnetic field generated by the ligands when they interact with an incident external field. The DC mechanism depends on the nature of both ligands and on the coordination geometry, and explains the hipersensitivity in 4f – 4f transitions [36].                             Eq. (4) The quantity (1 – σλ) is a shielding field due to 5s and 5p filled orbitals of lanthanide ions, which have a radial extension larger than those of the 4f orbitals [36]. The values of σλ are σ2 = 0.600, σ4 = 0.139 and σ6 = 0.100.  is a tensor operator of rank  ( = 2, 4, and 6) with values  = -1.366,  = 1.128, and  =- 1.270 for lanthanide ions.  is the Kronecker delta function. As such,  is equal to 0 when t is different from the λ + 1. The parameters  (t = 1, 3, 5, and 7), given by Eq. (5), are the so-called odd-rank ligand field parameters and contains a sum over the surrounding atoms.                          Eq. (5)  are the conjugated spherical harmonics. As can be observed in Eq. (5), the spherical harmonics depend on the spherical coordinates  of the j ligand atoms. The term  present in Eq. (5), according to the Simple Overlap Model (SOM) [38, 39] developed by prof. Oscar Malta (UFPE, Brazil), formalizes that crystal field Hamiltonian and is adequately calculated as a function of the charge density  between the lanthanide ion and the j ligand atoms. The SOM model assumes two postulates [38]: i) the 4f energy potential is generated by charges, uniformly distributed in a small region located around the mid-points of the lanthanide–ligand chemical bonds.; and ii) the total charge in each region is equals to , where the  parameter is proportional to the magnitude of the total overlap between the lanthanide ion and the ligand atoms. 
Figure 1 shows a sketch of the effective charges for a hypothetical complex (LnL3). The vector  represents the position of the j ligand atoms, and the vector  represents the position of the ith electron of the central metal ion. Figure 1. Graphical representation of the Simple Overlap Model. In other words, the term  introduces a correction to the crystal field parameters of the point charge electrostatic model (PCEM), , such that .This way, this correction confers a degree of covalency to the point charge model from the inclusion of the parameter , since PCEM treats the metal-ligand atom bonds as a purely electrostatic phenomenon. The effective charges are assumed to be at positions defined at the distances given by . The factor , given by Eq. (6), indicates that the effective charges may not be exactly at . The plus sign in Eq. 6 is used when the barycenter of the overlap region is displaced towards the ligand, which happens in the case of oxygen and fluorine coordinating atoms. The minus sign is used when this barycenter is displaced towards the central ion, as is the case of nitrogen and chlorine coordinating atoms.                             Eq. (6) The overlap between 4f orbitals and the valence orbitals of the j ligands, , is calculated by Eq. (7).                                   Eq. (7) where  is a constant equal to 0.05 and n is equal to 3.5 for the lanthanides. R0 is the smallest among all lanthanide–ligand atom distances. The parameters  (t = 1, 3, 5, and 7), like the parameter , also depends on the coordination geometry and on the chemical environment around the lanthanide ion, and is given by Eq. (8).                                    Eq. (8) The limitations in the intensity parameters calculation consist in determining the quantities,  and . As a result, it is necessary to use the experimental intensity parameters. The charge factors and polarizabilities, used in  and  calculations, respectively, are adjusted to reproduce the experimental intensity parameters. During the adjustment procedure, the intensity parameters calculated () from the optimized geometry, obtained from Sparkle model, are compared with the experimental intensity parameters (). The response function (Fresp) is defined by Eq. (9).                             Eq. (9) Emission Radiative Rate Calculation The emission radiative rate (Arad), taking into account the magnetic dipole and forced electric dipole mechanisms, is given by Eq. (10):                                Eq. (10) where  is the difference of energy between the 5D0 and 7FJ states (in cm-1), h is the Planck constant, 2J + 1 is the degeneracy of the initial state, and n is the refractive index of the medium, usually assumed to be equal to 1.5. Sed (Eq. (11)) and Smd in Eq. (12)) are the magnetic dipole and forced electric dipole mechanisms, respectively.                                  Eq. (11) The squared matrix elements , , and  are equal to 0.0032, 0.0023, and 0.0002, respectively, for Eu3+ [40].                                    Eq. (12) where m is the electron mass. The matrix elements that appear in Eq. (12) above are determined according to the intermediate coupling mechanism. The 5D0 7F1 transition is the only one that does not have contributions from the electric dipole mechanism and are quantified theoretically as Smd = 9,6 × 10-42 esu2 cm2 [41]. The 5D0 7FJ transitions (J = 0, 3, and 5) are forbidden by magnetic dipole and forced electric dipole mechanisms, that is, their contributions are equal to 0. 
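Going back to the adjustment step described at the end of the intensity-parameter subsection above: Eq. (9) itself is not reproduced here, so the sketch below simply assumes a sum-of-squared-relative-errors response function and a placeholder omega_calc routine standing in for the Judd-Ofelt/SOM expressions. It only illustrates the general shape of the fitting loop (charge factors and polarizabilities tuned until the calculated intensity parameters reproduce the experimental ones), not LUMPAC's actual implementation, and all numbers are made up.

```python
import numpy as np
from scipy.optimize import minimize

omega_exp = np.array([12.0, 5.0, 1.0])   # made-up "experimental" Omega_2, Omega_4, Omega_6

def omega_calc(charge_factors, polarizabilities):
    # Placeholder: in the real calculation this evaluates the forced electric
    # dipole and dynamic coupling contributions from the optimized geometry.
    g = np.asarray(charge_factors)
    alpha = np.asarray(polarizabilities)
    return np.array([2.0 * g.sum(), 1.5 * alpha.sum(), g.mean() + alpha.mean()])

def response(params, n_ligands=4):
    g, alpha = params[:n_ligands], params[n_ligands:]
    calc = omega_calc(g, alpha)
    # Assumed response function: squared relative deviation from experiment.
    return float(np.sum(((calc - omega_exp) / omega_exp) ** 2))

x0 = np.concatenate([np.full(4, 0.5), np.full(4, 1.0)])   # initial charge factors and polarizabilities
fit = minimize(response, x0, method="Nelder-Mead")
print("response at the optimum:", fit.fun)
print("fitted charge factors:", fit.x[:4])
print("fitted polarizabilities:", fit.x[4:])
```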
The contributions of each transition to the emission radiative rate are calculated by Eq. (13), and are named branching ratios (β0,J).                           Eq. (13) Energy Transfer Rates Calculation The theoretical model used to calculate the energy transfer rate between the organic ligands and the lanthanide ion was developed by Malta and coworkers [42, 43].  According to this model, the energy transfer rates, WET, are given by the sum of two terms:                          Eq. (14) The term , given by Eq. (15), corresponds to the energy transfer rate obtained from the multipolar mechanism.                         Eq. (15) The quantities  are the electric dipole contributions to the intensity parameters, taking into account only the contributions of the parameters .  are reduced matrix elements of the tensor operators U(l). The parameters γλ are calculated by Eq. (16).                          Eq. (16) In Eq. (14), J is the total angular momentum quantum number of the lanthanide ion. G is the degeneracy of the initial state of the ligand, and a specifies the spectroscopic term of the 4f orbitals. SL is the dipole strength associated with the ϕ ϕ´ transitions in the ligands. The quantity F, calculated by Eq. (17), corresponds to the temperature-dependent factor and contains a sum of Frank Condon factors.                                     Eq. (17) The factor  (Eq. 17) is the ligand state bandwidth-at-half-maximum (in cm-1), and  is the difference of energy between the donor and acceptor states involved in the energy transfer process. For lanthanide complexes, the energy donor states correspond to the singlet and triplet excited states, whereas the acceptor states correspond to the lanthanide ion excited states. Typical values of F are in the range of 1012 - 1013 erg-1. The second term of Eq. (13),  refers to the energy transfer rates obtained from the exchange mechanism and are calculated by Eq. (18).                   Eq. (18) In Eq. (18), sm (m = -1, 0, 1) is the spherical component of the spin operator of electron j in the j ligand. The μz is the component z of its dipole operator, and S is the total spin operator of the lanthanide ion. Typical values of the squared matrix element of the coupled dipole and spin operators lie in the range 10-34 - 10-36 esu2 cm2 [44]. Malta proposed some corrections for the energy transfer rates equations in 2008 [44]. The first one corresponds to the addition of a shielding factor,  to the first term of Eq. (15). This contribution had been initially neglected for dipole-dipole mechanism. The second one corresponds to the replacement of the quantity  in Eq. (18) by the  overlap integral. This last correction causes a change of three orders in the magnitude of the energy transfer rate calculated by the exchange mechanism. Nevertheless, as these rates are still much higher than the radiative and non-radiative rates, the general conclusions for the theoretical quantum yield obtained from previous work remain valid [44]. The energy transfer rates depend on the distance difference between donor and acceptor states involved in the process of energy transfer. This distance is known as RL, and for its determination, it is necessary to estimate the molecular orbital coefficients of the i atom (ci) that contributes to the ligand states (triplet or singlet). It is important to know the distance from i atom (Ri) to the lanthanide ion. The quantities ci and Ri are calculated by data obtained from excited states calculations using semiempirical INDO/S method. 
This way, the RL is given by Eq. (19).                            Eq. (19) The energy back-transfer rates (WBT) are obtained by multiplying the transfer rate (WET) by the Boltzmann factor, , considering the room temperature. ∆ refers to the energy difference between the donor and acceptor levels, and kB is Boltzmann constant. The most important transfer channels for systems containing europium ion, according to Malta [45], are shown in Figure 2. Descrição: C:\Users\Diogo\Desktop\teste.tif Figure 2. Transfer channels involved in the energy transfer rate processes of systems containing europium ion. The total angular momentum selection rules, J, of the lanthanide 4f states, are complementary. The europium excited states that are more likely to accept energy from ligands through the direct Coulomb interaction mechanism are 5D2, 5L6, 5G6, and 5D4. The energy transfer from ligand excited states to the 5D1 level is allowed by the exchange interaction mechanism (Eq. 18). Although the energy transfer to the 5D0 level, in principle, is forbidden by direct interaction or exchange mechanism, the selection rule can be relaxed by a mix of the total angular moments (J’s) [43, 45]. Emission Quantum Yield Calculation The emission quantum yield, given by Eq. 20, is defined as the ratio between the emitted and absorbed light intensities.                            Eq. (20) where is the 5D0 level population. and  correspond to the S0 singlet level population and absorption rate, respectively. The normalized population levels, ηj, are obtained from the appropriate rate equations given by Eq. (21).                     Eq. (21) From Eq. 21, Wij or Wji represent the transfer rates between i and j states, or j and i states. The 5D0 level population depends on the non-radiative emission rate Anrad, which still cannot be theoretically calculated.  However, Anrad can be quantified via Eq. 22 from the Arad and the experimental lifetime (τ). Because of this, the theoretical emission quantum yield depends on the experimental lifetime.                      Eq. (22) The normalized populations of the states involved in the process of energy transfer are obtained from diagonalizing the matrix that contains the rate equations showed in Fig. 3. As can been in Fig. 3, the matrix is assembled from energy transfer and back-transfer rates, Arad and Anrad. The matrix diagonal contains the transfer channels responsible for the energy depopulation of the states in the matrix columns. The channels in red (Fig. 2) are not normally included in the energy transfer rates diagrams (Fig. 2) due to the non-resonance condition presented between some ligand and the europium excited states. The emission quantum yield is then calculated from the population given by Eq. (20). -(WS-5D4 + ϕ1 + ϕ2 + WS-5D1 + WS-5D0) -(WT-5D1 + WT-5D0 + ϕ3 + WT-5D4) -(W5D4-S + k1 + W5D4-T) -(W5D1-T + k2 + W5D1-S) -(W5D0-S+ W5D0-T + Arad + Anrad) Figure 3. Matrix for obtaining the normalized energy level population, enabling the theoretical calculation of the emission quantum yield. Bibliographic References 1.    Lewars, E.G., Computational Chemistry: Introduction to the Theory and Applications of Molecular and Quantum Mechanics2010: Springer. 2.    Freire, R.O., G.B. Rocha, and A.M. Simas, Lanthanide complex coordination polyhedron geometry prediction accuracies of ab initio effective core potential calculations. Journal of Molecular Modeling, 2006. 12(4): p. 373-389. 3.    Rodrigues, D.A., N.B. da Costa, and R.O. 
Freire, Would the Pseudocoordination Centre Method Be Appropriate To Describe the Geometries of Lanthanide Complexes? Journal of Chemical Information and Modeling, 2011. 51(1): p. 45-51. 4.    de Andrade, A.V.M., et al., Sparkle Model for the Quantum-Chemical Am1 Calculation of Europium Complexes. Chemical Physics Letters, 1994. 227(3): p. 349-353. 5.    Rocha, G.B., et al., Sparkle Model for AM1 Calculation of Lanthanide Complexes: Improved Parameters for Europium. Inorganic Chemistry, 2004. 43(7): p. 2346-2354. 6.    Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle model for the calculation of lanthanide complexes: AM1 parameters for Eu(III), Gd(III), and Tb(III). Inorganic Chemistry, 2005. 44(9): p. 3299-3310. 7.    da Costa, N.B., et al., Sparkle/AM1 modeling of holmium (III) complexes. Polyhedron, 2005. 24(18): p. 3046-3051. 8.    Freire, R.O., G.B. Rocha, and A.M. Simas, Modeling lanthanide complexes: Sparkle/AM1 parameters for ytterbium (III). Journal of Computational Chemistry, 2005. 26(14): p. 1524-1528. 9.    Freire, R.O., et al., Modeling lanthanide coordination compounds: Sparkle/AM1 parameters for praseodymium (III). Journal of Organometallic Chemistry, 2005. 690(18): p. 4099-4102. 10.  da Costa, N.B., et al., Sparkle model for the AM1 calculation of dysprosium (III) complexes. Inorganic Chemistry Communications, 2005. 8(9): p. 831-835. 11.   Freire, R.O., G.B. Rocha, and A.M. Simas, Modeling rare earth complexes: Sparkle/AM1 parameters for thulium (III). Chemical Physics Letters, 2005. 411(1-3): p. 61-65. 12.  Freire, R.O., et al., AM1 sparkle modeling of Er(III) and Ce(III) coordination compounds. Journal of Organometallic Chemistry, 2006. 691(11): p. 2584-2588. 13.  Freire, R.O., et al., Sparkle/AM1 structure modeling of lanthanum (III) and lutetium (III) complexes. Journal of Physical Chemistry A, 2006. 110(17): p. 5897-5900. 14.  Freire, R.O., et al., Sparkle/AM1 parameters for the modeling of samarium(III) and promethium(III) complexes. Journal of Chemical Theory and Computation, 2006. 2(1): p. 64-74. 15.  Freire, R.O., G.B. Rocha, and A.M. Simas, Modeling rare earth complexes: Sparkle/PM3 parameters for thulium(III). Chemical Physics Letters, 2006. 425(1-3): p. 138-141. 16.  Freire, R.O., et al., Sparkle/PM3 parameters for the modeling of neodymium(III), promethium(III), and samarium(III) complexes. Journal of Chemical Theory and Computation, 2007. 3(4): p. 1588-1596. 17.  Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle/PM3 parameters for praseodymium(III) and ytterbium(III). Chemical Physics Letters, 2007. 441(4-6): p. 354-357. 18.  da Costa, N.B., et al., Structure modeling of trivalent lanthanum and lutetium complexes: Sparkle/PM3. Journal of Physical Chemistry A, 2007. 111(23): p. 5015-5018. 19.  Simas, A.M., R.O. Freire, and G.B. Rocha, Cerium (III) complexes modeling with Sparkle/PM3. Computational Science - Iccs 2007, Pt 2, Proceedings, 2007. 4488: p. 312-318. 20. Simas, A.M., R.O. Freire, and G.B. Rocha, Lanthanide coordination compounds modeling: Sparkle/PM3 parameters for dysprosium (III), holmium (III) and erbium (III). Journal of Organometallic Chemistry, 2008. 693(10): p. 1952-1956. 21.  Freire, R.O., G.B. Rocha, and A.M. Simas, Sparkle/PM3 for the Modeling of Europium(III), Gadolinium(III), and Terbium(III) Complexes. Journal of the Brazilian Chemical Society, 2009. 20(9): p. 1638-1645. 22.  Freire, R.O. and A.M. Simas, Sparkle/PM6 Parameters for all Lanthanide Trications from La(III) to Lu(III). Journal of Chemical Theory and Computation, 2010. 
6(7): p. 2019-2023. 23.  Dutra, J.D.L., et al., Sparkle/PM7 Lanthanide Parameters for the Modeling of Complexes and Materials. Journal of Chemical Theory and Computation, 2013. 9(8): p. 3333-3341. 24.  Filho, M.A.M., et al., Sparkle/RM1 parameters for the semiempirical quantum chemical calculation of lanthanide complexes. RSC Advances, 2013. 3(37): p. 16747-16755. 25.  Stewart, J.J.P., MOPAC2009, 2009, Colorado Springs: USA. p. Stewart Computational Chemistry. 26.  Stratmann, R.E., G.E. Scuseria, and M.J. Frisch, An efficient implementation of time-dependent density-functional theory for the calculation of excitation energies of large molecules. Journal of Chemical Physics, 1998. 109(19): p. 8218-8224. 27.  Ridley, J.E. and M.C. Zerner, Triplet-States Via Intermediate Neglect of Differential Overlap - Benzene, Pyridine and Diazines. Theoretica Chimica Acta, 1976. 42(3): p. 223-236. 28. Zerner, M.C., et al., Intermediate Neglect of Differential-Overlap Technique for Spectroscopy of Transition-Metal Complexes - Ferrocene. Journal of the American Chemical Society, 1980. 102(2): p. 589-599. 29.  Santos, J.G., et al., Theoretical Spectroscopic Study of Europium Tris(bipyridine) Cryptates. Journal of Physical Chemistry A, 2012. 116(17): p. 4318-4322. 30. Zerner, M.C., ZINDO manual QTP, 1990, University of Florida: Gainesville. 31.  Neese, F., The ORCA program system. Wiley Interdisciplinary Reviews-Computational Molecular Science, 2012. 2(1): p. 73-78. 32.  de Andrade, A.V.M., et al., Theoretical model for the prediction of electronic spectra of lanthanide complexes. Journal of the Chemical Society-Faraday Transactions, 1996. 92(11): p. 1835-1839. 33.  Judd, B.R., Optical Absorption Intensities of Rare-Earth Ions. Physical Review, 1962. 127(3): p. 750-&. 34.  Ofelt, G.S., Intensities of Crystal Spectra of Rare-Earth Ions. Journal of Chemical Physics, 1962. 37(3): p. 511-&. 35.  Freeman, A.J. and J.P. Desclaux, Dirac-Fock Studies of Some Electronic Properties of Rare-Earth Ions. Journal of Magnetism and Magnetic Materials, 1979. 12(1): p. 11-21. 36.  Malta, O.L., et al., Theoretical Intensities of 4f-4f Transitions between Stark Levels of the Eu3+ Ion in Crystals. Journal of Physics and Chemistry of Solids, 1991. 52(4): p. 587-593. 37.  Mason, S.F., R.D. Peacock, and B. Stewart, Dynamic coupling contributions to the intensity of hypersensitive lanthanide transitions. Chemical Physics Letters, 1974. 29(2): p. 149-153. 38. Malta, O.L., A Simple Overlap Model in Lanthanide Crystal-Field Theory. Chemical Physics Letters, 1982. 87(1): p. 27-29. 39.  Malta, O.L., Theoretical Crystal-Field Parameters for the Yoc1 - Eu-3+ System - a Simple Overlap Model. Chemical Physics Letters, 1982. 88(3): p. 353-356. 40. Carnall, W.T., H. Crosswhite, and H.M. Crosswhite, Energy level structure and transition probabilities of the trivalent lanthanides in LaF3, 1977: Argonne National Laboratory. 41.  Peacock, R., The intensities of lanthanide f f transitions, in Rare Earths1975, Springer Berlin Heidelberg. p. 83-122. 42.  Malta, O.L., Ligand-rare-earth ion energy transfer in coordination compounds. A theoretical approach. Journal of Luminescence, 1997. 71(3): p. 229-236. 43.  Silva, F.R.G.E. and O.L. Malta, Calculation of the ligand-lanthanide ion energy transfer rate in coordination compounds: Contributions of exchange interactions. Journal of Alloys and Compounds, 1997. 250(1-2): p. 427-430. 44.  Malta, O.L., Mechanisms of non-radiative energy transfer involving lanthanide ions revisited. 
Journal of Non-Crystalline Solids, 2008. 354(42-44): p. 4770-4776. 45.  de Sa, G.F., et al., Spectroscopic properties and design of highly luminescent lanthanide coordination complexes. Coordination Chemistry Reviews, 2000. 196: p. 165-195.
wave function

A function ψ(x,y,z) appearing in Schrödinger's equation in wave mechanics. The wave function is a mathematical expression involving the coordinates of a particle in space. If the Schrödinger equation can be solved for a particle in a given system (e.g. an electron in an atom) then, depending on the boundary conditions, the solution is a set of allowed wave functions (eigenfunctions) of the particle, each corresponding to an allowed energy level (eigenvalue). The physical significance of the wave function is that the square of its absolute value, |ψ|², at a point is proportional to the probability of finding the particle in a small element of volume, dx dy dz, at that point. For an electron in an atom, this gives rise to the idea of atomic and molecular orbitals.
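As a concrete illustration of the |ψ|² interpretation (a standard particle-in-a-box example, not part of the entry above): for a particle in a one-dimensional box of length L the normalized eigenfunctions are ψ_n(x) = √(2/L) sin(nπx/L), and the probability of finding the particle in any sub-interval is the integral of |ψ|² over it.

```python
import numpy as np

L, n = 1.0, 2                       # box length and quantum number
x = np.linspace(0.0, L, 20001)
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
density = np.abs(psi) ** 2          # |psi|^2, the probability density

def integrate(y, grid):
    # Simple trapezoidal rule, kept explicit for clarity.
    return float(np.sum((y[:-1] + y[1:]) * np.diff(grid) / 2.0))

print(integrate(density, x))                        # ~1.0: the state is normalized
quarter = x <= L / 4
print(integrate(density[quarter], x[quarter]))      # ~0.25 for n = 2
```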
In a paper, I ran into the following definition of the zero point fluctuation of our favorite toy, the harmonic oscillator: $$x_{ZPF} = \sqrt{\frac{\hbar}{2m\Omega}} $$ where m is its mass and $\Omega$ its natural frequency. However, when I try to derive it with simple arguments, I think of the equality: $$E = \frac12 \hbar\Omega=\frac12 m \Omega^2 x_{ZPF}^2$$ (using the energy eigenvalue of the $n=0$ state) giving me: $$x_{ZPF} = \sqrt{\frac{\hbar}{m\Omega}} $$ differing from the previous one by a factor $\sqrt2$. I am just puzzled: is it a matter of conventions or is there a fundamental misconception in my (too?) naive derivation?

If you do simple dimensional estimates you should not expect the numerical factors to come out right! – Fabian Dec 15 '12 at 17:09

Yeah, but I tried to do more than simple dimensional analysis. I wrote down an equality between the energy of the vacuum and the oscillating energy of a harmonic oscillator with $\langle x^2 \rangle = x_{ZPF}^2$ when I should have taken $\langle x^2 \rangle = \sqrt2 x_{ZPF}^2$, and so my question is again: is it coming from a convention or is it physically motivated? – Learning is a mess Dec 15 '12 at 21:59

Writing down an equality between the energy of the vacuum and the oscillating energy of a harmonic oscillator is nothing more than dimensional analysis. I hope that you would agree with me that in principle you are supposed to solve the Schrödinger equation to find the ground state wave function! If you do that you will find the expression for $\langle x^2 \rangle$ with all the numerical prefactors. – Fabian Dec 15 '12 at 22:29

I agree with Fabian: you can get any expectation value you want from the wavefunction (Hermite polynomial) for your chosen state. Also, when you used $\frac{1}{2}m\Omega^2x_{zpf}^2$ for the energy, didn't you ignore the kinetic term in the Hamiltonian? – twistor59 Dec 16 '12 at 21:36

Yes, but equipartition says that on average the energy is distributed between the potential and the kinetic one. So I should add a factor 1/2 in front of it, when I should, for an unknown reason, add a factor 2 to get the correct formula. – Learning is a mess Dec 16 '12 at 23:05

I think this is a combination of both a convention and a physical problem. You are equating the energy eigenvalue (i.e., the total energy) to an expression that contains only $x_{ZPF}$, and does not contain $p$ at all. In other words, you are equating the total energy to a potential energy. This would be analogous to equating $E_\mathrm{total} = \frac{1}{2}kA^2$ to find the amplitude $A$ of a classical harmonic oscillator. The result is that you are using $x_{ZPF}$ to mean the "amplitude" of the zero-point fluctuation. The true result, as Ondrej Cernotik's answer derives, uses the rms value $x_{ZPF} = \sqrt{\langle\hat x^2\rangle}$. So that's the sense in which it is a convention.

The sense in which it is a real physical problem is that the "amplitude" of a quantum oscillator isn't really a well-defined, measurable thing. The quantum oscillator has a non-zero probability amplitude going all the way out to infinity. The rms value is well-defined and easy to measure. So that's the preferred definition.

You could also argue that when defining zero point fluctuations in terms of the variance of the position, the value will be smaller than the amplitude. I bet you would find that the variance (or its square root) will be exactly $\sqrt{2}$ times smaller.
– Ondřej Černotík May 24 '13 at 16:20

@OndřejČernotík, that's my guess too. I know that's the case for the classical oscillator, and I suspect someone could prove it for the quantum oscillator. I wasn't confident enough in that claim to include it in my answer, but I might add it after I think about it for a while. – Colin McFaul May 24 '13 at 16:34

You can find the value of zero point fluctuations just by calculating the variance $\langle(\Delta\hat{x})^2\rangle = \langle\hat{x}^2\rangle$ in the vacuum state. You can do this either using the $x$-representation or expressing the $\hat{x}$ operator using creation and annihilation operators. These are usually introduced by $$ \hat{a} = \sqrt{\frac{m\Omega}{2\hbar}}\left(\hat{x}+i\frac{\hat{p}}{m\Omega}\right), $$ so that you get $$ \hat{x} = \sqrt{\frac{\hbar}{2m\Omega}}(\hat{a}+\hat{a}^\dagger). $$ Using this to calculate $\langle\hat{x}^2\rangle = \langle 0|\hat{x}^2|0\rangle$ indeed gives you $$x_{ZPF} = \sqrt{\langle\hat{x}^2\rangle} = \sqrt{\frac{\hbar}{2m\Omega}}.$$
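A quick numerical cross-check of the answer above (a sketch in a truncated Fock basis; the units ħ = m = Ω = 1 are a choice made here, not something from the thread): build the annihilation operator as a matrix, form x̂ = √(ħ/2mΩ)(â + â†), and evaluate ⟨0|x̂²|0⟩, which should equal ħ/(2mΩ) = 0.5.

```python
import numpy as np

hbar = m = Omega = 1.0    # convenient units for the check
N = 40                    # Fock-space truncation (ample for the ground state)

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = np.sqrt(hbar / (2 * m * Omega)) * (a + a.T)

ground = np.zeros(N)
ground[0] = 1.0           # the vacuum state |0>

x2 = ground @ (x @ x) @ ground
print(x2, hbar / (2 * m * Omega))   # both print 0.5
```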
Thursday, March 7, 2013 Zero Hour: Zero Credibility A Bad Science on TV post by David Zaslavsky (Check out his awesome physics blog here) There's a new TV show on ABC, Zero Hour, whose previews really piqued my interest earlier this year. Highly skilled assassin, greatest conspiracy in the history of the world, something about clocks. Seems like good clean utterly ridiculous fun. I like horrible disaster movies, so I figured this should fit right in. But only three episodes in, Zero Hour has already managed to butcher the science so badly I'm not sure I can stand it anymore. Let me set the stage: the main character, Hank, is searching for his wife, who has been kidnapped, and to find her he has to locate a series of clocks, each of which has clues leading to the next location. Clock number 2's clue was a map of the constellation Cepheus. Combined with the time and date on the clock (the hands were frozen in place), this supposedly led Hank to the exact location you'd have to be to view the constellation at that time: Chennai, India. But maybe you can see the problem here: a constellation is not visible from only a single location! By their criteria, Hank could be looking for any place in that entire hemisphere of the Earth. Sure, maybe the clue was supposed to identify where Cepheus would be seen directly overhead, but that's not anywhere close to India. Cepheus is a northern hemisphere constellation, very close to the north celestial pole, so the only places it appears directly overhead are in the Arctic. On the other hand, here's the view from Chennai at 8:15 AM on March 8, 1938, the date and time named in the show: Cepheus is almost right on the horizon. The constellation that was directly overhead at the time was Aquila, which they could just as easily have named. To be fair, I guess that wouldn't make for very good TV because it's basically a nondescript rhombus, but then again, Cepheus is just a square with a hat, so you can't be too picky. But that snafu with the constellation, which I could have lived with, pales in comparison to the next (and most recent) episode. The clue from the third clock leads Hank, after some floundering on the acronym IAS (which anyone as smart as he is should instantly recognize), to the Institute of Advanced Study in Princeton, NJ. Let's just bypass the fact that they managed to get almost every location in Princeton wildly wrong — I mean, if you've never been there, you probably wouldn't care. (For the record, the Princeton Public Library looks like an entirely normal public library, not a stuffy university library as it's shown in the show.) What really irks me is the equation they found on Einstein's blackboard at the end. In the show, Einstein erased something from the blackboard he was working on just before he died, which was rumored to be the formula for a new power source that he considered too dangerous for humankind to control. • Real power sources come from engineers tinkering in workshops, not from equations on a blackboard. In the show, the erased formula turns out to be a key to a coded message Einstein left on the rest of his blackboard. • Real physics formulas are neither keys nor coded messages. In reality, you might recognize this formula: Yep, that's the time-independent Schrödinger equation, an entirely mundane equation that forms the basis for nonrelativistic quantum mechanics. 
It would have had relatively little to do with what Einstein was working on at the time, and certainly there's no way it could have been turned into the key for a coded message which would also make sense as a physics formula. Anyway, it's kind of a moot point by now. The latest news from the "TV gods" is that Zero Hour has been canceled after just these three episodes. Honestly, I'm not surprised. I only wish the lesson to take away from this would be that you can't get away with terrible science on television, and not just that sucky TV is sucky.
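For readers who want to reproduce the Chennai check from the post above, here is a hedged sketch using astropy (the post's author presumably used planetarium software instead; the Alderamin coordinates are approximate J2000 values, and the 8:15 AM on the clock is assumed to be Indian Standard Time, UTC+5:30):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time

chennai = EarthLocation(lat=13.08 * u.deg, lon=80.27 * u.deg, height=6 * u.m)
when = Time("1938-03-08 02:45:00")   # 8:15 AM IST expressed in UTC

# Alderamin (alpha Cephei), approximate J2000 coordinates.
alderamin = SkyCoord(ra="21h18m35s", dec="+62d35m08s", frame="icrs")

altaz = alderamin.transform_to(AltAz(obstime=when, location=chennai))
print("Altitude of Alderamin over Chennai:", altaz.alt.to(u.deg))
```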
Cannot create two identical organisms and cannot defeat cancer ” One of the most striking aspects of physics is the simplicity of its laws. The Maxwell equations, the Schrödinger equation, and the Hamiltonian mechanics can be expressed in several lines. The ideas that form the basis of our worldview are also very simple: the world obeys the laws, and all the basic laws are observed everywhere. Everything is simple, accurate and expressive from the point of view of everyday mathematics, or partial differential equations, or ordinary differential equations. Everything is simple and neat – except, of course, the world. Everywhere you look, of course, outside the walls of the class of physics – a person sees a world of surprising complexity ” [1] . Kadanov and Goldenfeld [1] give some recommendations on how to explore a complex world. These recommendations are as simple as physical laws: ” To extract physical knowledge from a complex system, you need to focus on the correct level of description … Use the right level of description to catch phenomena of interest. Do not model bulldozers with quarks … you need to realize that complexity requires attitudes that are completely different from those that have so far been common in physics. So far, physicists have been looking for fundamental laws that are valid for all times and in all places. But each complex system is different from the other. Apparently, there are no general laws for complexity. Instead, it is necessary to extract “lessons” that, with insight and understanding, can be studied in one system and applied to another. ” Such an outstanding physicist as Niels Bohr formulated the unknowability of life, for ” we would undoubtedly kill the animal if we tried to bring the investigation of its organs to the point that it was possible to say what role individual atoms play in its vital functions … The minimal freedom that we are forced to provide to the body is just enough to allow him, so to speak, to hide his last secrets from us ” [2] . This is the principle of uncertainty in biology, which is similar to the principle of uncertainty in physics. If we do not even kill the living, then by interfering with the instrument for the investigation inside the living system, we distort its properties so that we investigate not its product but the product of its interaction with the device at the site of this interaction. Since it is in principle impossible to exclude the interaction of an electron with an instrument by means of which we investigate the properties of an electron, we can not determine its velocity and coordinate simultaneously. (The interaction of the device with an object that distorts the properties of this object is called the observer effect .) But the principle of uncertainty is one of the fundamental laws of physics. Perhaps not being able to formulate such positive fundamental laws as Schrödinger’s equation or Newton’s laws, we can still formulate prohibitive laws for biology. The remarkable Soviet astrophysicist Shklovsky expressed this view: ” Science is the sum of prohibitions. You can not create a perpetual motion machine. You can not transmit a signal at a speed greater than the speed of light in a vacuum, you can not simultaneously measure the coordinate and velocity of an electron ” [3] . This very elegant definition provides a possible way of defining some basic fundamental laws not only for physics. The laws of prohibitive. Laws “You can not.” And then you can ask: are there any prohibitions on biology? 
Awareness of such prohibitions would not allow carrying out studies that fall within the scope of the ban. On fundamentally unresolved problems, one should not waste time and money. Just like creating a perpetual motion machine. I tried to answer this question in my review “Fundamental prohibitions of biology”, published in the journal “Biochemistry” in 2009 [4] . Part of the provisions expressed here I published earlier [5] , [6] . The full version of this review was published in Biochemistry in 2018 [7] . Categories of unsolvable problems I. Unsolvable problems due to stochastic mutations in DNA replication 1. You can not create two identical individuals. Including two identical complex cells [4] . 2. You can not defeat cancer. I would also like very much to formulate such a ban: you can not defeat old age and natural death, but I can not stop on this issue due to the limited volume of the article and the complexity of the problem, and I refer readers to recent reviews [8-10] , leaving the problem for their trial. II. Unsolvable problems due to interactions in complex systems, leading to unpredictable “emergent” properties 1. It is impossible on the basis of properties of a sign to establish its reasons (the inverse problem ). 2. It is impossible on the basis of known reasons, if they interact with each other, to establish unambiguously the properties of the attribute, due to the emerging properties ( direct problem ). 3. It is impossible to predict with certainty the reaction of a complex system to an external effect. III. Unsolvable problems due to the existence of the uncertainty principle and observer effect in biology 1. It is impossible to obtain adequate information about the cells in their tissue microenvironment by isolating and analyzing a single cell – transcriptome, proteome, etc. In particular, it is impossible to draw conclusions on the properties of stem cells in their niches on the basis of stem cell cultures. 2. It must be remembered that the probe introduced into the system for observation changes its properties, at least at the position of the probe ( observer effect ). I mentioned this problem in the introduction. Her, apparently, was first formulated by Niels Bohr. I also can not discuss it and also refer the reader to the reviews [11] , [12] . This system of prohibitions, in particular the prohibition of the identity of organisms due to unavoidable stochastic mutations, leading to extreme intraorganism and interorganizational heterogeneity, calls for caution, more precisely puts limits to the hopes for personalized medicine [13] , [14] . On the problems that I have already discussed in recent reviews, I will dwell very briefly, mainly using the most recent data and referring readers to them and to the literature cited therein. The main focus of this article will focus on problem II “Cancer can not be defeated”. It fully illustrates all the complexities that the biological sciences have to deal with, and all the inadequacy of the research apparatus they are now using. I will consider other problems concisely. 1. You can not create two identical individuals, including two identical complex cells These prohibitions are associated with the constantly occurring DNA replication of various kinds of mutations – in different tissues of different speeds, but on average about three mutations per cell division. The adult human body consists of approximately 10 14 cells. 
Ignoring that different tissues can achieve complete differentiation at different times and that cells can die, an estimate of the number of cell divisions N resulting in the formation of a final differentiated cell yields a value of N ≈ 46. If we take, based on the available data, a mutation rate of 10⁻⁹ per nucleotide per division and a genome length of 3×10⁹ nucleotides, the final somatic cell will as a result carry about 120 mutations that differentiate it from the original one. The neighboring cell will receive the same number, but they will be located in other places (stochastic!). So, every two cells of an adult organism will differ from each other by more than 200 mutational substitutions. The probability that there exist two cells in which the positions of all of these mutations coincide is vanishingly small. Thus, the individual is a mosaic of different cells.

"Biomolecule" has recently published a review on genetic mosaicism: "Genomic puzzle: open a mosaic" [15]. – Ed.

With the advent of the era of full genomic sequencing, this theoretical conclusion has received practical confirmation [16-18]. Add to this the stochastics of epigenetic changes [19], and we come to the conclusion that there are no two genetically and epigenetically identical individuals. Each person is unique in the structure of the genome and the epigenome. Even identical (homozygous) twins are not identical [20-22].

2. You can not defeat cancer

While genes continue to mutate spontaneously, cancer will never be eradicated completely. It will constantly arise.

Cancer treatment is problematic. There are two sides to this problem: the inevitability of cancer in the population, and the problems with its prognostic diagnosis and treatment.

A bit of history: a lot of money has been invested in the victory over cancer. In December 2016, the US Congress passed the 21st Century Cures Act, allocating $1.8 billion over 7 years to fund a cancer moonshot (Cancer Moonshot). Vice President of the United States Joe Biden expressed his hope that by 2030 we will live in a world where cancer as such will not exist. When John F. Kennedy promised in his presidential speech of 1961 to put a man on the Moon and safely return him to Earth, he created the "moonshot" metaphor. Today this metaphor is used to characterize the beginning of ambitious projects designed to raise society to a new stage of development. The promise to cure cancer that President Nixon made in 1971, declaring war on cancer and investing $100 million in the project, is an example of such a moonshot. It ended in formal failure. Cancer was not defeated. The second cancer war, albeit a less ambitious one, announced in 2005 by Andrew von Eschenbach, then head of the National Cancer Institute, also ended in failure. Its goal was to defeat cancer by 2015.

The idea that one billion dollars can eliminate cancer misleads society. Each year of new research increasingly reveals how complex the problem of cancer is, and makes the idea of its complete elimination less and less realistic. There has undoubtedly been a breakthrough in cancer immunotherapy, in which new fundamental knowledge about the immune organization of the body led to treatments that give a small number of patients remissions so long-lasting that they may even amount to a cure. Cancer research is in the middle of a revolution, and may be on the verge of even greater success. Nevertheless, in general, we are very far from victory over cancer.
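A quick back-of-the-envelope check of the cell-division estimate at the start of this section, using the review's own numbers (about 10¹⁴ cells in an adult, a mutation rate of 10⁻⁹ per nucleotide per division, a genome of 3×10⁹ nucleotides):

```python
import math

cells_in_adult = 1e14
divisions = math.log2(cells_in_adult)        # ~46.5 rounds of doubling to reach 10^14 cells
mutation_rate = 1e-9                         # per nucleotide per division
genome_length = 3e9                          # nucleotides

per_division = mutation_rate * genome_length # ~3 new mutations per division
per_lineage = divisions * per_division       # mutations accumulated along one cell lineage

print(round(divisions), round(per_division), round(per_lineage))
# Two cells from different lineages each carry their own, mostly non-overlapping set,
# so they differ by roughly twice that number of substitutions.
```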
"Biomolecule" has written repeatedly about the interaction of the immune system with cancer cells and about the successes of cancer immunotherapy: "Good, bad, evil, or How to anger lymphocytes and destroy a tumor" [23], "Immunostimulating vaccines" [24], "T cells are puppets, or how to reprogram T-lymphocytes to cure cancer" [25]. – Ed.

A scientific moonshot is a grand, optimistic and worthy enterprise. But such a project should not mislead the public and damage its trust in science, and in this respect the ultimate goal of the project, as former Vice President Biden formulated it, is unrealistic.

The inevitability of cancer in the population

In 1996 a widely publicized interview with the prominent oncologist Alfred G. Knudson was published, in which he said: "As genes continue to mutate spontaneously, the cancer will never be eradicated completely. To think otherwise is unrealistic... But we can hope that in a quarter of a century we minimize the death rate from cancer" [26]. On what did Knudson base his statement?

The body plays with the fire of evolution: the evolutionary inevitability of cancer

Cancer is a consequence of the evolutionary path taken by multicellular organisms, which requires the renewal of tissues throughout the organism's life [27]. Evolution produced a mechanism in which old cells die off and are replaced by new ones, and this process requires continual cell division for as long as the organism lives. But not all cells divide. Once the development of a multicellular animal (and man is no exception) is complete, almost every tissue retains dividing, not fully differentiated cells, the so-called adult stem cells (ASCs) [28]. When old tissue cells die, ASCs divide and differentiate into final adult cells; fresh cells take the place of the dead ones, and this goes on for a lifetime. Each cell division, however, produces mutations in the daughter cells [29], and some of the mutations arising in ASCs can initiate intra-organismal evolutionary events that end in fatal malignancy. Cancer, in other words, is the price of multicellularity [29-31]. One might expect the likelihood of developing cancer to grow with the number of cells in the body, and it follows in turn that the likelihood of cancer grows as life expectancy increases. As the Norwegian scientist Jarle Breivik put it: "Cancer is a natural consequence of aging, and the better the medical science helps to prolong the life of people, the higher the number of cancer patients in the population" [32].

It is now taken as axiomatic that the root cause of cancer is damage to genes, which then drives the subsequent evolution of the complex system that a cancerous tumor is [33]. A number of experimental facts firmly support this conclusion. Most recently, the brilliant work of Cristian Tomasetti and Bert Vogelstein showed that the probability of cancer in a given tissue is almost proportional to the number of stem-cell divisions in that tissue (see below), that is, to the frequency with which mutations arise in it [34]. Tomasetti and Vogelstein concluded that DNA copying errors are responsible for 66% of the mutations, while 29% are attributable to environmental factors and 5% to heredity. Within those 66%, cancer is a stochastic failure of the individual. Bad luck. It is not his fault: he behaved well, did not drink, did not smoke. His cells simply mutated, and by chance he drew an unlucky mutation.
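The proportionality reported by Tomasetti and Vogelstein can be caricatured with a toy calculation in which every stem-cell division carries the same small chance of producing a transforming event. The division counts and the per-division probability below are invented, purely illustrative numbers, not data from [34]; only the qualitative pattern matters.

# Toy model: lifetime risk as a function of the cumulative number of
# stem-cell divisions in a tissue. All numbers here are hypothetical.
p_transform = 1e-11        # assumed chance that a single division yields a transforming event

divisions = {              # hypothetical lifetime stem-cell division counts
    "tissue A": 1e10,
    "tissue B": 1e11,
    "tissue C": 1e12,
}

for tissue, d in divisions.items():
    risk = 1 - (1 - p_transform) ** d      # probability of at least one event
    print(f"{tissue}: {d:.0e} divisions -> lifetime risk ~ {risk:.3f}")

# While p_transform * d is small, the risk is close to p_transform * d,
# i.e. nearly proportional to the number of divisions; at very large
# division counts the toy risk saturates towards 1.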
From this follows the most important strategic conclusion: the main line of attack should be early diagnosis. At an early stage cancer is far easier to cure, because it has not yet had time to build up its defensive mechanisms. Incidence can be reduced by acting on external factors, but changes in environmental conditions reduce incidence only down to the level set by stochastic mutations. In 2008 cancer treatment in the United States cost $93 billion, yet less than 15% of research funding goes to early detection, although early intervention is much more effective than late treatment. The greatest gains could be achieved by reorienting research toward prevention and early detection.

Sergei Moshkovsky discussed molecular biomarkers for the early diagnosis of cancer, the search for them and their significance for medicine, in a recent article on Biomolecule: "Omics biomarkers and early diagnosis: when happiness is possible" [35]. He also commented on this review; his comment can be found at the very end. – Ed.

Cancer treatment is problematic

By the time a tumor is detected (about 10^9 cells, roughly 1 g), a tumor cell may carry 10,000 mutations in its genome. Dr. Glazier (see the quotation in [29]) estimated the possible number of distinct cells with that many randomly distributed mutations at about 10^68000. There are thus no two identical cells within one tumor and no two identical cells in different tumors; moreover, tumors of the same type differ in genetically different patients [36]. A tumor is heterogeneous both genetically and epigenetically [37]: all of its cells differ in genetic makeup, and among them are cells resistant to almost any intervention [38]. Under a therapeutic intervention the sensitive cells die, while the resistant ones survive and give rise to a new tumor that is resistant to the therapy that was used. So-called molecularly targeted therapy, which takes as its targets individual molecules or groups of molecules altered in cancer cells relative to normal ones, is inadequate to the multilayered complexity of cancer. One has the impression that the deeper we penetrate into intimate molecular details and the more narrowly we focus on specific targets, the less adequate our treatments become to the complexity of the problem. Exhaustive genomic genotyping is likely to help only a small fraction of patients [39]. The development of intratumoral heterogeneity also imposes serious limits on attempts to identify mutated molecules or signaling pathways from the molecular analysis of a tumor biopsy: the result obtained from a single biopsy specimen need not be reproduced in other parts of the tumor. Treatment based on such an analysis is therefore unlikely to be of much use, since other parts of the tumor harbor active cells with different molecular characteristics that are not susceptible to the chosen intervention [13].

To make the further presentation easier to follow, it seems useful to give a brief description of the complex systems I have already mentioned, to which every living organism and most of its pathologies belong.

A short excursion into the unsolvable problems created by the complexity of systems

The problems posed by complex systems are considered in detail in recent reviews [5], [6], [40]; here I give only a very brief summary of the main points. A complex system is a multi-component system of interacting subunits whose interactions give rise to so-called emergent properties, properties that belong to the system as a whole and cannot be predicted from the properties of the initial subunits (see below).
Emergent properties are the most important attribute of complex systems. They cannot be ascribed to the individual interacting components; they are properties of the whole system. A system may, moreover, consist of hierarchical levels, each with emergent properties of its own [41-45]. Complex systems are nonlinear and extremely sensitive to initial conditions [46]. This means that the trajectory of the system [41], defined as the change of its state, for example in time, is unpredictable: two systems whose states are very close at the start, and which operate by the same rules, will follow different trajectories as time goes on (a toy numerical illustration of this sensitivity is given below). The immune system, for example, consists of various elements (macrophages, T and B cells, and so on) that interact by exchanging signals (in particular, cytokines). Even under absolutely identical stimuli the immune system, like other complex systems including a cancerous tumor, can respond in completely different ways. Small changes in the influence acting on a complex system do not necessarily produce a small response; often a small perturbation produces a large and unexpected effect. In complex systems it is impossible to predict accurately the effect of environmental factors, and in the organism these effects (along with stochastic factors) begin in utero and continue throughout the individual's life [22]. Finally, complex systems as a whole do not lend themselves to computer simulation [46], [47].

The editors will take the liberty of noting that, although absolutely exact modeling of complex systems is indeed impossible, in many practical cases such "simulations" are not only feasible but can be very useful: "Spatial-temporal modeling in biology" [48], "12 methods in pictures: 'dry' biology" [49]. – Ed.

1. Cancer is a complex system with a large number of interactions with its environment, generating unpredictable emergent properties

A cancerous tumor combines a complex variety of cells, changing in time and space, each with its own signaling cascades, replication, transcription and so on, and each having undergone numerous changes on its way to becoming a cancer cell. It has the inherent complexity of a growing, evolving system, with all the characteristics and properties that allow it to withstand anticancer agents and that produce the intratumoral cellular heterogeneity making every patient's tumor unique [36]. In this respect cancer differs from all other diseases [50]. Yet the complexity of a tumor is far from exhausted by the sets of cancer genes and cancer cells that influence its progression. In the latest version of the hallmarks of cancer, Douglas Hanahan and Robert Weinberg [51] point out that tumors exhibit yet another dimension of complexity: a tumor recruits into its evolution a wide repertoire of normal cells, adapts them to its needs, and these cells help it acquire its hallmark traits, creating what is called the tumor microenvironment, the tumor's ecological niche, which plays an important role both in the evolution of the primary tumor itself and in its metastasis [52], [53].
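The phrase "extremely sensitive to initial conditions" can be made concrete with a standard toy from nonlinear dynamics, the logistic map. The little script below is only a generic illustration of how two almost identical starting states diverge under the same simple rule; it is not a model of a tumor or of the immune system.

# Two trajectories of the logistic map x -> r*x*(1-x), started 1e-9 apart
r = 3.9                        # a parameter value in the chaotic regime
x, y = 0.200000000, 0.200000001

for step in range(1, 81):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 20 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")

# The two trajectories start out identical to nine decimal places, yet after
# a few dozen iterations the gap has grown by many orders of magnitude and
# the two "systems" no longer resemble each other at all.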
Today it can safely be said that perhaps the main source of a tumor's complexity is the enormous number of interactions between the cancer cells themselves (usually epithelial cells) and the various stromal cells that make up the tumor microenvironment [54]. A cancerous tumor lives in symbiosis with its surroundings.

Therapeutic approaches can be directed not at the cancer cells but at destroying the interactions within an evolving cancerous tumor

In recent years a fundamentally new approach has attracted great attention. Instead of treating mutations in cancer cells, the new therapies aim to destroy the complex interactions between cancer cells and the immune components of the stroma that determine the success of the cancer's evolution within the body. These interactions allow cancer cells to suppress the immune cells in their environment and so escape destruction by the immune system. The successful clinical use of inhibitors of these interactions over the past five years [55-57] has demonstrated that cancer can be recognized by the immune system and that the immune system can restrain and even eliminate tumors [24]. Although these treatments have immeasurably extended the lives of many cancer patients, a large number of patients with malignant disease do not respond to the therapy [58-60], and the successes are accompanied by numerous adverse autoimmune effects [61], [62]. In general, the effect of therapy on any particular patient is unpredictable. Future studies will probably uncover new promising immunological targets, or combine old targets with other immunotherapeutic approaches, with chemo- and radiotherapy, with oncolytic viruses and with small-molecule therapy. The results obtained so far demonstrate once again that complexity remains a challenge: its response to interventions is unpredictable.

2. Unsolvable problems in studying the relation of genotype to phenotype and in deciphering the functional architecture of the genome

I would now like to consider the impossibility of determining an exact map of the genomic elements that determine the phenotypes of the organism, in particular the impossibility of establishing the functionality of genomic elements that neither code nor regulate (the so-called junk) [63]. The community of researchers studying the functional elements of the genome has split into two roughly equal and irreconcilable camps, like Jonathan Swift's Lilliputians in their dispute between the Big-Endians and the Little-Endians over the correct way to break an egg. Many believe that, for example, transcripts present at very low levels represent a huge world of functional RNAs simply because they exist; their opponents think there is reason to doubt this view. Many functional coding and non-coding RNAs can undoubtedly be found among such transcripts, but it is even more likely that the overwhelming majority of them are simply junk. So who is right? To answer this question we must first define what the term "function" means. In 2012 the authors of the ENCODE consortium [64] ascribed "biochemical function" to 80% of the human genome [65] on the grounds that it is transcribed. The figure of 80% contradicted the view that up to 90% of the genome is junk, but it was enthusiastically embraced by "determinists" (and by believers in intelligent design of the genome), because it seemed to indicate that the genome contains no non-functional elements.
This interpretation, however, was sharply criticized by proponents of the evolutionary origin of organisms and their genomes: the fact that an element shows some biochemical activity does not necessarily mean that this activity has any significance for the functioning of the cell, let alone of the whole multicellular organism. According to [66] and other authors, a functional element is one that has been selected for in the course of evolution and is retained in the genome for that reason. In my commentary in BioEssays I put it this way: transcribed junk remains junk unless it acquires a function that is selected in evolution [67]. Such functionality is protected by natural selection; if that protection ceases to operate, the functional element accumulates harmful mutations and eventually loses its functional activity [66]. Dan Graur [68] proposed that harmful mutations can damage only the functional part of the genome, whereas mutations in the non-functional parts must be neutral. Because of harmful mutations, each couple in each generation must produce more than two children merely to keep the population size constant, and the larger the functionally important fraction of the genome, the more offspring each pair must produce to maintain the population. Graur found that if 80% of the genome were functional, an unacceptably high birth rate would be required; by his calculations, the functional fraction of the human genome cannot exceed about 25%. His conclusions are supported by recent data showing that 8.2% (7.1-9.2%) of the human genome is currently under negative selection and is therefore likely to be functional [69]. As Graur puts it: "There is no need to sequence everything under the sun. We need to sequence only those parts that, as we know, are functional."

The determination of the position and function of all functional elements of the genome is problematic

Graur's estimates are extremely valuable, especially from an evolutionary point of view, but they do not indicate which specific elements of the genome are functional, or what specific functions those elements perform. This question cannot be answered from an analysis of phenotypes, because that would require solving so-called inverse problems, which in the general case cannot be solved [70]. For complex systems, and especially for ones as complex as an organism, the direct problem cannot be solved either: the properties of the phenotype cannot be derived from the structure of the genome and of the other molecular components involved in forming the phenotype. This impossibility is due to the interactions of those components, which generate unpredictable emergent properties. The simplest paradigmatic example of a direct problem: from the properties of hydrogen and oxygen molecules it is impossible to predict all the properties of water, its boiling point, its surface tension, its properties as a solvent, its specific gravity, its ability to freeze into snowflakes of various shapes, and so on. This is the "phenotype" of water, and it arises from the interactions of hydrogen and oxygen. The inverse problem: from the properties (the "phenotype") of water alone it is impossible to deduce which components it is made of, or to predict their properties. To pass from water to the genome and the organism, I will quote the words [the text in square brackets is a note by the
author] of one of the most respected modern scientists and philosophers of science Sidney Brenner, a Nobel laureate who introduced into science a remarkable model – the nematode. ” The sequence of the human genome was once likened to sending man to the moon. The comparison turns out to be literally correct, because it’s easy to send a person to the moon; his return, that’s what is difficult and expensive. Today the sequence of the human genome, so to speak, is stuck on the metaphorical moon, and our task is to bring it back to Earth and give it the life it deserves. Everyone understood that getting the sequence would be very simple, this is the problem of 3M Science – sufficiency of money, machines and management (Money, Machines and Management). Interpreting the sequence to identify the functions it encodes and the regulatory elements and understanding how they are integrated into the complex physiology of a person has always been considered a difficult task, but since it is easier to continue collecting data, this task[interpretation] were in fact not seriously engaged ” [70] . ” There is no simple way to” map “organisms to their genomes if they have reached a certain level of complexity. … The proposals to base everything on the sequence of the genome, annotating it with additional data [direct problem] , will only lead to an increase in its incomprehensibility ” [70] . Thus, we fall into the “scissors of impossibility.” I suggest that the reader draw conclusions by reading a very interesting article by Brenner [70] . To convince the skeptics, I decided to give such an illustration. Look at the picture. Can anyone imagine how the Einstein chromosomes look when they look at this person? This is the reverse problem. And on the other hand – is it possible, looking at the chromosomes of Einstein, to imagine his appearance or mental abilities? 1. N. Goldenfeld. (1999). Simple Lessons from Complexity . Science . 284 , 87-89; 2. Bohr N. Atomic physics and human cognition . M .: Publishing house of foreign literature, 1961. – p. 22-23; 3. Shklovsky I. Echelon. Moscow: “News.” – from. 109; 4. ED Sverdlov. (2009). Fundamental taboos of biology . Biochemistry Moscow . 74 , 939-944; 5. ED Sverdlov. (2016). Multidimensional complexity of cancer. Simple solutions are needed . Biochemistry Moscow . 81 , 731-738; 6. Sverdlov E.D. (2014). System biology and personalized medicine: to be or not to be? Russian Journal of Physiology. THEM. Sechenov . 100 , 505-541; 7. Sverdlov E.D. (2018). Intolerable problems of biology: you can not create two identical organisms, you can not defeat cancer, you can not map the organism to the genome . Biochemistry . 4 , 515-527; 8. Anne Granger, Rosalind Mott, Nikla Emambokus. (2016). Is Aging as Inevitable as Death and Taxes? . Cell Metabolism . 23 , 947-948; 9. Jan Vijg, Eric Le Bourg. (2017). Aging and the Inevitable Limit to Human Life Span . Gerontology . 63 , 432-434; 10. Eric Le Bourg, Jan Vijg. (2017). The Future of Human Longevity: Time for a Reality Check . Gerontology . 63 , 527-528; 11. Joseph P. Zbilut, Alessandro Giuliani. (2008). Biological uncertainty . Theory Biosci. . 127 , 223-227; 12. Pierluigi Strippoli, Silvia Canaider, Francesco Noferini, Pietro D’Addabbo, Lorenza Vitale, et. al .. (2005).. Theor Biol Med Model . 2 , 40; 13. Ian F. Tannock, John A. Hickman. (2016). Limits to Personalized Cancer Medicine . N Engl J Med . 375 , 1289-1294; 14. From medicine to everyone – to medicine for everyone! ; 15. Genomic puzzle: open the mosaic ; 16. 
Donald Freed, Eric Stevens, Jonathan Pevsner. (2014). Somatic Mosaicism in the Human Genome . Genes . 5 , 1064-1094; 17. Ian M. Campbell, Chad A. Shaw, Pawel Stankiewicz, James R. Lupski. (2016). Erratum to: Somatic Mosaicism: Implications for Disease and Transmission Genetics . Trends in Genetics . 32 , 138; 18. 12 methods in pictures: sequencing of nucleic acids ; 19. Na Zhang, Shumin Zhao, Su-Hua Zhang, Jinzhong Chen, Daru Lu, et. al .. (2015). Intra-Monozygotic Twin Pair Discordance and Longitudinal Variation of Whole-Genome Scale DNA Methylation in Adults. PLoS ONE . 10 , e0135022; 20. Alan G Baxter, Philip D Hodgkin. (2015). No luck replicating the immune response in twins . Genome Med . 7 ; 21. Ray Greek, Mark J. Rice. (2013). Monozygotic Twins . Anesthesiology . 118 , 230; 22. Jenny van Dongen, P. Eline Slagboom, Harmen HM Draisma, Nicholas G. Martin, Dorret I. Boomsma. (2012). The continuing value of twin studies in the omics era . Nat Rev Genet . 13 , 640-653; 23. Good, bad, evil, or How to anger lymphocytes and destroy a tumor ; 24. Immunostimulating vaccines ; 25. T-cells are puppets, or how to reprogram T-lymphocytes to cure cancer ; 26. H. McIntosh. (1996). 25 Years Ahead: Will Cancer Be a “Background-Noise Kind of Disease”? . JNCI Journal of the National Cancer Institute . 88 , 1794-1798; 27. Robert A. Gatenby, Robert J. Gillies, Joel S. Brown. (2010). The evolutionary dynamics of cancer prevention . Nat Rev Cancer . 10 , 526-527; 28. Trunk and branches: stem cells ; 29. Eugene D. Sverdlov. (2011). Genetic Surgery – A Right Strategy to Attack Cancer . CGT . 11 , 501-531; 30. Aging is the cost of suppressing cancerous tumors? ; 31. Why do cells grow old ? 32. Jarle Breivik. (2016). Reframing the “Cancer Moonshot” . EMBO Rep. . 17 , 1685-1687; 33. Andrii I. Rozhok, James DeGregori. (2016). The Evolution of Lifespan and Age-Dependent Cancer Risk. Trends in Cancer . 2 , 552-560; 34. C. Tomasetti, B. Vogelstein. (2015). Variation in cancer risk among tissues can be explained by the number of stem cell divisions . Science . 347 , 78-81; 35. Omiks biomarkers and early diagnostics: when happiness is possible ; 36. LD Wood, DW Parsons, S. Jones, J. Lin, T. Sjoblom, et. al. (2007). The Genomic Landscapes of Human Breast and Colorectal Cancers . Science . 318 , 1108-1113; 37. Hariharan Easwaran, Hsing-Chen Tsai, Stephen B. Baylin. (2014). Cancer Epigenetics: Tumor Heterogeneity, Plasticity of Stem-like States, and Drug Resistance . Molecular Cell . 54 , 716-727; 38. A. Pribluda, CC de la Cruz, EL Jackson. (2015). Intratumoral Heterogeneity: From Diversity Comes Resistance . Clinical Cancer Research . 21 , 2916-2923; 39. J. Kaiser. (2009). Looking for a Target On Every Tumor . Science . 326 , 218-220; 40. Parag Mallick. (2016). Complexity and Information: Cancer as a Multi-Scale Complex Adaptive System. Physical Sciences and Engineering Advances in Life Sciences and Oncology . 5-29; 41. D. Rickles, P. Hawe, A. Shiell. (2007). A simple guide to chaos and complexity . Journal of Epidemiology & Community Health . 61 , 933-937; 42. Béla Suki, Jason HT Bates, Urs Frey. (2011) Complexity and Emergent Phenomena ; 43. Denis Noble. (2013). A biological relativity view of the relationships between genomes and phenotypes . Progress in Biophysics and Molecular Biology . 111 , 59-65; 44. Robert W. Korn. (2005). The Emergence Principle in the Biological Hierarchies . Biol Philos . 20 , 137-151; 45. Marc HV Van Regenmortel. (2004). Reductionism and complexity in molecular biology . EMBO Rep . 
5, 1016-1020; 46. Ray Greek, Lawrence A. Hansen. (2013). Questions about the predictive value of the evolved complex adaptive system for a second: Exemplified by the SOD1 mouse . Progress in Biophysics and Molecular Biology . 113 , 231-253; 47. Ray Greek, Andre Menache. (2013). Systematic Reviews of Animal Models: Methodology versus Epistemology . Int. J. Med. Sci. . 10 , 206-221; 48. Spatial-temporal modeling in biology ; 49. 12 methods in pictures: “dry” biology ; 50. Lauren MF Merlo, John W. Pepper, Brian J. Reid, Carlo C. Maley. (2006). Cancer as an evolutionary and ecological process . Nat Rev Cancer . 6 , 924-935; 51. Douglas Hanahan, Robert A. Weinberg. (2011). Hallmarks of Cancer: The Next Generation . Cell . 144 , 646-674; 52. Tumor conversations, or the role of microenvironment in the development of cancer ; 53. Metastasis of tumors ; 54. Mina J Bissell, William C Hines. (2011). Why do not we get more cancer? A proposed role of the microenvironment in restraining cancer progression . Nat Med . 17 , 320-329; 55. Yvonne Bordon. (2015). Checkpoint parley . Nat Rev Cancer . 15 , 3-3; 56. Mark J. Smyth, Shin Foong Ngiow, Antoni Ribas, Michele WL Teng. (2016). Combination cancer immunotherapies tailored to the tumour microenvironment . Nat Rev Clin Oncol . 13 , 143-158; 57. Nadiah Abu, M. Nadeem Akhtar, Swee Keong Yeap, Kian Lam Lim, Wan Yong Ho, et. al .. (2016). Flavokawain B induced cytotoxicity in two breast cancer cell lines, MCF-7 and MDA-MB231 and inhibited the metastatic potential of MDA-MB231 via the regulation of several tyrosine kinases. In vitro . BMC Complement Altern Med . 16 ; 58. Margaret K. Callahan, Michael A. Postow, Jedd D. Wolchok. (2015). CTLA-4 and PD-1 Pathway Blockade: Combinations in the Clinic . Front. Oncol. . 4 ; 59. Yael Diesendruck, Itai Benhar. (2017). Novel, immune checkpoint inhibiting antibodies in cancer therapy-Opportunities and challenges . Drug Resistance Updates . 30 , 39-47; 60. TJ Vreeland, GT Clifton, GS Herbert, DF Hale, DO Jackson, et. al .. (2016). Gaining ground on synergy: combining checkpoint inhibitors with cancer vaccines . Expert Review of Clinical Immunology . 12 , 1347-1357; 61. Leonard Calabrese, Vamsidhar Velcheti. (2017). Checkpoint immunotherapy: good for cancer therapy, bad for rheumatic diseases . Ann Rheum Dis . 76 , 1-3; 62. Claire F. Friedman, Tracy A. Proverbs-Singh, Michael A. Postow. (2016). Treatment of the Immune-Related Adverse Effects of Immune Checkpoint Inhibitors . JAMA Oncol . 2 , 1346; 63. How much rubbish in our DNA ; 64. Dream called? ; 65. The ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome . Nature . 489 , 57-74; 66. Graur D. (2016). Rubbish DNA: the functionless fraction of the human genome . Cornell University library; 67. Eugene Sverdlov. (2017). Transcribed Junk Remains Junk If It Does Not Acquire A Selected Function in Evolution . BioEssays . 39 , 1700164; 68. Dan Graur. (2017). An Upper Limit on the Functional Fraction of the Human Genome . Genome Biology and Evolution . 9 , 1880-1885; 69. Chris M. Rands, Stephen Meader, Chris P. Ponting, Gerton Lunter. (2014). 8.2% of the Human Genome Is Constrained: Variation in Rates of Turnover across the Functional Element Classes in the Human Lineage . PLoS Genet . 10 , e1004525; 70. S. Brenner. (2010). Sequences and consequences . Philosophical Transactions of the Royal Society B: Biological Sciences . 365 , 207-212; 71. Erika Check Hayden. (2010). Human genome at ten: Life is complicated . Nature . 
464 , 664-667; 72. John H. Doonan, Robert Sablowski. (2010). Walls around tumours . Nat Rev Cancer . 10 , 794-802.
Saturday, November 29, 2014 Letter to Hom on Christian Salvation Alright, now we do a bit of theology! Aren't you excited? From math to quantum to cosmology to the heavenly Father of Jesus! Quite a healthy wide range of things for the spirit to grow, wouldn't you say? ;-) This specific post/thread is for dear brother Hom on twitter who wrote me to discuss and share my thoughts on the issue of whether a Christian who is saved can lose or keep his/her salvation after numerous sins post their commitment to faith in Christ. In his tweets to me, Hom said: "I was taught that in my early years and I "kinda" maintain that belief. Yet I know NO passages in the Biblical text that make the point clear. Can you help?" The belief that Hom alluded to here is the view I expressed in our twitter discussion that there can be cases of alleged Christians who could 'lose' their salvation, or grace, by the continued sinful lives that they can lead subsequently -- even as replete that life may be with sins of egregious proportions (such as committing genocide, indulging in the major sins stated in the Bible, etc, without naming them all). Some believers are of the view that once you have given your life to Christ as your Savior, you are also saved of your subsequent sins too, no matter how big or small they may be after your pronounced belief. Consequently, even if you live a life of crime, rape, murder, stealing, lying, and any sin you can imagine (so much for the Ten Commandments!), you are still saved because you cannot lose it. So consequently, Hitler, being a Christian would still be saved by grace even after all the inhumanity and genocide that he has caused to tens of millions of human beings. So, you are free to do as you please, sin to whatever degree that you wish (to whatever extreme), and you are assured that you are saved and will go to heaven. Of course, I cannot accept such a view. (And I never have.) That is not at all how I read the Bible, especially the teachings of the New Testament as to how Christians should conduct their lives. (The Hebrew Scriptures already prescribe divine punishment to various of these sins even for alleged believers of the Mosaic community!) Indeed, St Paul has dedicated a fair amount of his time, travels, and writings to some churches in Asia Minor where he heard that numerous egregious sins continue to be committed by (alleged) Christian members who believed that they had been saved so that now they can do whatever they wanted. That is why St Paul explicitly emphasized: The Letter of James also teaches that a Christian is responsible to showing how his/her life conduct is to be exemplified through their behavior, actions, or works -- backed, of course, through their faith in Christ. The Lord Jesus taught us to judge a tree by the fruit that it bears - very eloquently, simply, and without any complicated theology. When the tree produces bad fruit, what is to be done unto it? The Master said "Every tree that does not bear good fruit is cut down and thrown into the fire. 20 Thus, by their fruit you will recognize them." (Matthew 7:19-20, NIV.) Clearly, such a tree cannot then have been saved or ascribed salvation to begin with, even if that tree lead others to believe that it was a good tree. Therefore, in extreme cases like Hitler, or anyone allegedly claiming to be a Christian, such continued bearing of bad fruit would at the very least cast serious doubt on their claims to being believers. 
Would we believe someone who claims allegiance to the US Constitution only to see that individual violate its articles and laws time and again (even in extreme ways)? I would certainly at least question them. So we're not saying that they had grace and then lost it, but that maybe they didn't have grace in the first place (as we assumed by taking their claim at face value). There are many examples like the above in the Bible that tell me that the view I expressed is far more reasonable (or at least less problematic) than the view that the Christian community could be allowed to harbor such horrific individuals who do such harm to the faith. If Christians are serious about Jesus' teaching, they are responsible for acting it out in their hearts and minds as well as with their fellow man, their neighbor. I hope that I have shared my thoughts with you, Hom, in a gentle spirit, even as I am no Bible teacher nor do I have a degree in theology! But I speak as just one believer, sharing my thoughts and experiences. Ultimately, Jesus knows the full precise answers. As St Paul said, we know in part and we prophesy in part, and in another place he says "For now we see through a glass, darkly." Yours in Christ,

Saturday, November 8, 2014
A game with $\pi$
It's a little game you can play with any irrational number; I took $\pi$ as an example. Split off the integer part, take the reciprocal of what is left over, and repeat. For $\pi$ this produces the continued fraction expansion $\pi = 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \cdots}}}}$ with partial quotients $3, 7, 15, 1, 292, \dots$ As an example, if you truncate the expansion where the 292 appears (so you omit the "1 over 292" part) you get the rational number $\frac{355}{113}$, which approximates $\pi$ to 6 decimal places. (Better than $\frac{22}{7}$, which is what you get by truncating right after the 7.) For the square root of 2 the sequence of partial quotients is $1, 2, 2, 2, \dots$ (all 2's after the 1). For the square root of 3 the continued fraction sequence is $1, 1, 2, 1, 2, 1, 2, \dots$ (so it starts with 1 and then the pair "1, 2" repeats periodically forever).

Monday, August 18, 2014
3-4-5 complex number has Infinite order
This is a good exercise/challenge with complex numbers. Consider the complex number $Z = \large \frac35 + \frac45 i$. (Where $\large i = \sqrt{-1}$.) Prove that $Z^n$ is never equal to 1 for any positive whole number $n = 1, 2, 3, 4, \dots$. This complex number $Z$ comes from the familiar 3-4-5 right triangle that you all know: $3^2 + 4^2 = 5^2$. In math we sometimes say that an object $X$ has "infinite order" when no positive power of it can be the identity (1, in this multiplicative case). For example, $i$ itself has finite order 4 since $i^4 = 1$, while 2 has infinite order since no positive power of 2 can be equal to 1. The distinctive feature of $Z$ above is that it has modulus 1, so it lies on the unit circle $\mathbb T$ in the complex plane.

Wednesday, July 30, 2014
Multiplying Spaces!
Believe it or not, in Math we can not only multiply numbers but we can multiply spaces! We can multiply two spaces to get bigger spaces - usually of bigger dimensions. The 'multiplication' that I'm referring to here is known as the tensor product. The things/objects in these spaces are called tensors. (Tensors are like vectors in a way.) Albert Einstein used tensors in his Special and his General Theory of Relativity (his theory of gravity). Tensors are also used in several branches of Physics, like the theory of elasticity where various stresses and forces act in various ways. And definitely in quantum field theory. It may sound crazy to say you can "multiply spaces," as we would multiply numbers, but it can be done in a precise and logical way. But here I will spare you the technical details and try to show you the idea that makes it possible. Q. What do you mean by 'spaces'?
I mean a set of things that behave like 'vectors' so that you can add two vectors and get a third vector, and where you can scale a vector by any real number. The latter is called scalar multiplication, so if $v$ is a vector, you can multiply it by $0.23$ or by $-300.87$ etc and get another vector: $0.23v$, $-300.87v$, etc.) The technical name is vector space. A straight line that extends in both directions indefinitely would be a good example (an Euclidean line). Another example is you take the $xy$-plane, 2D-space or simply 2-space, or you can take $xyz$-space, or if you like you can take $xyzt$-spacetime known also as Minkowski space which has 4 dimensions. Q. How do you 'multiply' such spaces? First, the notation. If $U$ and $V$ are spaces, their tensor product space is written as $U \otimes V$. (It's the multiplication symbol with a circle around it.) If this is to be an actual multiplication of spaces there is one natural requirement we would want. That the dimensions of this tensor product space $U \otimes V$ should turn out to be the multiplication of the dimensions of U and of V. So if $U$ has dimension 2 and $V$ has dimension 3, then $U \otimes V$ ought to have dimension $2 \times 3 = 6$.  And if $U$ and $V$ are straight lines, so each of dimension 1, then $U \otimes V$ will also be of dimension 1. Q. Hey, wait a second, that doesn't quite answer my question. Are you dodging the issue!? Ha! Yeah, just wanted to see if you're awake! ;-) And you are! Ok, here's the deal without going into too much detail. We pointed out above how you can scale vectors by real numbers. So if you have a vector $v$ from the space $V$ you can scale it by $0.23$ and get the vector $0.23v$. Now just imagine if we can scale the vector $v$ by the vectors in the other space $U$! So if $u$ is a vector from $U$ and $v$ a vector from $V$, then you can scale $v$ by $u$ to get what we call their tensor product which we usually write like $u \otimes v$. So with numbers used to scale vectors, e.g. $0.23v$, we could also write it as $0.23 \otimes v$. But we don't normally write it that way when numbers are involved, only when non-number vectors are. Q. So can you also turn this around and refer to $u \otimes v$ as the vector $u$ scaled by the vector $v$? Absolutely! So we have two approaches to this and you can show (by a proof) that the two approaches are in fact equivalent. In fact, that's what gives rise to a theorem that says Theorem. $U \otimes V$ is isomorphic to $V \otimes U$. (In Math, the word 'isomorphism' gives a precise meaning to what I mean by 'equivalent'.) Anyway, the point has been made to describe multiplying spaces: you take their vectors and you 'scale' those of one space by the vectors of the other space. There's a neat way to actually see and appreciate this if we use matrices as our vectors. (Yes, matrices can be viewed as vectors!) Matrices are called arrays in computer science. One example / experiment should drive the point home: Let's take these two $2 \times 2$ matrices $A$ and $B$: $A = \begin{bmatrix} 2 & 3 \\ -1 & 5 \end{bmatrix}, \ \ \ \ \ \ \  B = \begin{bmatrix} -5 & 4 \\ 6 & 7 \end{bmatrix}$ To calculate their tensor product $A \otimes B$, you can take $B$ and scale it by each of the numbers contained in $A$! 
Like this: $A\otimes B = \begin{bmatrix} 2B & 3B \\ -1B & 5B \end{bmatrix}$ If you write this out you will get a 4 x 4 matrix when you plug B into it: $A\otimes B = \begin{bmatrix} -10 & 8 & -15 & 12 \\ 12 & 14 & 18 & 21 \\ 5 & -4 & -25 & 20 \\ -6 & -7 & 30 & 35 \end{bmatrix}$ Oh, and 4 times 4 is 16, yes so the matrix $A\otimes B$ does in fact have 16 entries in it! Check! Q. You could also do this the other way, by scaling $A$ using each of the numbers in $B$, right? Right! That would then give $B\otimes A$. When you do this you will get different matrices/arrays but if you look closely you'll see that they have the very same set of numbers except that they're permuted around in a rather simple way.  How? Well, if you switch the two inner columns and the two inner rows of $B\otimes A$ you will get exactly $A\otimes B$! Try this experiment with the above $A$ and $B$ examples by working out $B\otimes A$ as we've done. This illustrates what we mean in Math by 'isomorphism': that even though the results may look different, they are actually related to one another in a sort of 'linear' or 'algebraic' fashion. Ok, that's enough. We get the idea. You can multiply spaces by scaling their vectors by each other. Amazing how such an abstract idea turns out to be a powerful tool in understanding the geometry of spaces, in Relativity Theory, and also in quantum mechanics (quantum field theory). Warm Regards, Saturday, July 26, 2014 Bertrand's "postulate" and Legendre's Conjecture Bertrand's "postulate" states that for any positive integer $n > 1$, you can always find a prime number $p$ in the interval $n < p < 2n$. It use to be called "postulate" until it became a theorem when Chebyshev proved it in 1850. (I saw this while browsing thru a group theory book and got interested to read up a little more.) What if instead of looking at $n$ and $2n$ you looked at consecutive squares? So for example you take a positive integer $n$ and you ask whether we can always find at least one prime number between $n^2$ and $(n+1)^2$. Turns out this is a much harder problem and it's still an open question called: Legendre's Conjecture. For each positive integer $n$ there is at least one prime $p$ such that $n^2 < p < (n+1)^2$. People have used programming to check this for large numbers and have always found such primes, but no proof (or counterexample) is known. If you compare Legendre's with Bertrand's you will notice that $(n+1)^2$ is a lot less than $2n^2$. (At least for $n > 2$.) In fact, the asymptotic ratio of the latter divided by the former is 2 (not 1) for large $n$'s. This shows that the range of numbers in the Legendre case is much narrower than in Bertrand's. The late great mathematician Erdos proved similar results by obtaining k primes in certain ranges similar to Bertand's. A deep theorem related to this is the Prime Number Theorem which gives an asymptotic approximation for the number of primes up to $x$. That approximating function is the well-known $x/\ln(x)$. Great sources: [1] Bertrand's "postulate" [2] Legendre's Conjecture (See also wiki's entries under these topics.) Friday, July 25, 2014 Direct sum of finite cyclic groups The purpose of this post is to show how a finite direct sum of finite cyclic groups $\Large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$ can be rearranged so that their orders are in increasing divisional form: $m_1|m_2|\dots | m_n$. We use the fact that if $p, q$ are coprime, then $\large \Bbb Z_p \oplus \Bbb Z_q = \Bbb Z_{pq}$. 
(We'll use equality $=$ for isomorphism $\cong$ of groups.) Let $p_1, p_2, \dots p_k$ be the list of prime numbers in the prime factorizations of all the integers $m_1, \dots, m_n$. Write each $m_j$ in its prime power factorization $\large m_j = p_1^{a_{j1}}p_2^{a_{j2}} \dots p_k^{a_{jk}}$. Therefore $\Large \Bbb Z_{m_j} = \Bbb Z_{p_1^{a_{j1}}} \oplus \Bbb Z_{p_2^{a_{j2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{jk}}}$ and so the above direct sum  $\large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$ can be written out in matrix/row form as the direct sum of the following rows: $\Large\Bbb Z_{p_1^{a_{11}}} \oplus \Bbb Z_{p_2^{a_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{1k}}}$ $\Large\Bbb Z_{p_1^{a_{21}}} \oplus \Bbb Z_{p_2^{a_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{2k}}}$ $\Large \vdots$ $\Large\Bbb Z_{p_1^{a_{n1}}} \oplus \Bbb Z_{p_2^{a_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{nk}}}$ Here, look at the powers of $p_1$ in the first column. They can be permuted / arranged so that their powers are in increasing order. The same with the powers of $p_2$ and the other $p_j$, arrange their groups so that the powers are increasing order. So we get the above direct sum isomorphic to $\Large\Bbb Z_{p_1^{b_{11}}} \oplus \Bbb Z_{p_2^{b_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{1k}}}$ $\Large\Bbb Z_{p_1^{b_{21}}} \oplus \Bbb Z_{p_2^{b_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{2k}}}$ $\Large \vdots$ $\Large\Bbb Z_{p_1^{b_{n1}}} \oplus \Bbb Z_{p_2^{b_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{nk}}}$ where, for example, the exponents $b_{11} \le b_{21} \le \dots \le b_{n1}$ are a rearrangement of the numbers $a_{11}, a_{21}, \dots, a_{n1}$ (in the first column) in increasing order.  Do the same for the other columns. Now put together each of these rows into cyclic groups by multiplying their orders, thus $\Large\ \ \Bbb Z_{N_1}$ $\Large \oplus \Bbb Z_{N_2}$ $\Large \vdots$ $\Large \oplus \Bbb Z_{N_n}$ $\large N_1 = p_1^{b_{11}} p_2^{b_{12}} \dots p_k^{b_{1k}}$, $\large N_2 = p_1^{b_{21}} p_2^{b_{22}} \dots p_k^{b_{2k}}$, $\large \vdots$ $\large N_n = p_1^{b_{n1}} p_2^{b_{n2}} \dots p_k^{b_{nk}}$. In view of the fact that the $b_{1j} \le b_{2j} \le \dots \le b_{nj}$ is increasing for each $j$, we see that $N_1 | N_2 | \dots | N_n$, as required. $\blacksquare$ Latex on Blogger LaTeX work here if you add the small script below: Short exact sequence: $\large 0 \to H \to G \to G/H \to 0$ and an integral $\large \int \sqrt{x} dx$. Each finite Abelian group is isomorphic to a direct sum of cyclic groups where $m_1|m_2|\dots | m_n$. (One of my favorite results from group theory.) Thanks to a gentle soul's responding at tex.stackexchange: To get LaTeX to work on Blogger, go to Design, then to "edit HTML", then to "edit template". In the HTML file insert the following script right after where it says < head >: extensions: ["tex2jax.js","TeX/AMSmath.js","TeX/AMSsymbols.js"], tex2jax: { Tuesday, June 17, 2014 Richard Feynman on Erwin Schrödinger I thought it is interesting to see what the great Nobel Laureate physicist Richard Feynman said about Erwin Schrödinger's attempts to discover the famous Schrödinger equation in quantum mechanics: When Schrödinger first wrote it [his equation] down, He gave a kind of derivation based on some heuristic Arguments and some brilliant intuitive guesses. Some Of the arguments he used were even false, but that does Not matter; the only important thing is that the ultimate Equation gives a correct description of nature.                                     
-- Richard P. Feynman (The Feynman Lectures on Physics, Vol. III, Chapter 16, 1965.)
It has been my experience in reading physics books that this sort of `heuristic' reasoning is part of doing physics. It is a very creative (sometimes not logical!) art with mathematics in attempting to understand the physical world. Dirac did it too when he obtained his Dirac equation for the electron.

Tuesday, June 3, 2014
Entangled and Unentangled States
Let's take a simple example of electron spin states. The reason it is 'simple' is that you only have two states to consider: either an electron's spin is 'up' or it is 'down'. We can use the notation ↑ to stand for the state where its spin is 'up' and the down arrow ↓ to indicate that its spin is down. If we write, say, two arrows together like this ↑↓ it means we have two electrons, the first one with spin up and the second one with spin down. So, ↓↓ means both particles have spin down. Now one way in which the two particles can be arranged experimentally is in an entangled form. One state that describes such a situation is a wavefunction like this: Ψ = ↑↓ - ↓↑. This is a superposition state combining (in some mysterious fashion!) two basic states: the first one, ↑↓, describes a situation where the first particle has spin up and the second spin down, and the second state, ↓↑, describes a situation where the first particle has spin down and the second particle has spin up. But when the two particles are in the combined superposition state Ψ (as above), the pair is in some sort of mix of those two scenarios. Like the case of the cat that is half dead and half alive! :-) Why exactly is this state Ψ 'entangled' -- and what exactly do we mean by that? Well, it means that if you measure the spin of the first electron and you discover that its spin is down ↓, let's say, that picks out the part "↓↑" of the state Ψ! And this means that the second electron must have spin up! They're entangled! They're tied up together, so knowing some spin info about one tells you the spin info of the other - instantly! This is so because the system has been set up to be in the state described by Ψ. Now what about an unentangled state? What would that look like for our 2-electron example? Here's one: Φ = ↑↑ + ↓↓ + ↑↓ + ↓↑. This state is made up of two electrons that can have both spins up (namely, ↑↑), both spins down (↓↓), or one spin up and the other down (↑↓ or ↓↑). It is called a "product state" because it factors: Φ is (↑ + ↓) for the first electron "times" (↑ + ↓) for the second, and product states are not entangled. In this wavefunction Φ, if you measure the spin, say, of the first electron and you find that it is up ↑, then what about the spin of the other one? Well, here you have two possibilities, namely ↑↑ and ↑↓, involved in Φ, which means that the second electron can be either in the up spin or the down spin. No entanglement, no correlation as in the Ψ case above. Knowing the spin state of one particle doesn't tell you what the other one has to be. You can illustrate the same kind of examples with photon polarization, so you can have their polarizations entangled or unentangled - depending on how the system is set up by us or by nature.

Thursday, May 29, 2014
Periodic Table of Finite Simple Groups
Chemistry has its well known and fantastic periodic table of elements. In group theory we have an analogous 'periodic table' that describes the classification of the finite simple groups (shown below). (A detailed PDF is available.)
It summarizes decades worth of research work by many great mathematicians to determine all the finite simple groups. It is an amazing feat! And a work of great beauty. Groups are used in studies of symmetry - in Math and in the sciences, especially in Physics and Chemistry. A group is basically a set $G$ of objects that can be "combined", so that two objects $x, y$ in $G$ produce a third object $x \ast y$ in G. Loosely, we refer to this combination or operation as a 'multiplication' (or it could be an 'addition'). This operation has to have three basic rules: 1. The operation must associative, i.e. $(x\ast y)\ast z = x\ast(y\ast z)$ for all objects $x, y, z$ in $G$. 2. $G$ contains a special object $e$ such that $e\ast x = x = x\ast e$ for all objects $x$ in $G$. 3. Each object $x$ in $G$ has an associated object $y$ such that $x\ast y = e = y\ast x$. Condition 2 says that the object $e$ has no effect on any other object - it is called the "identity" object. It behaves much like the real number 0 in relation to the addition + operation since $x + 0 = x = 0 + x$ for all real numbers. (Here, in this example, $\ast$ is addition + and e is 0.) As a second example, $e$ could also be the real number 1 if $ast$ stood for multiplication (in which case we take $G$ to be all real numbers except 0). Condition 3 says that each object has an 'inverse' object. Or, that each object could be 'reversed'. It turns out that you can show that the $y$ in condition 3 is unique for each $x$ and is instead denote by $y = x^{-1}$. The commutative property -- namely that $x\ast y = y\ast x$ -- is not assumed, so almost all of the groups in the periodic table do not have this property. (Groups that do have this property are called Abelian or commutative groups.) The Abelian simple groups are the 'cyclic' ones that appear in the right most column of the table. (Notice that their number of objects is a prime number $2, 3, 5, 7, 11, \dots$ etc.) The periodic table lists all of the finite simple groups. So they are groups as we just described. And they are finite in that each group $G$ has finitely many elements. (There are many infinite groups used in physics but these aren't part of the table.) But now what are 'simple' groups? Basically, they are ones that cannot be 'made up' of yet smaller groups or other groups. (More technically, a group $G$ is said to be simple when there isn't a nontrivial normal subgroup $H$ inside of $G$ -- i.e., $H$ is a subset of $G$ and is also a group under the same $\ast$ operation of $G$, and further $xHx^{-1}$ is contained in $H$ for any object $x \in G$.) So a simple group is like a basic object that cannot be "broken down" any further, like an 'atom', or a prime number. One of the deepest results in the theory of groups that helped in this classification is the Feit-Thompson Theorem which says: each group with an odd number of objects is solvable. (The proof was written in the 1960s and is over 200 pages - published in Pacific Journal of Mathematics.) Wednesday, May 28, 2014 St Augustine on the days of Creation In his City of God, St Augustine said in reference to the first three days of creation: "What kind of days these were it is extremely difficult, or perhaps impossible for us to conceive, and how much more to say!" (See Chapter 6.) So a literal 24 hour day seemed not the plain meaning according to St Augustine. He does not venture to speculate but leaves the matter open. That was long long before modern science. 
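By the way, conditions 1-3 in the group post above are easy to check by brute force on a computer. Here is a minimal Python sketch for the cyclic group $\Bbb Z_n$ under addition mod $n$ (just an illustration; for prime $n$ these are exactly the Abelian simple groups in the right-most column of the table):

# Brute-force check of the three group axioms for Z_n = {0, 1, ..., n-1}
# under addition modulo n.
def is_group(elements, op):
    elements = list(elements)
    # 1. associativity
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elements for b in elements for c in elements)
    # 2. an identity element e with op(e, x) = x = op(x, e) for every x
    identities = [e for e in elements
                  if all(op(e, x) == x == op(x, e) for x in elements)]
    # 3. every element has an inverse
    has_inverses = bool(identities) and all(
        any(op(x, y) == identities[0] == op(y, x) for y in elements)
        for x in elements)
    return assoc and len(identities) == 1 and has_inverses

n = 7
print(is_group(range(n), lambda a, b: (a + b) % n))   # True: Z_7 is a group of prime order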
Sunday, May 25, 2014
Experiment on Dark Matter yields nothing
The most sensitive experiment to date designed to detect dark matter, the Large Underground Xenon (LUX) experiment, has not registered any sign of the substance. See the Nature article: No sign of dark matter in underground experiment, by Eugenie Samuel Reich. As a result, some scientists are considering other (exotic) possibilities: Dark-matter search considers exotic possibilities, by Clara Moskowitz.

Saturday, May 17, 2014
This is my first (test) post on this blog. I'm just examining it, maybe to add a few thoughts and ideas now and then. Thank you for reading.
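P.S. The truncate-and-invert game from the post "A game with $\pi$" above is easy to play by machine. A minimal Python sketch (just an illustration of the game, nothing more):

# Continued-fraction game: repeatedly split off the integer part and invert
# the remainder. For pi this gives [3; 7, 15, 1, 292, ...]; cutting just
# before the 292 gives the convergent 355/113.
import math
from fractions import Fraction

def continued_fraction(x, terms):
    quotients = []
    for _ in range(terms):
        a = int(x)
        quotients.append(a)
        x = 1 / (x - a)
    return quotients

def convergent(cf):
    value = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        value = a + 1 / value
    return value

cf = continued_fraction(math.pi, 5)
print(cf)                      # [3, 7, 15, 1, 292]
print(convergent(cf[:2]))      # 22/7
print(convergent(cf[:4]))      # 355/113, good to about six decimal places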
From Wikipedia, the free encyclopedia

Helium atom (illustration): the helium atom in its ground state, showing the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10⁻¹⁰ m or 100 pm).

Smallest recognized division of a chemical element
Mass range: 1.67×10⁻²⁷ to 4.52×10⁻²⁵ kg
Electric charge: zero (neutral), or ion charge
Diameter range: 62 pm (He) to 520 pm (Cs)
Components: electrons and a compact nucleus of protons and neutrons

Atoms are small enough that attempting to predict their behavior using classical physics – as if they were billiard balls, for example – gives noticeably incorrect predictions due to quantum effects. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons. Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the number of protons and electrons are equal, that atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion. The electrons of an atom are attracted to the protons in the atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by a different force, the nuclear force, which is usually stronger than the electromagnetic force repelling the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus, leaving behind a different element: nuclear decay resulting in nuclear transmutation. The number of protons in the nucleus defines to what chemical element the atom belongs: for example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature and is the subject of the discipline of chemistry.

History of atomic theory

Atoms in philosophy
The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece and India. The word "atom" was coined by the ancient Greek philosophers Leucippus and his pupil Democritus.[1][2] However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect. They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter.
It was not until the 19th century that the idea was embraced and refined by scientists, when the blossoming science of chemistry produced discoveries that only the concept of atoms could explain. First evidence-based theory Various atoms and molecules as depicted in John Dalton's A New System of Chemical Philosophy (1808). In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions). For instance, there are two types of tin oxide: one is 88.1% tin and 11.9% oxygen and the other is 78.7% tin and 21.3% oxygen (tin(II) oxide and tin dioxide respectively). This means that 100g of tin will combine either with 13.5g or 27g of oxygen. 13.5 and 27 form a ratio of 1:2, a ratio of small whole numbers. This common pattern in chemistry suggested to Dalton that elements react in whole number multiples of discrete units—in other words, atoms. In the case of tin oxides, one tin atom will combine with either one or two oxygen atoms.[3] Dalton also believed atomic theory could explain why water absorbs different gases in different proportions. For example, he found that water absorbs carbon dioxide far better than it absorbs nitrogen.[4] Dalton hypothesized this was due to the differences between the masses and configurations of the gases' respective particles, and carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2). Brownian motion In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first Statistical physics analysis of Brownian motion.[5][6][7] French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory.[8] Discovery of the electron The Geiger–Marsden experiment Discovery of the nucleus In 1909, Hans Geiger and Ernest Marsden, under the direction of Ernest Rutherford, bombarded a metal foil with alpha particles to observe how they scattered. They expected all the alpha particles to pass straight through with little deflection, because Thomson's model said that the charges in the atom are so diffuse that their electric fields could not affect the alpha particles much. However, Geiger and Marsden spotted alpha particles being deflected by angles greater than 90°, which was supposed to be impossible according to Thomson's model. To explain this, Rutherford proposed that the positive charge of the atom is concentrated in a tiny nucleus at the center of the atom.[11] Rutherford compared his findings to one firing a 15-inch shell at a sheet of tissue paper and it coming back to hit the person who fired it.[12] Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table.[13] The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. 
Thomson created a technique for separating atom types through his work on ionized gases, which subsequently led to the discovery of stable isotopes.[14] Bohr model The Bohr model of the atom, with an electron making instantaneous "quantum leaps" from one orbit to another. This model is obsolete. In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon.[15] This quantization was used to explain why the electrons orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra.[16] Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.[17] Chemical bonding explained Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons.[18] As the chemical properties of the elements were known to largely repeat themselves according to the periodic law,[19] in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.[20] Further developments in quantum physics The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field.[21] In 1925 Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (Matrix Mechanics) [17]. One year earlier, in 1924, Louis de Broglie had proposed that all particles behave to an extent like waves and, in 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom (Wave Mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927 [17]. 
In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa.[22] This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.[23][24] Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule.[25] The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.[26] Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product.[27][28] A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result were the first experimental nuclear fission.[29][30] In 1944, Hahn received the Nobel prize in chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.[31] In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.[32] Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions.[33] Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron; all three are fermions. However, the hydrogen-1 atom has no neutrons and the hydron ion has no electrons. The electron is by far the least massive of these particles at 9.11×10−31 kg, with a negative electrical charge and a size that is too small to be measured using available techniques.[34] It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10−27 kg. 
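As a rough numerical illustration (not part of the cited sources; the values are the rounded figures quoted in this section, together with the free-neutron mass given in the next paragraph), the short Python sketch below checks the proton-to-electron and neutron-to-electron mass ratios and the fraction of a helium-4 atom's mass that sits in its nucleus.

```python
# Illustrative cross-check of the particle data quoted in this section.
# The constants are the rounded values cited in the text, not an authoritative data source.

ELECTRON_MASS_KG = 9.11e-31    # electron rest mass
PROTON_MASS_KG   = 1.6726e-27  # proton rest mass
NEUTRON_MASS_KG  = 1.6749e-27  # free neutron rest mass (quoted in the following paragraph)

# Mass ratios relative to the electron (the text quotes about 1,836 and 1,839)
print(f"proton / electron mass ratio : {PROTON_MASS_KG / ELECTRON_MASS_KG:,.0f}")
print(f"neutron / electron mass ratio: {NEUTRON_MASS_KG / ELECTRON_MASS_KG:,.0f}")

# Fraction of a helium-4 atom's mass carried by its nucleus (2 protons + 2 neutrons vs. 2 electrons)
nucleus_kg = 2 * PROTON_MASS_KG + 2 * NEUTRON_MASS_KG
atom_kg = nucleus_kg + 2 * ELECTRON_MASS_KG
print(f"nuclear mass fraction of helium-4: {nucleus_kg / atom_kg:.4%}")  # well above 99.94%, as stated earlier
```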
The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it the proton.

Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron,[35] or 1.6749×10⁻²⁷ kg, the heaviest of the three constituent particles, although this mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined.[36] The neutron was discovered in 1932 by the English physicist James Chadwick.

In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.[37][38]

The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force with somewhat different range properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.[37][38]

The binding energy needed for a nucleon to escape the nucleus, for various isotopes.

All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A fm, where A is the total number of nucleons.[39] This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.[40]

Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.[41]

The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle, which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. However, a proton and a neutron are allowed to occupy the same quantum state.[42]

For atoms with low atomic numbers, a nucleus that has more neutrons than protons tends to drop to a lower energy state through radioactive decay so that the neutron–proton ratio is closer to one.
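As a brief aside, the cube-root scaling of the nuclear radius quoted above can be made concrete with a short sketch. This is illustrative only; the 1.07 fm coefficient and the ~10⁵ fm atomic scale are simply the numbers cited in the text.

```python
# Illustrative use of the empirical nuclear-radius estimate quoted above: r ≈ 1.07 * A**(1/3) fm.

R0_FM = 1.07           # coefficient in femtometres, as cited in the text
ATOMIC_SCALE_FM = 1e5  # typical atomic radius, ~10^5 fm, for comparison

def nuclear_radius_fm(mass_number: int) -> float:
    """Rough nuclear radius in fm for a nucleus with `mass_number` nucleons."""
    return R0_FM * mass_number ** (1 / 3)

for name, A in [("helium-4", 4), ("carbon-12", 12), ("iron-56", 56), ("lead-208", 208)]:
    r = nuclear_radius_fm(A)
    print(f"{name:>9}: r ≈ {r:4.1f} fm (the atom is ~{ATOMIC_SCALE_FM / r:,.0f} times larger)")
```

Even for the heaviest nuclides, the nucleus remains tens of thousands of times smaller than the atom as a whole.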
However, as the atomic number increases, a higher proportion of neutrons is required to offset the mutual repulsion of the protons. Thus, there are no stable nuclei with equal proton and neutron numbers above atomic number Z = 20 (calcium), and as Z increases, the neutron–proton ratio of stable isotopes increases.[42] The stable isotope with the highest neutron–proton ratio is lead-208 (about 1.5).

Illustration of a nuclear fusion process that forms a deuterium nucleus, consisting of a proton and a neutron, from two protons. A positron (e⁺)—an antimatter electron—is emitted along with an electron neutrino.

The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus.[43] Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high-energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.[44][45]

If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.[46]

The fusion of two nuclei that creates a larger nucleus with a lower atomic number than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together.[47] It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means that fusion processes producing nuclei with atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.[42]

Electron cloud

A potential well, showing, according to classical mechanics, the minimum energy V(x) needed to reach each position x. Classically, a particle with energy E is constrained to a range of positions between x₁ and x₂.

The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave.
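Before continuing with the electron cloud, the mass–energy relation E = mc² above can be illustrated with a worked example. The sketch below estimates the binding energy of the deuteron (a proton plus a neutron); the particle masses in atomic mass units and the 931.494 MeV-per-u conversion are standard rounded values used here for illustration, not figures taken from this article.

```python
# Illustrative application of E = mc²: binding energy of the deuteron (proton + neutron).
# Masses in unified atomic mass units (u); 1 u of mass corresponds to about 931.494 MeV of energy.

PROTON_U   = 1.007276   # proton mass
NEUTRON_U  = 1.008665   # neutron mass
DEUTERON_U = 2.013553   # deuteron mass
MEV_PER_U  = 931.494    # energy equivalent of 1 u

mass_defect_u = PROTON_U + NEUTRON_U - DEUTERON_U   # mass "lost" when the two nucleons bind
binding_energy_mev = mass_defect_u * MEV_PER_U       # E = (Δm) c², expressed in MeV

print(f"mass defect   : {mass_defect_u:.6f} u")
print(f"binding energy: {binding_energy_mev:.2f} MeV")
```

The result, roughly 2.2 MeV, matches the figure quoted later in this article for splitting a deuterium nucleus.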
The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.[48] Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form.[49] Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.[50] Wave functions of the first five atomic orbitals. The three 2p orbitals each display a single angular node that has an orientation and a minimum at the center. How atoms are constructed from electron orbitals and link to the periodic table Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.[49] The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom,[51] compared to 2.23 million eV for splitting a deuterium nucleus.[52] Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.[53] Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms admit exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form,[54] also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single proton element hydrogen up to the 118-proton element oganesson.[55] All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible.[56][57] About 339 nuclides occur naturally on Earth,[58] of which 254 (about 75%) have not been observed to decay, and are referred to as "stable isotopes". However, only 90 of these nuclides are stable to all decay, even in theory. Another 164 (bringing the total to 254) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to be present from the birth of the solar system. 
This collection of 288 nuclides are known as primordial nuclides. Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).[59][note 1] For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.[60][page needed] Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confers unusual stability on the nuclide. Of the 254 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.[60][page needed] The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), also called dalton (Da). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10−27 kg.[61] Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 u.[62] The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 u). However, this number will not be exactly an integer except in the case of carbon-12 (see below).[63] The heaviest stable atom is lead-208,[56] with a mass of 207.9766521 u.[64] As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×1023). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 u, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.[61] Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. 
This is a measure of the distance out to which the electron cloud extends from the nucleus.[65] However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin.[66] On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right).[67] Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.[68] When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites.[69][70] Significant ellipsoidal deformations have been shown to occur for sulfur ions[71] and chalcogen ions[72] in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope. However, individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width.[73] A single drop of water contains about 2 sextillion (2×1021) atoms of oxygen, and twice the number of hydrogen atoms.[74] A single carat diamond with a mass of 2×10−4 kg contains about 10 sextillion (1022) atoms of carbon.[note 2] If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.[75] Radioactive decay This diagram shows the half-life (T½) of various isotopes with Z protons and N neutrons. Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm.[76] The most common forms of radioactive decay are:[77][78] • Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number. • Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. 
A neutrino is still emitted in this process, and a proton changes to a neutron. • Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay. Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.[76] Magnetic moment Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.[79] The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field. However, the most dominant contribution comes from electron spin. Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with even number of electrons.[80] In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.[80][81] The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. 
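As a brief aside on the decay law described above, the half-life rule (half of a sample surviving per half-life) corresponds to a simple exponential, sketched below; the choice of time unit is arbitrary and purely illustrative.

```python
# Illustrative sketch of the exponential decay law described above:
# after each half-life, the surviving fraction of a radioactive isotope halves.

def surviving_fraction(elapsed_time: float, half_life: float) -> float:
    """Fraction of the original nuclei remaining after `elapsed_time` (same units as `half_life`)."""
    return 0.5 ** (elapsed_time / half_life)

half_life = 1.0  # one arbitrary unit of time
for n in range(5):
    t = n * half_life
    print(f"after {n} half-lives: {surviving_fraction(t, half_life):.1%} remaining")
# 100% at the start, 50% after one half-life, 25% after two, and so on, as stated in the text.
```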
Normally nuclei with spin are aligned in random directions because of thermal equilibrium. However, for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.[82][83]

Energy levels

Electron energy levels (not to scale), sufficient for the ground states of atoms up to cadmium (5s² 4d¹⁰). Even the top of the diagram lies below the energy of an unbound electron.

The potential energy of an electron in an atom is negative: it is most negative inside the nucleus and approaches zero, roughly in inverse proportion to the distance, as the distance from the nucleus goes to infinity. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see the time-independent Schrödinger equation for the theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, while an electron transition to a higher level results in an excited state.[84] The electron's energy rises as the principal quantum number n increases, because the (average) distance to the nucleus increases. The dependence of the energy on the orbital (azimuthal) quantum number is caused not by the electrostatic potential of the nucleus but by the interaction between electrons.

For an electron to transition between two different states, e.g. from the ground state to the first excited state, it must absorb or emit a photon with an energy matching the difference in energy between those levels, as in the Bohr model; this difference can be calculated precisely from the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum.[85] Each element has a characteristic spectrum that can depend on the nuclear charge, the subshells filled by electrons, the electromagnetic interactions between the electrons, and other factors.[86]

An example of absorption lines in a spectrum.

When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.[87]

Close examination of the spectral lines reveals that some display a fine structure splitting.
This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron.[88] When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines.[89] The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.[90] If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.[91] Valence and bonding behavior Valency is the combining power of an element. It is equal to number of hydrogen atoms that atom can combine or displace in forming compounds.[92] The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells.[93] For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one-electron more than a filled shell, and others that are one-electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.[94] The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases.[95][96] Snapshots illustrating the formation of a Bose–Einstein condensate Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas.[97] Within a state, a material can also exist in different allotropes. 
An example of this is solid carbon, which can exist as graphite or diamond.[98] Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale.[99][100] This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.[101] Scanning tunneling microscope image showing the individual atoms making up this gold (100) surface. The surface atoms deviate from the bulk crystal structure and arrange in columns several atoms wide with pits between them (See surface reconstruction). The scanning tunneling microscope is a device for viewing surfaces at the atomic level. It uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would normally be insurmountable. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current. The calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible. It confirms that for low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels—the Fermi level local density of states.[102][103] An atom can be ionized by removing one of its electrons. The electric charge causes the trajectory of an atom to bend when it passes through a magnetic field. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.[104] A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.[105] Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. 
These colors can be replicated using a gas-discharge lamp containing the same element.[106] Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.[107]

Origin and current state

Atoms form about 4% of the total energy density of the observable Universe, with an average density of about 0.25 atoms/m³.[108] Within a galaxy such as the Milky Way, atoms have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³.[109] The Sun is believed to be inside the Local Bubble, a region of highly ionized gas, so the density in the solar neighborhood is only about 10³ atoms/m³.[110] Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's atoms are concentrated inside stars, and the total mass of atoms forms about 10% of the mass of the galaxy.[111] (The remainder of the mass is an unknown dark matter.)[112]

Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. Within about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.[113][114][115]

The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.[116]

Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron;[117] see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation.[118] This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.[119] Elements such as lead formed largely through the radioactive decay of heavier elements.[120]

Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating.[121][122] Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.[123] There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay.
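The radiometric dating mentioned above works by inverting the exponential decay law: from the surviving fraction of a long-lived parent isotope, the elapsed time follows. A minimal sketch, assuming an illustrative half-life of 1.25 billion years (roughly that of potassium-40) and a hypothetical measured surviving fraction:

```python
# A minimal sketch of radiometric dating, under stated assumptions.
# Assumptions (illustrative, not from the article): a parent isotope with a half-life of
# 1.25 billion years and a hypothetical measured surviving fraction of 8%.
import math

HALF_LIFE_YR = 1.25e9        # assumed half-life of the parent isotope, in years
surviving_fraction = 0.08    # hypothetical fraction of the original parent nuclei still present

# Invert N/N0 = (1/2)**(t / T_half) to solve for the elapsed time t.
age_yr = HALF_LIFE_YR * math.log2(1.0 / surviving_fraction)
print(f"inferred age: {age_yr / 1e9:.2f} billion years")
```

With these made-up numbers the inferred age comes out near 4.6 billion years; real dating methods use measured parent-to-daughter ratios and carefully determined half-lives.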
Carbon-14 is continuously generated by cosmic rays in the atmosphere.[124] Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions.[125][126] Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth.[127][128] Transuranic elements have radioactive lifetimes shorter than the current age of the Earth[129] and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust.[121] Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.[130] The Earth contains approximately 1.33×1050 atoms.[131] Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals.[132][133] This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.[134] Rare and theoretical forms Superheavy elements While isotopes with atomic numbers higher than lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103. These superheavy elements may have a nucleus that is relatively stable against radioactive decay.[135] The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons.[136] Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature.[137][138] However, in 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.[139][140] Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics.[141][142][143] See also 1. ^ For more recent updates see Interactive Chart of Nuclides (Brookhaven National Laboratory). 2. ^ A carat is 200 milligrams. By definition, carbon-12 has 0.012 kg per mole. The Avogadro constant defines 6×1023 atoms per mole. 1. ^ Pullman, Bernard (1998). The Atom in the History of Human Thought. Oxford, England: Oxford University Press. pp. 31–33. ISBN 0-19-515040-6.  2. ^ Cohen, Henri; Lefebvre, Claire, eds. (2017). Handbook of Categorization in Cognitive Science (Second ed.). Amsterdam, The Netherlands: Elsevier. p. 427. ISBN 978-0-08-101107-2.  3. ^ Andrew G. 
van Melsen (1952). From Atomos to Atom. Mineola, NY: Dover Publications. ISBN 0-486-49584-1.  4. ^ Dalton, John. "On the Absorption of Gases by Water and Other Liquids", in Memoirs of the Literary and Philosophical Society of Manchester. 1803. Retrieved on August 29, 2007. 5. ^ Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German). 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Retrieved 4 February 2007.  6. ^ Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. pp. 1–7. ISBN 0-19-851567-7. OCLC 48753074.  7. ^ Lee, Y.K.; Hoon, K. (1995). "Brownian Motion". Imperial College. Archived from the original on 18 December 2007. Retrieved 18 December 2007.  8. ^ Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour. 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746.  10. ^ "J.J. Thomson". Nobel Foundation. 1906. Retrieved 20 December 2007.  11. ^ Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom" (PDF). Philosophical Magazine. 21 (125): 669–688. doi:10.1080/14786440508637080.  12. ^ "The Gold Foil Experiment". myweb.usf.edu. Archived from the original on 19 November 2016.  13. ^ "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Retrieved 18 January 2008.  14. ^ Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society. A. 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057Freely accessible.  15. ^ Stern, David P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Retrieved 20 December 2007.  16. ^ Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. Retrieved 16 February 2008.  17. ^ a b c Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. pp. 228–230. ISBN 0-19-851971-0.  18. ^ Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society. 38 (4): 762–786. doi:10.1021/ja02261a002.  19. ^ Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press US. pp. 205–226. ISBN 0-19-530573-6.  21. ^ Scully, Marlan O.; Lamb, Willis E.; Barut, Asim (1987). "On the theory of the Stern-Gerlach apparatus". Foundations of Physics. 17 (6): 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788.  22. ^ TED-Ed (16 September 2014). "What is the Heisenberg Uncertainty Principle? - Chad Orzel" – via YouTube.  23. ^ Brown, Kevin (2007). "The Hydrogen Atom". MathPages. Retrieved 21 December 2007.  24. ^ Harrison, David M. (2000). "The Development of Quantum Mechanics". University of Toronto. Archived from the original on 25 December 2007. Retrieved 21 December 2007.  25. ^ Aston, Francis W. (1920). "The constitution of atmospheric neon". Philosophical Magazine. 39 (6): 449–455. doi:10.1080/14786440408636058.  26. ^ Chadwick, James (12 December 1935). "Nobel Lecture: The Neutron and Its Properties". Nobel Foundation. Retrieved 21 December 2007.  27. ^ Bowden, Mary Ellen (1997). "Otto Hahn, Lise Meitner, and Fritz Strassmann". Chemical achievers : the human face of the chemical sciences. Philadelphia, PA: Chemical Heritage Foundation. pp. 76–80, 125. ISBN 9780941901123.  28. 
^ "Otto Hahn, Lise Meitner, and Fritz Strassmann". Science History Institute. Retrieved 20 March 2018.  29. ^ Meitner, Lise; Frisch, Otto Robert (1939). "Disintegration of uranium by neutrons: a new type of nuclear reaction". Nature. 143 (3615): 239–240. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.  30. ^ Schroeder, M. "Lise Meitner – Zur 125. Wiederkehr Ihres Geburtstages" (in German). Archived from the original on 19 July 2011. Retrieved 4 June 2009.  31. ^ Crawford, E.; Sime, Ruth Lewin; Walker, Mark (1997). "A Nobel tale of postwar injustice". Physics Today. 50 (9): 26–32. Bibcode:1997PhT....50i..26C. doi:10.1063/1.881933.  32. ^ Kullander, Sven (28 August 2001). "Accelerators and Nobel Laureates". Nobel Foundation. Retrieved 31 January 2008.  33. ^ "The Nobel Prize in Physics 1990". Nobel Foundation. 17 October 1990. Retrieved 31 January 2008.  34. ^ Demtröder, Wolfgang (2002). Atoms, Molecules and Photons: An Introduction to Atomic- Molecular- and Quantum Physics (1st ed.). Springer. pp. 39–42. ISBN 3-540-20631-0. OCLC 181435713.  35. ^ Woan, Graham (2000). The Cambridge Handbook of Physics. Cambridge University Press. p. 8. ISBN 0-521-57507-9. OCLC 224032426.  36. ^ MacGregor, Malcolm H. (1992). The Enigmatic Electron. Oxford University Press. pp. 33–37. ISBN 0-19-521833-7. OCLC 223372888.  37. ^ a b Particle Data Group (2002). "The Particle Adventure". Lawrence Berkeley Laboratory. Archived from the original on 4 January 2007. Retrieved 3 January 2007.  38. ^ a b Schombert, James (18 April 2006). "Elementary Particles". University of Oregon. Retrieved 3 January 2007.  39. ^ Jevremovic, Tatjana (2005). Nuclear Principles in Engineering. Springer. p. 63. ISBN 0-387-23284-2. OCLC 228384008.  40. ^ Pfeffer, Jeremy I.; Nir, Shlomo (2000). Modern Physics: An Introductory Text. Imperial College Press. pp. 330–336. ISBN 1-86094-250-4. OCLC 45900880.  41. ^ Wenner, Jennifer M. (10 October 2007). "How Does Radioactive Decay Work?". Carleton College. Retrieved 9 January 2008.  42. ^ a b c Raymond, David (7 April 2006). "Nuclear Binding Energies". New Mexico Tech. Archived from the original on 1 December 2002. Retrieved 3 January 2007.  43. ^ Mihos, Chris (23 July 2002). "Overcoming the Coulomb Barrier". Case Western Reserve University. Retrieved 13 February 2008.  44. ^ Staff (30 March 2007). "ABC's of Nuclear Science". Lawrence Berkeley National Laboratory. Archived from the original on 5 December 2006. Retrieved 3 January 2007.  45. ^ Makhijani, Arjun; Saleska, Scott (2 March 2001). "Basics of Nuclear Physics and Fission". Institute for Energy and Environmental Research. Archived from the original on 16 January 2007. Retrieved 3 January 2007.  46. ^ Shultis, J. Kenneth; Faw, Richard E. (2002). Fundamentals of Nuclear Science and Engineering. CRC Press. pp. 10–17. ISBN 0-8247-0834-2. OCLC 123346507.  47. ^ Fewell, M. P. (1995). "The atomic nuclide with the highest mean binding energy". American Journal of Physics. 63 (7): 653–658. Bibcode:1995AmJPh..63..653F. doi:10.1119/1.17828.  48. ^ Mulliken, Robert S. (1967). "Spectroscopy, Molecular Orbitals, and Chemical Bonding". Science. 157 (3784): 13–24. Bibcode:1967Sci...157...13M. doi:10.1126/science.157.3784.13. PMID 5338306.  49. ^ a b Brucat, Philip J. (2008). "The Quantum Atom". University of Florida. Archived from the original on 7 December 2006. Retrieved 4 January 2007.  50. ^ Manthey, David (2001). "Atomic Orbitals". Orbital Central. Archived from the original on 10 January 2008. Retrieved 21 January 2008.  51. 
Intermolecular force

In physics, chemistry, and biology, intermolecular forces are forces that act between stable molecules or between functional groups of macromolecules. Intermolecular forces include momentary attractions between molecules, diatomic free elements, and individual atoms. These forces, most notably London dispersion forces, dipole-dipole interactions and hydrogen bonding, are significantly weaker than either ionic or covalent bonding, but still have a noticeable chemical effect (see hydrogen bonding in water). Intermolecular forces are due to differences in charge density in molecules.

London dispersion forces (instantaneous dipole / induced dipole)

The London dispersion force (one of the three types of van der Waals forces) is caused by instantaneous changes in the dipole of atoms, arising from where the electrons happen to be located in the atoms' orbitals. The probability of finding an electron at a given position in an atom is governed by the Schrödinger equation. When an electron is on one side of the nucleus, this side becomes slightly negative (indicated by δ-); this in turn repels electrons in neighbouring atoms, making these regions slightly positive (δ+). This induced dipole causes a brief electrostatic attraction between the two molecules. The electron immediately moves to another point and the electrostatic attraction is broken. London dispersion forces are typically very weak (see the comparison below) because the attractions are broken so quickly and the charges involved are so small.[1]

Dipole-dipole interactions

Dipole-dipole interactions, also called Keesom interactions after Willem Hendrik Keesom, are caused by permanent dipoles in molecules. When one atom is covalently bonded to another with a significantly different electronegativity, the more electronegative atom draws the electrons in the bond nearer to itself, becoming slightly negative; conversely, the other atom becomes slightly positive. Electrostatic forces are generated between the opposing charges, and the molecules align themselves to increase the attraction (reducing potential energy). An example of dipole-dipole interactions can be seen in hydrogen chloride; this is not an example of hydrogen bonding (see below) because the chlorine atom is not electronegative enough. Note that the dipole-dipole interaction between two atoms is almost always zero, because atoms rarely carry a permanent dipole (see atomic dipoles). Often, molecules can have dipoles within them but no overall dipole moment. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out, as in molecules such as tetrachloromethane.

Hydrogen bonding

Hydrogen bonds are a stronger form of dipole-dipole interaction, caused by highly electronegative atoms. They only occur between hydrogen and oxygen, fluorine or nitrogen,[2] and are the strongest intermolecular force. The high electronegativities of F, O and N create highly polar bonds with hydrogen, which leads to strong bonding between hydrogen atoms on one molecule and the lone pairs of F, O or N atoms on adjacent molecules. The high boiling point of water is an effect of the extensive hydrogen bonding between the molecules. For quite some time it was believed that hydrogen bonding required an explanation that was different from the other intermolecular interactions.
However, reliable computer calculations that became possible during the 1980s have shown that only the intermolecular effects listed above play a role, with the dipole-dipole interaction being particularly important. Since these effects account completely for the bonding in small dimers like the water dimer, for which highly accurate calculations are feasible, it is now generally believed that no other bonding effects are operative. Hydrogen bonds are found throughout nature. In water the dynamics of these bonds produce unique properties essential to all known life-forms. Hydrogen bonds between hydrogen atoms and nitrogen atoms of adjacent DNA base pairs generate intermolecular forces that strengthen the binding between the strands of the molecule. Hydrophobic effects between the double-stranded DNA and the surrounding nucleoplasm prevail in sustaining the double-helix structure of DNA.

Relative strength of forces

Approximate dissociation energies (kcal/mol):[3][4]
• Covalent: about 400
• Hydrogen bonds: 12-16
• Dipole-dipole: 0.5-2.0
• London (van der Waals) forces: <1

Note: this comparison is only approximate; the actual relative strengths will vary depending on the molecules involved.

1. "London Dispersion Forces". Retrieved 20 September 2009.
2. Hydrogen Bonding.
3. Volland, Walt. "Intermolecular Forces". Retrieved 20 September 2009.
4. Ege, Seyhan. Organic Chemistry: Structure and Reactivity, pp. 30-33, 67.
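As a rough numerical footnote to the comparison above (a minimal sketch added here, not part of the article; the C6 value below is only an assumed order of magnitude for small molecules), all three van der Waals contributions share the same 1/r^6 distance dependence, so doubling the separation weakens the attraction by a factor of 64:

def vdw_energy(C6, r):
    # generic attractive van der Waals term V(r) = -C6 / r**6
    return -C6 / r**6

C6 = 1e-77                    # J m^6, assumed order of magnitude
for r in (0.3e-9, 0.6e-9):    # separations in metres
    print(f"r = {r:.1e} m   V = {vdw_energy(C6, r):.2e} J")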
Nonlinear Liouville Equation and Information Soliton

Journal of Modern Physics, Vol. 6, No. 14 (2015), Article ID 61305, 12 pages

Bi Qiao, Department of Physics, Science School, Wuhan University of Technology, Wuhan, China

Copyright © 2015 by author and Scientific Research Publishing Inc.

Received 21 September 2015; accepted 17 November 2015; published 20 November 2015

In this work, some types of nonlinear Liouville equation (NLE) and nonlinear master equation (NME) are studied. We find that the nonlinear terms in the equations can resist the damping of the system state, so that an information solitonic structure appears. Furthermore, the power appearing in the nonlinear term does not restrict the solution. This characteristic offers a possibility to construct complicated information solitons from simple solutions, which allows one to solve complicated NLEs or NMEs. The results obtained in this work may provide an innovative channel for quantum information transmission over long distances against dissipation and decoherence, and also open a constructive way to resist the age decay of a system by designing an adjusted field interacting nonlinearly with the system.

Quantum Information Density, Master Equation, Nonlinearity

1. Introduction

It is well known that the nonlinear Schrödinger equation (NSE) provides solitonic solutions, which have led to many applications such as optical soliton communications; however, it is not clear what the solution of the corresponding nonlinear Liouville equation (NLE) is, or what its physical meaning is. This issue is interesting because solitonic information communication needs stable, low-dissipation channels to transmit or receive signals against dissipation and decoherence. Hence understanding the corresponding NLE, with a solitonic structure of the density operator, may allow us to devise methods to control dissipation and decoherence in information transmission. On the other hand, although quantum information theory, in studying the transmission and processing of quantum states and the entanglement of states for quantum computation, quantum cryptography or quantum teleportation, has made great progress [1]-[6], efficient proposals for controlling decoherence and dissipation of quantum information states are still strongly needed; indeed, one of the major obstacles today to realizing real quantum information devices or networks is decoherence. In investigating this issue, an interesting problem arises: what is the basic equation for quantum information? In previous works [7] [8] we have argued that the Liouville equation still holds for quantum information density (QID). In this way, a density operator can be considered as a minimum unit of QID [9]-[15]. This reveals an essential informational character of the density operator as a sort of information density. We then proposed a nonlinear master equation and studied its asymptotic solution as a sort of information soliton with a locally invariant structure after sufficiently long times [16]. In this work, we introduce a type of NLE, based on the NSE, which has a solitonic solution in the sense of information density. Then, as an extension, a type of nonlinear master equation (NME) is also studied. We show that the nonlinear term in the NME resists the damping of the system state, so that an asymptotic solution appears.
Hence the study of the long-time evolution of these structures may shed more light on the soliton dynamics of information density, as the asymptotic configuration can be determined by using the technique of integration within an ordered product together with methods for solving the nonlinear differential equation. Let us first derive a NLE from a NSE.

2. Nonlinear Liouville Equation

Actually, if a NSE is defined as then a relevant NLE can be introduced by where b is defined as a complex (conjugate) coupling number, and notice that here is introduced a sort of "direct" product which is not the usual scalar product, namely which is only true if the "direct" product of the pure state (such as a solitonic state) is defined as Thus an interesting solitonic solution of the density operator, which can be defined as an information soliton based on the informational meaning of the density operator, can be constructed by where may be a solitonic wave function [17] expressed as where the amplitude is a function of time and space. Then any NLE with a higher power of nonlinearity can be solved by using the above formulation; e.g. if a NLE is expressed by where the corresponding NSE is then, by multiplying into both sides of Equation (8), one can get which allows Equation (10) to become Now let, one can obtain a new formal NLE described by whose solution is like Equation (6), i.e. Therefore a general information soliton for Equation (8) can be constructed by Furthermore, if one assumes then, using the same approach as above, one can obtain which corresponds to a general NLE with a nonlinear term, where defining Finally a general information solitonic solution can be achieved by where notice again that is a soliton of the wave function. One significant feature of this construction is that one can design an information soliton that satisfies particular requirements. For example, if one needs an information soliton which has to be represented as a quantum entropy operator, then by a suitable choice one can achieve which is just a sort of entropy operator.

3. Generalization

The above-mentioned NLE motivates us to introduce a general nonlinear Liouville equation (GNLE) as where can be chosen as an analytic function, whose physical meaning can be understood as a type of quantum information density [16]. Then, using the Baker-Hausdorff formula and the Magnus lemma [18], a derivative of QID (or rate of negative entropy density) can be deduced as where, and notice that is the entropy operator. So if is chosen as an analytic function such that the rate of the entropy operator is zero (which may correspond to a certain equilibrium or stable state of the system), then a NLE can be achieved by where. This is an extension of the above NLE. Notice that here the density operator is not restricted to the pure state of a solitonic wave. For example, if, then However, if or, then a GNLE can be given by This type of GNLE can be used to design a channel of quantum information against dissipation and decoherence. More concretely, for an open system coupled to a complicated environment, if transfers to certain terms which are related to a master equation, then Equations (25)-(27) become a sort of nonlinear master equations. The asymptotic solutions of these equations are just the type of information solitons previously mentioned. The author wants to emphasize here that the GNLE can be seen as an extension of the original NLE, since there is no need to restrict to the definition of the "direct" product in the process of deducing the GNLE.
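As one concrete instance of the kind of solitonic wave function used above (a sketch added for illustration, under the assumption that the underlying NSE takes the standard one-dimensional focusing form; it is not claimed to be the paper's own equation), consider
$i\partial_t\psi + \frac{1}{2}\partial_x^2\psi + \vert\psi\vert^2\psi = 0$,
which admits the solitonic solution
$\psi_s(x,t) = A\,\mathrm{sech}\big(A(x-vt)\big)\exp\big(i(vx + \tfrac{1}{2}(A^2-v^2)t)\big)$,
so that the associated pure-state density operator $\rho_s=\vert\psi_s\rangle\langle\psi_s\vert$ carries the localized, shape-preserving information density profile $\vert\psi_s(x,t)\vert^2=A^2\,\mathrm{sech}^2\big(A(x-vt)\big)$.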
Therefore the GNLE is also true for a density operator defined with the usual scalar product, such as where is an integral measure.

4. Information Solitons

For instance, in the amplitude damping model, a master equation with a nonlinear term can be described by where is a damping constant and a, are the annihilation and creation operators, respectively. The nonlinear term can be supposed to originate from a certain nonlinear interaction between the system and the environment, such as a potential appearing in a driven system. Then, using the coherent and entangled state as a basis developed by Fan Hongyi [19], one can get which approximately produces where is defined as the creation (annihilation) operator acting on the thermostats, e.g. as developed by Takahashi and Umezawa [20] [21], and is defined by so that transformations , , and acting on the state are provoked, which guarantees commutation with the right thermostats to produce the above Equation (30), namely To solve this nonlinear Equation (30), acting from the left on both sides of Equation (31), one gets This yields so that Thus a formal solution of this equation is given by where corresponds to time. By using a thermo-coherent and entangled state (notice) acting from the left on Equation (37) one gets where the relations, and have been used. Therefore one gets an integral form of the normal product expressed as Then, in terms of the integral formula based on the Technique of Integration Within an Ordered Product of Operators (IWOP) [19], one obtains The integral term in Equation (41) is calculated as where the relevant parameters are introduced by So this allows one to get Then, by means of Equation (36), Equation (44) gives an asymptotic solution for: Furthermore, if the nonlinear term, then the above asymptotic solution can be extended by where notice that the operator A commutes with a sort of particle number operator, so both share common eigenvectors; this physical condition [19] guarantees convergence of in the formal solution (38). Hence, from Equation (26) one can get where defining which gives an equation group described by and the relevant sum gives, Then, based on the above formalism, a series of asymptotic solutions can be represented as which enables one to attain Therefore, when time the corresponding asymptotic solution of Equation (26) is achieved by In the same way, from Equation (52) one can also obtain so that one obtains then an asymptotic solution of Equation (27) is also given by The above asymptotic configurations can also be defined as a sort of information soliton in the sense that: (1) they are locally invariant structures when enough time has elapsed, and (2) these structures exist in the form of a density operator with the meaning of QID. The study of the long-time evolution of these structures may shed more light on the soliton dynamics of information density, as the asymptotic configuration appears through a kind of nonlinear self-interaction of the information density reduced from the environment. In fact, a type of asymptotic configuration for the (non-)Markovian system with linear interaction between system and environment has been found, and a wide class of nonlinear corrections to evolution equations has also been found, leading to superluminal effects [22]-[24]. However, we want to emphasize that the information solitons obtained here are asymptotically stable structures of the density operator (or QID) and have a quantum information density meaning. The basic principle is based on the micro-representation of the second law of thermodynamics, i.e.
since QID is just the negative entropy density, the physical meaning of the Liouville equation for QID allows us logically (also taking into account the dissipative structure theory of Prigogine [25] [26]) to introduce a micro-representation of the second law of thermodynamics by which naturally gives a general Liouville equation for the open system constructed by where is assumed to be introduced by the difference of QID between the system and the environment. More generally, this difference is supposed to be produced by a potential of information density, which drives the system to evolve along the direction described by the second law of thermodynamics. So, from the point of view of the second law of thermodynamics, we can introduce a difference (or gradient) of QID to allow where is assumed to be introduced by a difference (or gradient) of QID. This expression looks like a microscopic representation of the second law of thermodynamics: when the QID in two coupled systems is not equal, there exists a difference (or gradient) of QID, which spontaneously drives transfer from the higher QID to the lower QID until both arrive at equilibrium.

5. Application

Generally, the above results suggest a constructive mechanism for realizing a sort of self-organization from the non-equilibrium process of the open system. This may be useful, on physical grounds, for constructing stable information transmission among remote spacecraft or for prolonging the life span of systems against age decay. For example, if the original organization is unfortunately described by a master equation of the amplitude damping model, then, by the well-known property of the master equation, the final asymptotic solution tends to the decayed zero state, So, to prolong the life of the organization, one can use a driving field F which satisfies to enable the original master equation to become where is a coupling number introducing a nonlinear term,. Consequently, an asymptotic solution for Equation (65) is obtained by where notice again This shows that the state of the system remains invariant by means of the interaction with the external field, which allows the original system to prolong its life span without decaying. More generally along this line, a master equation of the amplitude damping model describing various decay processes, after including a nonlinear term, can be given by where is a damping constant and a, are the creation and annihilation operators, respectively. Then, using the same approach as in the formalism above, one can let so that one obtains Thus a formal solution of this equation is given by where corresponds to time. By using a coherent and entangled state acting from the left on both sides of Equation (37), the equation becomes then one gets an integral form of the normal product as where the integral part is given by Then an asymptotic solution of Equation (68) can be achieved which gives an integral representation as This allows one to find that the power of is independent of the limitation of (75), so one can use this characteristic to find an asymptotic solution for the extended equation where, if giving, then one has hence one obtains All of these are processed in the open system through a sort of nonlinear interaction expressed as a functional of. Therefore the above information soliton can also be used in a quantum information channel to carry information over long distances for a long time.
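For orientation (an assumption added here about the linear baseline being modified, not an equation taken from the paper), the standard amplitude-damping master equation for a harmonic mode with Hamiltonian $H$ and damping constant $\gamma>0$ reads
$\frac{d\rho}{dt} = -i[H,\rho] + \gamma\big(a\rho a^{\dagger} - \tfrac{1}{2}a^{\dagger}a\rho - \tfrac{1}{2}\rho a^{\dagger}a\big)$,
and without any nonlinear correction every initial state relaxes to the vacuum, $\rho(t)\rightarrow\vert 0\rangle\langle 0\vert$ as $t\rightarrow\infty$, which is the decayed zero state referred to above; the nonlinear term is introduced precisely to counteract this decay.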
The transmission of signals in this channel is not only resistant to dissipation but also possibly decoherence-free after long times. In this sense, the previously proposed integration of squeezed coherent states, e.g. Equation (79), may provide an efficient way to transmit information by preserving coherence against decay. The advantage of this sort of transmission, compared with optical soliton transmission in optical fiber, is that this transmission channel may be applied in free space. For instance, it may be used in communication among ground bases, satellites and spacecraft in remote space. However, the author wants to stress again that the relevant carriers in the channel are information densities whose solitonic structure originates from the nonlinear self-interaction term or. These terms transform the original decaying result, , into an asymptotic configuration, as expressed by Equation (75). It is this nonlinearity that eliminates the dissipation by producing a sort of self-organization, i.e. an information soliton.

6. Conclusions and Remarks

The nonlinear kinetic equations, including the NLE and NME, based on the micro-representation of the second law of thermodynamics have been studied. The nonlinear terms in the equations can resist the damping of the system state, so that an information solitonic structure appears, while the power of the nonlinear term does not restrict the solution, which permits one to construct more complicated information-soliton structures as solutions of complicated equations. The information soliton can be understood as a locally invariant structure of information density, a self-organization created by nonlinearity. So these results can provide an innovative channel for quantum information transmission over long distances against decoherence or damping, and also offer a constructive way to prolong the life span of the original system by designing an adjusted field interacting nonlinearly with the system.

Cite this paper: Bi Qiao (2015) Nonlinear Liouville Equation and Information Soliton. Journal of Modern Physics, 6, 2058-2069. doi: 10.4236/jmp.2015.614212

1. Simon, D.R. (1997) SIAM Journal on Computing, 26, 1474.
2. Shor, P.W. (1997) SIAM Journal on Computing, 26, 1484.
3. Wiesner, S. (1983) Sigact News, 15, 78.
4. Bennett, C., Bessette, F., Brassard, G., Salvail, L. and Smolin, J. (1992) Journal of Cryptology, 5, 3.
5. Shor, P.W. (1994) Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, 124.
6. Deutsch, D. (1985) Proceedings of the Royal Society of London A, 425, 73.
7. Qiao, B., Song, K.Z. and Ruda, H.E. (2013) Journal of Modern Physics, 4, 49-55.
8. Qiao, B., Fang, J.Q. and Ruda, H.E. (2012) Journal of Modern Physics, 3, 1070-1080.
9. Grover, L. (1995) A Fast Quantum Mechanical Algorithm for Data Base Search. Proceedings of the 28th Annual ACM Symposium on the Theory of Computation, ACM Press, New York, 212.
10. Tomonaga, S. (1946) Progress of Theoretical Physics, 1, 27.
11. Breuer, H.P. (2002) The Theory of Quantum Open Systems. Oxford University Press, New York.
12. Schweber, S.S. (1948) An Introduction to Relativistic Quantum Field Theory. Row, Peterson and Company, Evanston.
13. Schwinger, J. (1948) Physical Review, 74, 1439-1461.
14. Prugovecki, E. (1995) Principles of Quantum General Relativity. World Scientific Publishing, Co. Pte. Ltd., Singapore.
15. Giulini, D., Kiefer, C. and Lämmerzahl, C. (2003) Quantum Gravity: From Theory to Experimental Search.
Springer-Verlag, New York.
16. Qiao, B. and Song, K.Z. (2013) Journal of Modern Physics, 4, 923-929.
17. Pang, X.-F. and Feng, Y.-P. (2005) Quantum Mechanics in Nonlinear Systems. World Scientific Publishing, Co. Pte. Ltd., Singapore.
18. Eu, B.C. (1998) Nonequilibrium Statistical Mechanics (Ensemble Method). Kluwer Academic Publishers, Dordrecht, Boston and London.
19. Fan, H.Y. (2010) Quantum Decoherent Entangled States in Open System. Shanghai Jiao Tong University Press, Shanghai. (In Chinese)
20. Takahashi, Y. and Umezawa, H. (1957) Collective Phenomena, 2, 55.
21. Umezawa, H. (1993) Advanced Field Theory: Micro, Macro, and Thermal Physics. AIP, New York.
22. Chruscinski, D., Kossakowski, A. and Pascazio, S. (2010) Physical Review A, 81, Article ID: 032101.
23. Brown, D.W. and Lindenberg, K. (1998) Physica D, 113, 267-275.
24. Gisin, N. and Rigo, M. (1995) Journal of Physics A, 28, 7375-7390.
25. Prigogine, I. and Nicolis, G. (1977) Self-Organization in Non-Equilibrium Systems. Wiley, New York.
26. Prigogine, I. (1980) From Being to Becoming. W.H. Freeman, San Francisco.
Sunday 30 August 2015

Quantum Information Can Be Lost

Stephen Hawking claimed in a lecture at KTH in Stockholm last week (watch the lecture here and check this announcement) that he had solved the "black hole information problem":
• "The information is not stored in the interior of the black hole as one might expect, but in its boundary — the event horizon," he said. Working with Cambridge Professor Malcolm Perry (who spoke afterward) and Harvard Professor Andrew Strominger, Hawking formulated the idea that information is stored in the form of what are known as supertranslations.
The problem arises because quantum mechanics is viewed as reversible, since the mathematical equations supposedly describing atomic physics are formally time reversible: a solution proceeding forward in time from an initial to a final state can also be viewed as a solution backward in time from the final state to the initial state. The information encoded in the initial state can thus, according to this formal argument, be recovered and so is never lost. On the other hand a black hole is supposed to swallow and completely destroy anything it reaches, and thus it appears that a black hole violates the postulated time reversibility of quantum mechanics and non-destruction of information. Hawking's solution to this apparent paradox is to claim that after all a black hole does not destroy information completely but "stores it on the boundary of the event horizon". Hawking thus "solves" the paradox by maintaining non-destruction of information and giving up complete black hole destruction of information.
The question Hawking seeks to answer is the same as the fundamental problem of classical physics which triggered the development of modern physics in the late 19th century with Boltzmann's "proof" of the 2nd law of thermodynamics: Newton's equations underlying thermodynamics are formally reversible, but the 2nd law of thermodynamics states that real physics is not always reversible: information can be inevitably lost as a system evolves towards thermodynamical equilibrium and then cannot be recovered. Time has a direction forward and cannot be reversed. Boltzmann's "proof" was based on an argument that things that do happen do so because they are "more probable" than things which do not happen. This deep insight opened the new physics of statistical mechanics, from which quantum mechanics borrowed its statistical interpretation.
I have presented a different new resolution of the apparent paradox of irreversible macrophysics based on reversible microphysics, by viewing physics as analog computation with finite precision, on both macro- and microscales. A spin-off of this idea is a new resolution of d'Alembert's paradox and a new theory of flight to be published shortly. The basic idea here is thus to replace the formal infinite precision of both classical and quantum mechanics, which leads to paradoxes without satisfactory solution, with realistic finite precision, which allows the paradoxes to be resolved in a natural way without resort to unphysical statistics. See the listed categories for lots of information about this novel idea. The result is that reversible infinite-precision quantum mechanics is fiction without physical realization, and that irreversible finite-precision quantum mechanics can be real physics; in this world of real physics, information is irreversibly lost all the time, even in the atomic world. Hawking's resolution is not convincing.
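To make the finite-precision point concrete, here is a small numerical sketch (my own illustration, added here): a formally reversible chaotic map, iterated forward and then inverted exactly the same number of times in double-precision arithmetic, does not return to its initial state, because the rounding error committed at each step is amplified by the subsequent iterations.

import numpy as np

def forward(p):
    # Arnold cat map on the unit square: formally reversible (determinant 1)
    x, y = p
    return np.array([(2*x + y) % 1.0, (x + y) % 1.0])

def backward(p):
    # exact algebraic inverse of the map above
    x, y = p
    return np.array([(x - y) % 1.0, (-x + 2*y) % 1.0])

p0 = np.array([0.3, 0.7])
p = p0.copy()
N = 60
for _ in range(N):
    p = forward(p)
for _ in range(N):
    p = backward(p)
print("initial state:", p0)
print("after forward and backward iteration:", p)  # no longer equal to p0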
Here is the key observation explaining the occurrence of irreversibility in formally reversible systems modeled by formally non-dissipative partial differential equations, such as the Euler equations for inviscid macroscopic fluid flow and the Schrödinger equations for atomic physics: smooth solutions are strong solutions in the sense of satisfying the equations pointwise with vanishing residual, and as such are non-dissipative and reversible. But smooth solutions may break down into weak turbulent solutions, which are solutions only in a weak approximate sense with pointwise large residuals, and these solutions are dissipative and thus irreversible. An atom can thus remain in a stable ground state over time, corresponding to a smooth reversible non-dissipative solution, while an atom in an excited state may return to the ground state as a non-smooth solution under dissipation of energy in an irreversible process.

Friday 28 August 2015

Finite Element Quantum Mechanics 4: Spherically Symmetric Model

I have tested the new atomic model described in a previous post in a setting of spherical symmetry, with electrons filling a sequence of non-overlapping spherical shells around a kernel. The electrons in each shell are homogenized to spherical symmetry, which reduces the model to a 1d free boundary problem with the free boundary represented by the inter-shell spherical surfaces, adjusted so that the combined wave function is continuous along with its derivatives across the boundary. The repulsion energy is computed so as to take into account that electrons are not subject to self-repulsion, by a corresponding reduction of the repulsion within a shell. The remarkable feature of this atomic model, in the form of a 1d free boundary problem with continuity as free boundary condition and readily computable on a laptop, is that computed ground state energies turn out to be surprisingly accurate (within 1%) for all atoms including ions (I have so far tested up to atomic number 54 and am now testing excited states). Recall that the wave function $\psi (x,t)$ solving the free boundary problem has the form
• $\psi (x,t) =\psi_1(x,t)+\psi_2(x,t)+...+\psi_S(x,t)$         (1)
with $(x,t)$ a common space-time coordinate, where $S$ is the number of shells and $\psi_j(x,t)$, with support in shell $j$, is the homogenized wave function for the electrons in shell $j$, with $\int\vert\psi_j(x,t)\vert^2\, dx$ equal to the number of electrons in shell $j$. Note that the free boundary condition expresses continuity of charge distribution across inter-shell boundaries, which appears natural. Note that the model can be used in time dependent form and then allows direct computation of vibrational frequencies, which is what can be observed. Altogether, the model in spherically symmetric form indicates that the model captures essential features of the dynamics of an atom, and thus can be useful in particular for studies of atoms subject to exterior forcing. I have also tested the model without spherical homogenisation for atoms with up to 10 electrons, with similar results. In this case the free boundary separates different electrons (and not just shells of electrons), with again a continuous charge distribution across the corresponding free boundary. In this model electronic wave functions share a common space variable and have disjoint supports, and can be given a classical direct physical interpretation as charge distribution.
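To indicate the kind of laptop-scale radial computation involved, here is a drastically simplified one-electron sketch (my own illustration, not the author's code, with no shells and no free boundary): a finite-difference eigenvalue problem for the reduced radial wave function of a hydrogen-like atom, whose lowest eigenvalue should reproduce the exact ground state energy $-Z^2/2$ in Hartree units.

import numpy as np

Z = 1.0      # kernel charge (hydrogen)
R = 30.0     # outer radius of the computational domain (Bohr radii)
n = 2000     # number of interior grid points
h = R / (n + 1)
r = h * np.arange(1, n + 1)   # interior nodes; boundary conditions u(0) = u(R) = 0

# Finite-difference Hamiltonian for the reduced radial function u(r) = r*psi(r):
# -1/2 u'' - (Z/r) u = E u
main = 1.0 / h**2 - Z / r               # diagonal: kinetic part plus Coulomb attraction
off = -0.5 / h**2 * np.ones(n - 1)      # off-diagonal kinetic coupling
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]
print("computed ground state energy:", E0, " exact:", -Z**2 / 2)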
There is no need of any Pauli exclusion principle: electrons simply occupy different regions of space and do not overlap, just as in a classical multi-species continuum model. This is to be compared with standard quantum mechanics based on multidimensional wave functions $\psi (x_1,x_2,...,x_N,t)$ typically appearing as linear combinations of products of electronic wave functions
• $\psi (x_1,x_2,...,x_N,t)=\psi_1(x_1,t)\times \psi_2(x_2,t)\times ...\times\psi_N(x_N,t)$        (2)
for an atom with $N$ electrons, each electronic wave function $\psi_j(x_j,t)$ being globally defined with its own independent space coordinate $x_j$. Such multidimensional wave functions can only be given a statistical interpretation, which lacks direct physical meaning. In addition, Pauli's exclusion principle must be invoked, and it should be remembered that Pauli himself did not like his principle, since it was introduced ad hoc without any physical motivation, to save quantum mechanics from collapse from the very start... More precisely, while (1) is perfectly reasonable from a classical continuum physics point of view, and as such is computable and useful, linear combinations of (2) represent a monstrosity which is both uncomputable and unphysical and thus dangerous, but nevertheless is supposed to represent the greatest achievement of the human intellect of all time in the form of the so-called modern physics of quantum mechanics. How long will it take for reason and rationality to return to physics after the dark age of modern physics initiated in 1900, when Planck "in a moment of despair" resorted to an ad hoc hypothesis of a smallest quantum of energy in order to avoid the "ultra-violet catastrophe" of radiation viewed to be impossible to avoid in classical continuum physics? But with physics as finite precision computation, which I am exploring, there is no catastrophe of any sort and Planck's sacrifice of rationality serves no purpose.

PS Here are the details of the spherically symmetric model, starting from the following new formulation of a Schrödinger equation for an atom with $N$ electrons organised in spherically symmetric form into $S$ shells: Find a wave function as a sum of $N$ electronic complex-valued wave functions $\psi_j(x,t)$, depending on a common 3d space coordinate $x\in R^3$ and time coordinate $t$, with non-overlapping spatial supports $\Omega_1(t)$,...,$\Omega_N(t)$ filling 3d space, satisfying
• $i\dot\psi (x,t) + H\psi (x,t) = 0$ for all $(x,t)$,       (1)
where the (normalised) Hamiltonian $H$ involves the potentials
• $V_k(x)=\int\frac{\vert\psi_k(y,t)\vert^2}{2\vert x-y\vert}dy$ for $x\in R^3$,
together with the normalisation
• $\int_{\Omega_j}\vert\psi_j(x,t)\vert^2\, dx =1$ for all $t$, for $j=1,..,N$.
Assume the electrons fill a sequence of shells $S_k$ for $k=1,...,S$ centered at the atom kernel, with $N_k$ electrons on shell $S_k$ and
• $\int_{S_k}\vert\psi (x,t)\vert^2\, dx =N_k$ for all $t$, for $k=1,..,S$,
• $\sum_k^S N_k = N$.
The total wave function $\psi (x,t)$ is thus assumed to be continuously differentiable, and the electronic potential of the Hamiltonian acting in $\Omega_j(t)$ is given as the attractive kernel potential together with the repulsive potential resulting from the combined electronic charge distributions $\vert\psi_k\vert^2$ for $k\neq j$, with total electronic repulsion energy
• $\sum_{k\neq j}\int\frac{\vert\psi_j(x,t)\vert^2\vert\psi_k(y,t)\vert^2}{2\vert x-y\vert}dxdy=\sum_{k\neq j}\int V_k(x)\vert\psi_j(x,t)\vert^2\, dx$.
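As a quick check of the last identity (added here for completeness, assuming the definition of $V_k$ above), inserting that definition and integrating first over $y$ gives
$\sum_{k\neq j}\int\int\frac{\vert\psi_j(x,t)\vert^2\vert\psi_k(y,t)\vert^2}{2\vert x-y\vert}\,dy\,dx
=\sum_{k\neq j}\int\Big(\int\frac{\vert\psi_k(y,t)\vert^2}{2\vert x-y\vert}\,dy\Big)\vert\psi_j(x,t)\vert^2\,dx
=\sum_{k\neq j}\int V_k(x)\vert\psi_j(x,t)\vert^2\,dx.$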
Assume now that the electronic repulsion energy is approximately determined by homogenising the $N_k$ electronic wave functions $\psi_j$ in each shell $S_k$ into a spherically symmetric "electron cloud" $\Psi_k(x)$ with corresponding potential $W_k(y)$ given by
• $W_k(y)=\int_{\vert x\vert <\vert y\vert}R_k\frac{\vert\Psi_k(x)\vert ^2}{\vert y\vert}\, dx+\int_{\vert x\vert >\vert y\vert}R_k\frac{\vert\Psi_k(x)\vert ^2}{\vert x\vert}\, dx$,
where $R_k(x)=\frac{N_k-1}{N_k}$ for $x\in S_k$ is a reduction factor reflecting the absence of self-repulsion of each electron (and $R_k=1$ otherwise): of the $N_k$ electrons in shell $S_k$, only $N_k-1$ electrons contribute to the value in shell $S_k$ of the potential from the electrons in shell $S_k$. We here use the fact that the potential $W(x)$ of a uniform charge distribution of total charge $Q$ on a spherical surface $\{y:\vert y\vert =r\}$ of radius $r$ is equal to $Q/\vert x\vert$ for $\vert x\vert >r$ and $Q/r$ for $\vert x\vert <r$. Our model then has spherical symmetry and is a 1d free boundary problem in the radius $r=\vert x\vert$, with the free boundary represented by the radii of the shells, and the corresponding Hamiltonian is defined by the electronic potentials computed by spherical homogenisation in each shell. The free boundary is determined so that the combined wave function $\psi (x,t)$ is continuously differentiable across the free boundary.

Thursday 27 August 2015

Finite Element Quantum Mechanics 3: Explaining the Periodicity of the Periodic Table

According to Eric Scerri, the periodic table is not well explained by quantum mechanics, contrary to common textbook propaganda, not even the most basic aspect of the periodic table, namely its periodicity:
• Pauli's explanation for the closing of electron shells is rightly regarded as the high point in the old quantum theory. Many chemistry textbooks take Pauli's introduction of the fourth quantum number, later associated with spin angular momentum, as the foundation of the modern periodic table. Combining this two-valued quantum number with the earlier three quantum numbers and the numerical relationships between them allow one to infer that successive electron shells should contain 2, 8, 18, or $2n^2$ electrons in general, where n denotes the shell number.
• This explanation may rightly be regarded as being deductive in the sense that it flows directly from the old quantum theory's view of quantum numbers, Pauli's additional postulate of a fourth quantum number, and the fact that no two electrons may share the same four quantum numbers (Pauli's exclusion principle).
• However, Pauli's Nobel Prize-winning work did not provide a solution to the question which I shall call the "closing of the periods"—that is why the periods end, in the sense of achieving a full-shell configuration, at atomic numbers 2, 10, 18, 36, 54, and so forth. This is a separate question from the closing of the shells. For example, if the shells were to fill sequentially, Pauli's scheme would predict that the second period should end with element number 28 or nickel, which of course it does not. Now, this feature is important in chemical education since it implies that quantum mechanics cannot strictly predict where chemical properties recur in the periodic table. It would seem that quantum mechanics does not fully explain the single most important aspect of the periodic table as far as general chemistry is concerned.
• The discrepancy between the two sequences of numbers representing the closing of shells and the closing of periods occurs, as is well known, due to the fact that the shells are not sequentially filled. Instead, the sequence of filling follows the so-called Madelung rule, whereby the lowest sum of the first two quantum numbers, n + l, is preferentially occupied. As the eminent quantum chemist Löwdin (among others) has pointed out, this filling order has never been derived from quantum mechanics.
On the other hand, in the new approach to atomic physics I am exploring, the periodicity directly connects to a basic partitioning or packing problem, namely how to subdivide the surface of a sphere into equal parts, which gives the sequence $2n^2$ by dividing first into two half spheres and then subdividing each half spherical surface into $n\times n$ pieces, in a way similar to dividing a square surface into $n\times n$ square pieces. With increasing shell radius an increasing number of electrons, each occupying a certain surface area (scaling with the inverse of the kernel charge), can be contained in a shell. In this setting a "full shell" can contain 2, 8, 18, 32, ... electrons, and the observed periodicity 2, 8, 8, 18, 18, 32, 32, with each period ended by a noble gas with atomic numbers 2 (He), 10 (Neon), 18 (Argon), 36 (Krypton), 54 (Xenon), 86 (Radon), 118 (Ununoctium, unknown), with a certain repetition of shell numbers, can be seen as a direct consequence of such a full shell structure, if shells are allowed to be repeated when the radius of a shell is not yet large enough to house a full shell of the next dignity. Textbook quantum mechanics thus does not explain the periodicity of the periodic table, while the new approach I am pursuing may well do so in a very natural way. Think of that.

Tuesday 25 August 2015

Ulf Danielsson on Climate Threat, Hawking and Black Holes

The string physicist Ulf Danielsson has started a blog, with Stephen Hawking's visit to KTH and lecture at Stockholm Waterfront as the initial draw. Ulf likes to write about black holes, which he seems to believe have real physical existence as "singularities" of solutions to Einstein's equations. Ulf also seems to believe in climate alarmism as preached by the IPCC:
• When it comes to human-generated climate impact the main conclusion is clear: it is there, and the risk that it will have significant consequences for human civilisation if nothing is done is imminent. The latest IPCC report makes it impossible to draw any other general conclusion.
We sceptics who have examined the science behind the IPCC's climate alarmism know that Ulf has been completely misled on this question. The question is whether the same holds for black holes. Even if one can find singularities in solutions to Einstein's equations, which in itself can be debated since these equations are all but impossible to solve, does that mean that these singularities also have physical reality? Even if there is mass at the centres of galaxies that cannot be seen, which observations of galactic dynamics seem to indicate, that does not necessarily mean that this invisible mass consists of black holes. Could it be that the IPCC's report (dangerously thick, according to Ulf) constitutes a black hole out of which no true information can radiate?

Finite Element Quantum Mechanics 2: Questions without Answers

Hans Primas formulates, in Chemistry, Quantum Mechanics and Reductionism, the following basic questions left without answers in textbook quantum mechanics:
1. Do isolated quantal systems exist at all?
2.
Is the Pauli Principle a universal and inviolable fact of nature?
3. Does quantum mechanics apply to large molecular systems?
4. Is the superposition principle universally valid?
5. Why do so many stationary states not exist?
6. Why are macroscopic bodies localised?
7. Why does quantum mechanics fail to account for chemical systematics?
8. Why can approximations be better than the exact solutions?
9. Why is the Born-Oppenheimer picture so successful?
10. Is temperature an observable?
Despite now almost 100 years of giant efforts by giant scientific minds, no satisfactory answers to these basic questions have been delivered. There is no reason to believe that 100 more years will give any answers, and the question must be posed whether there is something fundamentally wrong with textbook quantum mechanics which prevents progress. Yes, I think so: the origin of all these questions without answers is the starting point of textbook quantum mechanics with a wave function
• $\psi (x_1,...,x_N,t)$ depending on $3N$ space coordinates and time,
• satisfying a linear scalar wave equation in $3N$ space dimensions and time,
for an atom with $N$ electrons as particles, with $\vert\psi (x_1,...,x_N,t)\vert^2$ interpreted as the probability that particle $j$ is at position $x_j$ at time $t$ for $j=1,...,N$. Such a wave function is both uncomputable (because of the many spatial dimensions) and unphysical (because an atom is not an insurance company computing probabilities, any more than it is an individual person paying for insurance). The fact that textbook quantum mechanics, still after almost a hundred years, is stuck with such a hopeless scientific misconception is nothing less than a scientific tragedy. Hans Primas gives the following devastating verdict:
• There is no general agreement about the referent (physical meaning) of pioneer (textbook) quantum mechanics.
• Pioneer quantum mechanics has an agonising shortcoming: it cannot describe classical systems.
• From a fundamental point of view the only adequate interpretation of quantum mechanics is an ontic (realistic) interpretation... Bohr's epistemic interpretation expresses merely states of knowledge and misses the point of genuine scientific inquiry... If we assume that pioneer quantum mechanics is a universal theory of molecular matter, then an ontic interpretation of this theory is impossible.
• The Bohr Copenhagen (textbook) interpretation is not acceptable as a fundamental theory of matter.
In other words, pioneer (textbook) quantum mechanics is a failed scientific project, and it is an open problem to find an ontic description of atomic physics by "genuine scientific inquiry", that is, in the spirit of the device of this blog, "by critical constructive inquiry towards understanding".

Finite Element Quantum Mechanics 1: Listening to Bohm

1. The world can be analysed into distinct elements.
If we here replace "arbitrarily high precision" and "exact" with "finite precision", the description 1-3 can be viewed as a description of
• the finite element method
• digital physics as digital computation with finite precision.
My long-term goal is to bring quantum mechanics into a paradigm of classical physics modified by finite precision computation, as a form of computational quantum mechanics, thus bridging the present immense gap between quantum and classical physics. This gap is described by Bohm as follows:
• The quantum properties of matter imply the indivisible unity of all interacting systems.
Thus we have contradicted 1 and 2 of the classical theory, since there exist on the quantum level neither well-defined elements nor well-defined dynamical variables describing the behaviour of these elements.
My idea is thus to slightly modify classical physics by replacing "arbitrarily high precision" with "finite precision" so as to encompass quantum mechanics, thus opening microscopic quantum mechanics to a machinery which has been so amazingly powerful in the form of finite element methods for macroscopic continuum physics, instead of throwing everything overboard and resorting to a game of roulette as in the textbook version of quantum mechanics which Bohm refers to. In particular, in this new form of computational quantum mechanics, an electron is viewed as an "element" or a "collection of elements", each element with a distinct non-overlapping spatial presence, with an interacting system of $N$ electrons described by a (complex-valued) wave function $\psi (x,t)$ depending on a 3d space coordinate $x$ and a time coordinate $t$ of the form
• $\psi (x,t) = \psi_1(x,t) + \psi_2(x,t)+...+\psi_N(x,t)$,                             (1)
where the electronic wave functions $\psi_j(x,t)$ for $j=1,...,N$ have disjoint supports together filling 3d space, indicating the individual presence of the electrons in space and time. The system wave function $\psi (x,t)$ is required to satisfy a Schrödinger wave equation including a Laplacian, asking the composite wave function $\psi (x,t)$ to be continuous along with its derivatives across inter-element boundaries. This is a free boundary problem in 3d space and time, and as such readily computable. I have with satisfaction observed that a spherically symmetric shell version of such a finite element model does predict ground state energies in close comparison to observation (within a percent) for all elements in the periodic table, and I will report these results shortly.
We may compare the wave function given by (1) with the wave function of textbook quantum mechanics as a linear combination of terms of the multiplicative form:
• $\psi (x_1,x_2,...x_N,t)=\psi_1(x_1,t)\times\psi_2(x_2,t)\times ...\times\psi_N(x_N,t)$,
depending on $N$ 3d space coordinates $x_1,x_2,...,x_N$ and time, where each factor $\psi_j(x_j,t)$ is part of a (statistical) description of the global particle presence of an electron labeled $j$, with $x_j$ ranging over all of 3d space. Such a wave function is uncomputable as the solution to a Schrödinger equation in $3N$ space coordinates, and thus has no scientific value. Nevertheless, this is the textbook foundation of quantum mechanics. Textbook quantum mechanics is thus based on a model which is uncomputable (and thus useless from a scientific point of view), but the model is not dismissed on these grounds. Instead it is claimed that the uncomputable model is always in exact agreement with all observations, according to tests of this form:
• If a computable approximate version of this model (such as Hartree-Fock with a specific suitably chosen set of electronic orbitals) happens to be in correspondence with observation (due to some unknown happy coincidence), then this is taken as evidence that the exact version is always correct.
• If a computable approximate version happens to disagree with observation, which is often the case, then the approximate version is dismissed but the exact model is kept; after all, an approximate model which is wrong (or too approximate) can surely be viewed as evidence that the exact model, being less approximate, must be more (or fully) correct, right?

PS The fact that the finite element method has been such a formidable success for macroscopic problems, as systems made up of very many small parts or elements, gives good hope that this method will be at least as useful for microscopic systems viewed as formed by fewer and possibly simpler (rather than more complex) elements. This fits into a perspective (opposite to the standard view) where microscopics turns out to be simpler than macroscopics, because macroscopics is built from microscopics: a DNA molecule is more complex than a carbon atom, and a human being more complex than an egg cell.

Saturday 15 August 2015

Popper vs Physics as Finite Precision Computation

1. Realism
2. Determinism          (A)
3. Objectivism.

1. Idealism
2. Indeterminism          (B)
3. Subjectivism.

• finite precision computation

Tuesday 4 August 2015

Dismal Result of KTH-Gate = Zero: Simulation Technology Is Shut Down

KTH-gate is the name of the action that KTH directed against my work, whereby my ebook Mathematical Simulation Technology (MST), intended for use in the new bachelor's programme in Simulation Technology and Virtual Design (STVD), was banned by KTH in the middle of an ongoing test course in the autumn of 2010 (for a full account of this drama, which has no counterpart in the academy of a democratic state, see here, here, here and here). The result of the censorship intervention was that the bachelor's programme was separated from the group of teachers who had initiated it with the intention of running it, and who had received KTH's support for doing so. Thus STVD started in the autumn of 2012 on a basis of old courses in numerical analysis, led by a different group of teachers in numerical analysis, without any marketing, and the result followed accordingly: zero interest, zero application pressure, zero admission grades, zero relevance = zero result. After two years KTH realized that it was completely pointless to run such a programme, and in the autumn of 2014 Leif Kari, head of the School of Engineering Sciences and chiefly responsible for the censoring of MST, took the entirely consistent decision to shut down STVD (or, in a euphemism, to let it lie "dormant") according to this public document (which the registrar kindly dug up, since neither Leif Kari nor anyone else involved has been willing to answer my repeated questions about the status of STVD). In this shocking report one can read:
• very low application pressure
• great difficulties for the students to cope with the studies
• prerequisites far too weak
• less than 20% meet the requirements for advancement.

KTH has thus succeeded in its intent to stop an all too promising initiative from an all too internationally strong group at KTH, while displaying complete incompetence at all levels. Through censorship KTH has thereby destroyed a potentially high value and replaced it with zero. Well done, according to KTH's president Peter Gudmundson, who actively took part in the book burning of 2010; when books are burned, only ashes remain.
Ironically, Leif Kari and the School of Engineering Sciences have not let themselves be discouraged by this dismal result, but are now actively working to upgrade the wrecked bachelor's programme in Simulation Technology into a new Master of Science in Engineering programme in Engineering Mathematics (Teknisk Matematik), according to this supplementary decision. The Faculty Council has, of course, not endorsed the establishment of this programme (see here, 13d), since the logic is missing: if KTH is not capable of running a bachelor's education in engineering mathematics/simulation technology, then KTH (as the country's foremost technical university) is even less capable of running a Master of Science in Engineering programme with the same orientation.

PS This is how the programme was described by KTH when it started in 2012:
• Simulation Technology and Virtual Design is a new programme at KTH developed to meet the increasing need for computer simulation.
• The education gives you career opportunities in many industries, from the manufacturing and process industries, the environment and energy sector, via computer games and animation, medicine and biotechnology, to the financial sector.
• You can, for example, work as a computational consultant, an expert in visualization and information graphics, or as a program designer.
• The new bachelor's programme builds on a very strong research and education environment in this field at KTH and is unique in Sweden.

Yes, it is truly unique with such mismanagement, despite (or perhaps because of) KTH's privileged position.
Charge and uncertainty

1. May 10, 2010 #1
I would like to know if the electric charge of a particle, like the electron, always has a definite value, or whether on the contrary the Heisenberg uncertainty principle should be applied to this quantity. Or, asking that in another way, what is the observable operator associated with the charge of a particle? Has the U(1) symmetry something to do with all that? Thanks in advance!

3. May 11, 2010 #2
Kaluza-Klein again. In KK theory the charge is the momentum in the fifth (compactified) dimension. Position in the fifth dimension is related to charge with uncertainty. The U(1) group has something to do with this. In KK it's related to the fifth dimension; in standard QED it is unmeasurable. You can't exactly measure the EM field - that is, you can always do a U(1) gauge transformation and you will get the same physical result. That means the exact "position" of the EM field in the U(1) group can't be measured.

4. May 11, 2010 #3
I'm not aware of any uncertainty principle applying to charge (in "conventional" theories anyway). Isn't charge governed by a superselection rule?

5. May 11, 2010 #4
Thanks a lot. This subject is clearer for me now. Just another question: what about mass and uncertainty? Can we apply the HUP to mass? If mass is the charge for the gravity field ...

6. May 12, 2010 #5
Please be aware that up to now we do not have a reasonable theory describing nature in terms of KK! KK is able to harmonize general relativity and classical electrodynamics, not more. No weak and strong force, no quantum gravity; KK as of today is just wishful thinking.

7. May 12, 2010 #6
You won't find an answer in quantum mechanics because there is no charge operator (e.g. for an electron in the Schrödinger equation). Charge is related to the charge density, which itself is nothing else but the (square of the) wave function. So normalizing the wave function means that for the single-particle Schrödinger equation you will always get "1e" for the charge.

You have to look at a theory which allows you to describe a charge operator and where you do not fix the charge of a quantum state in advance. In quantum field theory you can construct a charge operator and a corresponding conservation law. In QED this is related to the Gauss law of the U(1) symmetry, and due to consistency (again coming from U(1)) you always get a constraint like

Q|phys> = 0

That means all physical states must have zero charge, otherwise the theory is inconsistent! In QCD (or in general SU(N) gauge theories) the constraint is generalized to

Q^a|phys> = 0

where a labels the different charges. Note that the charge operators generate an su(n) algebra like

[Q^a, Q^b] = i f^{abc} Q^c

which is similar to the angular momentum algebra. So the condition Q^a|phys> = 0 means that all physical states are charge-singlet states. If you look at angular momentum you know that if the z-component is fixed, the x- and y-components are not. But that does not apply in our case, as in the singlet states all components vanish; this is OK. The singlet state is rather special as it is the only state which is a simultaneous eigenstate of all charge operators!

Now if you look at charges not related to local gauge symmetries there is no Gauss law and therefore no singlet condition. This applies e.g. to isospin.
For isospin exactly the same SU(2) symmetry applies as for conventional spin: if there is a proton, which means I_3|proton> = (+1/2)|proton>, where I_3 is the 3-component of the isospin, then the 1- and 2-components are undefined and you can derive an uncertainty relation for these components.

8. May 12, 2010 #7
Depends on what you mean by charge. The UP certainly applies to things like charged metallic islands where you have [Q,P] = ie (Q is the charge and P is the phase); devices like Cooper pair boxes, single-electron transistors etc. all "obey" this UP. Generally speaking, charge is the generalized momentum and the phase the generalized position for electronic systems (a trivial example of such a system would be an LC oscillator).

9. May 14, 2010 #8
Perhaps it helps to look at the uncertainty relation for the angular momentum. I start with

[tex]\Delta A \cdot \Delta B \ge \frac{1}{2} |\langle\psi|[A,B]|\psi\rangle|[/tex]

For the angular momentum and its eigenstates I get

[tex]\Delta L_x \cdot \Delta L_y \ge \frac{1}{2} |\langle\ell,m|L_z|\ell,m\rangle| = \frac{m}{2}[/tex]

10. May 15, 2010 #9
I'm not an expert, but I think that in the standard model, in the weak interaction, charge is uncertain due to CKM mixing. That is, weak interaction eigenstates are superpositions of the strong interaction eigenstates. The strong interaction eigenstates have definite charge (are eigenstates of charge), and the weak interaction eigenstates therefore have uncertain charge. Please correct me if I'm mistaken as this is relatively new to me.

11. May 15, 2010 #10
CKM mixing cannot mix conserved charges; that means it cannot mix states with different electric charge and it does not mix states (quarks) with different color (but that is never mentioned). CKM mixing can mix states with different masses.

12. May 15, 2010 #11
Ok, thanks for clearing that up.
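Side note appended to this excerpt (not one of the original posts): the angular-momentum uncertainty relation quoted in post #8 is easy to check numerically. The following is a minimal sketch in Python/NumPy for l = 1 with hbar = 1; the matrices and the helper name `spread` are illustrative choices only.

```python
import numpy as np

# Spin-1 (l = 1) angular momentum matrices in the |l, m> basis, with hbar = 1.
# Rows/columns are ordered as m = +1, 0, -1.
s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]], dtype=complex)
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def spread(op, psi):
    """Standard deviation of the observable `op` in the state `psi`."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(mean_sq - mean**2)

psi = np.array([1.0, 0.0, 0.0], dtype=complex)   # the eigenstate |l=1, m=1>

lhs = spread(Lx, psi) * spread(Ly, psi)
rhs = 0.5 * abs(np.vdot(psi, (Lx @ Ly - Ly @ Lx) @ psi))   # = |<Lz>|/2, since [Lx, Ly] = i Lz

print(f"dLx * dLy = {lhs:.3f}   >=   |<[Lx,Ly]>|/2 = {rhs:.3f}")   # 0.500 >= 0.500
```

For the eigenstate |1,1> the bound is saturated: both sides come out to 1/2, matching the m/2 quoted in the thread.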
Forming bubbles in liquid light

Filed under Science

Ángel Paredes Galán got a Ph.D. in Particle Physics from the University of Santiago de Compostela in 2004. After postdoctoral stays at École Polytechnique (France), University of Utrecht (the Netherlands) and University of Barcelona (Spain), he joined the Optics group at University of Vigo as a Ramon y Cajal fellow. David Feijoo Pérez is a predoctoral student at the Optics group in Universidade de Vigo. He studied Physics at this university, finishing the degree in 2011. In 2012 he finished the master's programme in Photonics and Laser Technologies. Humberto Michinel Álvarez holds a Ph.D. in Applied Physics (optics) from the University of Santiago de Compostela (1996). Full professor of optics at the University of Vigo in Ourense, Spain. Vice-president of the International Commission for Optics (ICO). Prof. Michinel is the director of the MSc program "Photonics and Laser Technologies".

For many centuries, the passage of light through matter was regarded as that of a wave in a fixed medium characterized by a refractive index (n), which may depend on the frequency. This description is consistent with plenty of phenomena, including diffraction, interference, refraction, dispersion or polarization, associated with distinguished names of the history of Physics such as Huygens, Newton or Fresnel.

After the advent of lasers in the early sixties, it was soon proven that intense light can alter the optical properties of a medium and thus affect its own propagation. This was the outset of the fascinating discipline of nonlinear optics in which, for instance, light can act as a lens for itself or can even generate new frequencies while propagating. These phenomena have found numerous practical applications and, nowadays, are essential in many photonic devices.

Depending on the material, the refractive index varies with light intensity in different forms. Taking this effect into account, the wave equation becomes nonlinear, typically leading to an enormous wealth of qualitatively distinct behaviors. We are interested here in the particular expression $n = n_0 + n_2 I - n_4 I^2$, where $n_0$, $n_2$, $n_4$ are positive constants and $I$ is the intensity of light (energy traversing the unit of area in the unit of time). After some manipulations, this setting leads to the so-called cubic-quintic nonlinear Schrödinger equation:

$i\,\partial_z \psi = -(\partial_x^2 + \partial_y^2)\psi - (\vert\psi\vert^2 - \vert\psi\vert^4)\,\psi$     (1)

Remarkably, solutions of this equation in certain regimes share many properties with usual liquids 1. We could speak of droplet formation, surface tension or capillarity, but maybe the simplest way to convince oneself is by watching a video in which the collision of a light droplet with a barrier is simulated by numerically solving equation (1). We plot the evolution of $\vert\psi\vert^2$, which is proportional to the intensity.

Different materials have nonlinear refractive indices of this sort, but typically accompanied by absorption coefficients which have hindered the clear observation of the mentioned properties. But recently, these difficulties have been overcome and the first neat realization of the liquid of light has been reported in 2, where a suitable coherent atomic medium was utilized to study the propagation of a laser beam. These results are an obvious motivation for further theoretical studies of equation (1).

Despite its limitations, analogies such as that of (1) with a liquid are certainly appealing and helpful in understanding complex phenomena. Moreover, they are useful to formulate new questions.
For instance, can bubbles exist in the liquid of light? If so, how can they be formed? This was the starting point of our recent work 3 which, incidentally, was chosen for the cover of its issue of Physical Review Letters.

The answer to the first question is yes. It comes as no surprise, since the result is similar to what happens with other nonlinear potentials in analogue Schrödinger equations. Those bubbles are only stable if they move in a certain range of velocities – by velocity here we mean displacement in the transverse direction, or angle of propagation. Their technical name is "rarefaction pulses". At lower velocities, one finds that the stable solutions are pairs of vortices rotating in opposite directions.

But a distinct feature of cubic-quintic media is that, apart from those dark spots, there are also "bright solitons", which are, say, liquid droplets of different sizes. If we make two of these droplets collide, it is conceivable that destructive interference (you cannot do this with water!) creates a void in the large one which may evolve into a stable moving bubble. This is confirmed by numerical simulation. Remarkably, when the dark blob reaches the other end, it is converted into a bright soliton again. You can watch the process in the following animation:

It is worth mentioning that bright-dark-bright conversions are not unusual in nonlinear equations, especially in one-dimensional situations, see e.g. 4. In contrast, we are dealing here with a two-dimensional case. The term "cavitation" is used for the formation of vapor cavities in liquids. Following the analogy, and since the coherence of the beams is essential for interference, the process just explained can be suitably tagged as "coherent cavitation".

In order to stress the importance of coherence and interference, notice that the collision develops diverse features depending on the relative phases of the light droplets. Those relative phases are the only difference between the previous simulation and either of the following two. In the first one, the droplets coalesce. In the second one, they bounce against each other and a faint bubble appears too. In fact, notice that these disparities can turn out to be useful if this set-up is realized in the lab. Apart from being a handy probe for the parameters defining the beams themselves, one can think for instance of using the light droplets as filters letting pass only particular velocities and phases.

Nonlinear systems are exciting and full of surprises. They are an alluring subject of research from both the theoretical and the experimental sides. The different processes found by numerical investigations in [3] and described above may eventually be instrumental in the control of light by light. And there is no doubt that there are still many questions waiting to be asked and answered.

1. H. Michinel, J. Campo-Táboas, R. García-Fernández, J. R. Salgueiro, and M. L. Quiroga-Teixeiro (2002) Liquid light condensates. Phys. Rev. E 65, 066604
2. Zhenkun Wu, Yiqi Zhang, Chenzhi Yuan, Feng Wen, Huaibin Zheng, Yanpeng Zhang, and Min Xiao (2013) Cubic-quintic condensate solitons in four-wave mixing. Phys. Rev. A 88, 063828
3. Paredes A., Feijoo D. & Michinel H. (2014) Coherent cavitation in the liquid of light. Physical Review Letters.
4. Julio Garralón, Francisco Rus, Francisco R. Villatoro (2013) Numerical interactions between compactons and kovatons of the Rosenau–Pikovsky K(cos) equation. Commun. Nonlinear Sci. Numer. Simulat. 18, 1576–1588
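As an aside to the article above (not part of the original text): equation (1) can be explored numerically with a standard split-step Fourier scheme. The sketch below, in Python/NumPy, propagates a single Gaussian beam under the cubic-quintic nonlinearity; the grid size, step size and initial amplitude are arbitrary illustrative choices, not the parameters used in the work discussed above.

```python
import numpy as np

# Split-step Fourier propagation of the cubic-quintic NLSE (equation (1)):
#   i dpsi/dz = -(d^2/dx^2 + d^2/dy^2) psi - (|psi|^2 - |psi|^4) psi
# All parameters below are illustrative choices only.

N, L = 256, 40.0                       # grid points per side and box size (dimensionless)
dz, steps = 0.01, 500                  # propagation step and number of steps
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx)
K2 = KX**2 + KY**2

psi = 1.2 * np.exp(-(X**2 + Y**2) / 8.0)   # initial Gaussian "droplet"

half_kinetic = np.exp(-1j * K2 * dz / 2)   # half step of i dpsi/dz = k^2 psi in Fourier space

for _ in range(steps):
    # half step of the linear (diffraction) part
    psi = np.fft.ifft2(half_kinetic * np.fft.fft2(psi))
    # full step of the nonlinear part; |psi| is constant during this pure phase rotation
    I = np.abs(psi)**2
    psi *= np.exp(1j * (I - I**2) * dz)
    # second half step of the linear part
    psi = np.fft.ifft2(half_kinetic * np.fft.fft2(psi))

print("peak intensity after propagation:", np.abs(psi).max()**2)
```

With absorbing boundaries, two initial droplets and a chosen relative phase, this is the kind of scheme one would use to set up collisions like the ones shown in the animations.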
Influence of Fiber Nonlinearity on the Capacity of Optical Channel Models

Licentiate thesis, 2017

The majority of today's global Internet traffic is conveyed through optical fibers. Ever-increasing data demands have pushed optical systems to evolve from using regenerators and direct-detection receivers to a coherent multi-wavelength network. Future services like cloud computing and virtual reality will demand more bandwidth, so much so that the so-called capacity crunch is anticipated to happen in the near future. Therefore, studying the capacity of the optical system is needed to better utilize the existing fiber network.

The capacity of the dispersive and nonlinear optical fiber described by the nonlinear Schrödinger equation is an open problem. There are a number of lower bounds on the capacity, mainly obtained based on the mismatched decoding principle or by analyzing simplified channels. These lower bounds either fall to zero at high powers or saturate. The question whether the fiber-optical capacity has the same behavior as the lower bounds at high power is still open, as the only known upper bound increases unboundedly with power.

In this thesis, we investigate the influence of the simplifying assumptions used in some optical channel models on the capacity. To do so, the capacities of three different memoryless simplified models of the fiber-optical channel are studied. The results show that in the high-power regime the capacities of these models grow with different pre-logs, which indicates the profound impact of the simplifying assumptions on the capacity of these channels. Next, we turn our attention to the demodulation process, which is usually done by matched filtering and sampling. It is shown that by deploying a proper demodulation scheme the performance of optical systems can be improved substantially. Specifically, a two-user simplified memoryless WDM network is studied, where the effects of nonlinear distortion are considered in the model. It is shown that, unlike with matched filtering and sampling, with the optimal demodulator the symbol error rate decreases to zero at high power.

Keywords: nonlinearity mitigation, fiber optics, information theory, channel capacity, achievable rate

Kamran Keykhosravi

K. Keykhosravi, G. Durisi, and E. Agrell, "Bounds on the Capacity of Memoryless Simplified Optical Channel Models,"
K. Keykhosravi, E. Agrell, and G. Durisi, "Demodulation and Detection Schemes for a Memoryless Optical WDM Channel,"

Chalmers tekniska högskola

Room EC, Hörsalsvägen 11, Campus Johanneberg. Opponent: Alex Alvarado, Eindhoven University of Technology, Netherlands.
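As an aside (not from the thesis): a "pre-log" is simply the high-power slope of the capacity plotted against log2 of the power. The toy sketch below, in Python, estimates that slope numerically for a complex AWGN channel (pre-log 1) and for a crude stand-in channel where only the amplitude carries information (pre-log 1/2). These toy channels are illustrative only; they are not the simplified fiber models analyzed in the thesis.

```python
import numpy as np

# Toy illustration of capacity "pre-logs": the high-power slope of capacity
# versus log2(P). Neither toy channel below is one of the thesis's models.

def capacity_awgn(P, N0=1.0):
    # Complex AWGN channel: C = log2(1 + P/N0), so the pre-log is 1.
    return np.log2(1 + P / N0)

def capacity_amplitude_only(P, N0=1.0):
    # Crude stand-in for a channel where only the amplitude (one real
    # degree of freedom) carries information: C ~ 0.5*log2(1 + P/N0),
    # so the pre-log is 1/2.
    return 0.5 * np.log2(1 + P / N0)

P1, P2 = 1e3, 1e6        # two high-power points (arbitrary units)
for name, cap in [("AWGN", capacity_awgn), ("amplitude-only", capacity_amplitude_only)]:
    prelog = (cap(P2) - cap(P1)) / (np.log2(P2) - np.log2(P1))
    print(f"{name:15s} estimated pre-log ~ {prelog:.2f}")
# Prints approximately 1.00 and 0.50.
```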
Quantum Monte Carlo

From Wikipedia, the free encyclopedia

Quantum Monte Carlo encompasses a large family of computational methods whose common aim is the study of complex quantum systems. One of the major goals of these approaches is to provide a reliable solution (or an accurate approximation) of the quantum many-body problem. The diverse flavors of quantum Monte Carlo approaches all share the common use of the Monte Carlo method to handle the multi-dimensional integrals that arise in the different formulations of the many-body problem.

The quantum Monte Carlo methods allow for a direct treatment and description of complex many-body effects encoded in the wave function, going beyond mean field theory and offering an exact solution of the many-body problem in some circumstances. In particular, there exist numerically exact and polynomially-scaling algorithms to exactly study static properties of boson systems without geometrical frustration. For fermions, there exist very good approximations to their static properties and numerically exact exponentially scaling quantum Monte Carlo algorithms, but none that are both.

In principle, any physical system can be described by the many-body Schrödinger equation as long as the constituent particles are not moving "too" fast; that is, they are not moving at a speed comparable to that of light, and relativistic effects can be neglected. This is true for a wide range of electronic problems in condensed matter physics, in Bose–Einstein condensates and superfluids such as liquid helium. The ability to solve the Schrödinger equation for a given system allows prediction of its behavior, with important applications ranging from materials science to complex biological systems.

The difficulty is however that solving the Schrödinger equation requires the knowledge of the many-body wave function in the many-body Hilbert space, which typically has an exponentially large size in the number of particles. Its solution for a reasonably large number of particles is therefore typically impossible, even for modern parallel computing technology in a reasonable amount of time. Traditionally, approximations for the many-body wave function as an antisymmetric function of one-body orbitals[1] have been used, in order to have a manageable treatment of the Schrödinger equation. This kind of formulation has however several drawbacks, either limiting the effect of quantum many-body correlations, as in the case of the Hartree–Fock (HF) approximation, or converging very slowly, as in configuration interaction applications in quantum chemistry.

Quantum Monte Carlo is a way to directly study the many-body problem and the many-body wave function beyond these approximations. The most advanced quantum Monte Carlo approaches provide an exact solution to the many-body problem for non-frustrated interacting boson systems, while providing an approximate, yet typically very accurate, description of interacting fermion systems. Most methods aim at computing the ground state wavefunction of the system, with the exception of path integral Monte Carlo and finite-temperature auxiliary field Monte Carlo, which calculate the density matrix. In addition to static properties, the time-dependent Schrödinger equation can also be solved, albeit only approximately, restricting the functional form of the time-evolved wave function, as done in the time-dependent variational Monte Carlo.
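As an illustration of the variational flavor of these methods (an addition to this excerpt, not part of the Wikipedia article): the sketch below runs a tiny variational Monte Carlo calculation for the 1D harmonic oscillator with trial wave function psi_alpha(x) = exp(-alpha x^2), sampling |psi_alpha|^2 with a Metropolis walk. With hbar = m = omega = 1 the local energy is E_L(x) = alpha + x^2(1/2 - 2 alpha^2); the estimated energy is minimized at alpha = 1/2, where it equals the exact ground-state energy 1/2. Step size and sample counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, alpha):
    # E_L(x) = alpha + x^2 (1/2 - 2 alpha^2) for psi_alpha(x) = exp(-alpha x^2),
    # with H = -1/2 d^2/dx^2 + x^2/2 (hbar = m = omega = 1).
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    """Metropolis sampling of |psi_alpha|^2 and averaging of the local energy."""
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies[n_steps // 10:])   # discard the first 10% as burn-in

for alpha in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"alpha = {alpha:.1f}  ->  <E> ~ {vmc_energy(alpha):.4f}")
# The estimate is lowest (about 0.5, the exact ground-state energy) at alpha = 0.5.
```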
From the probabilistic point of view, the computation of the top eigenvalues and the corresponding ground state eigenfunctions associated with the Schrödinger equation relies on the numerical solving of Feynman–Kac path integration problems.[2][3] The mathematical foundations of Feynman–Kac particle absorption models and their Sequential Monte Carlo and mean field interpretations are developed in [4][5][6][7][8].

There are several quantum Monte Carlo methods, each of which uses Monte Carlo in different ways to solve the many-body problem:

Quantum Monte Carlo methods

Zero-temperature (only ground state)
• Variational Monte Carlo: A good place to start; it is commonly used in many sorts of quantum problems.
• Diffusion Monte Carlo: The most common high-accuracy method for electrons (that is, chemical problems), since it comes quite close to the exact ground-state energy fairly efficiently. Also used for simulating the quantum behavior of atoms, etc.
• Reptation Monte Carlo: Recent zero-temperature method related to path integral Monte Carlo, with applications similar to diffusion Monte Carlo but with some different tradeoffs.
• Gaussian quantum Monte Carlo
• Path integral ground state: Mainly used for boson systems; for those it allows calculation of physical observables exactly, i.e. with arbitrary accuracy

Finite-temperature (thermodynamic)

Real-time dynamics (closed quantum systems)

References
1. ^ Functional form of the wave function
2. ^ Caffarel, Michel; Claverie, Pierre (1988). "Development of a pure diffusion quantum Monte Carlo method using a full generalized Feynman–Kac formula. I. Formalism". The Journal of Chemical Physics. 88 (2): 1088–1099. Bibcode:1988JChPh..88.1088C. doi:10.1063/1.454227. ISSN 0021-9606.
3. ^ Korzeniowski, A.; Fry, J. L.; Orr, D. E.; Fazleev, N. G. (August 10, 1992). "Feynman–Kac path-integral calculation of the ground-state energies of atoms". Physical Review Letters. 69 (6): 893–896. Bibcode:1992PhRvL..69..893K. doi:10.1103/PhysRevLett.69.893.
4. ^ "EUDML | Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman–Kac semigroups – P. Del Moral, L. Miclo". eudml.org. Retrieved 2015-06-11.
5. ^ Del Moral, Pierre; Doucet, Arnaud (January 1, 2004). "Particle Motions in Absorbing Medium with Hard and Soft Obstacles". Stochastic Analysis and Applications. 22 (5): 1175–1207. doi:10.1081/SAP-200026444. ISSN 0736-2994.
7. ^ Del Moral, Pierre (2004). Feynman–Kac formulae. Genealogical and interacting particle approximations. Springer. p. 575. Series: Probability and Applications.
8. ^ Del Moral, Pierre; Miclo, Laurent (2000). Branching and Interacting Particle Systems Approximations of Feynman–Kac Formulae with Applications to Non-Linear Filtering (PDF). Lecture Notes in Mathematics. 1729. pp. 1–145. doi:10.1007/bfb0103798.
9. ^ Rousseau, V. G. (20 May 2008). "Stochastic Green function algorithm". Physical Review E. 77: 056705. arXiv:0711.3839. Bibcode:2008PhRvE..77e6705R. doi:10.1103/physreve.77.056705. Retrieved 5 February 2015.
Archive for March, 2011 The Elusive Object 29 March 2011 Behind the curtain The Reformed Realist Some of Bernard d’Espagnat’s best and dearest friends might be realists. Chapter nine of his On Physics and Philosophy, entitled “Various Realist Attempts,” describes with a perceptible tinge of sorrow how the conventional realist’s goal seems doomed to failure. If not certainly doomed, they are at least misguided, he feels, no matter how much he sympathizes with the impulse to believe in a knowable physical reality beyond the appearances. These attempts have some difficult hurdles to jump. A successful theory should— 1. Make the same (or almost the same) predictions as conventional quantum mechanics 2. Respect the results of Aspect-type experiments and the Bell Theorem 3. Show that the interpretation is more than just a calculating convenience 4. Be more than just a reassuring linguistic reconfiguration, and 5. Keep its conceptual building blocks pretty faithful to its roots in realism. The last criterion isn’t absolutely necessary, but if the only way a realist theory can work is by defining common terms (such as particles) in curiously non-realist ways then the project seems a bit dubious. Add to that the requirement to respect the Bell Theorem and (more or less) match conventional quantum theory’s predictions, which mandate nonlocality if you want physical realism, and these efforts look increasingly futile. In greater detail… D’Espagnat’s Realism vs Near Realism D’Espagnat says he very much sympathizes with realists, and says his own views don’t depart too radically from theirs. His disagreement, he says, developed not on a priori grounds but after he pondered the evidence of physics. Proof vs Sentiment Physical realism is an unprovable metaphysical stance, one among many. But “nobody” believes the moon disappears when we don’t look at it, says d’Espagnat. Commonsense arguments even convinced Einstein. Giving Up Physical Realism vs Locality John Bell (of Bell’s Theorem fame) continued to believe in a physical reality even after his theorem and experimental data shook the foundations of physical realism. He could have given up the idea of a physical reality knowable in principle, but instead he chose to believe this reality is nonlocal. Description vs Synthesis D’Espagnat makes up “Jack,” a physicist who’s a hardline physical realist. Jack believes science has succeeded magnificently on so many levels. Theories aren’t just some synthesis of observations. They are more-or-less accurate descriptions of reality (as d’Espagnat calls it, “reality-per-se”). Senses vs Reality Philosophers like Hume would counter that our knowledge of reality depends on our senses, yet we have no guarantee our sensations correspond with reality. Jack might call this argument overly broad as it applies to any piece of knowledge, including our ordinary experiences that we could hardly doubt. Words vs Reality The sceptic might then say that the results of experiments are communicated by words, but how do we know these words correspond to the building blocks of reality? Again Jack points to everyday experience and the concepts we seem to know instinctively works: objects, their positions, their motions, and so on. The hardline realist says an experiment described using these simple concepts surely must say something true about physical reality. Strong vs Weak Objectivity Jack the hardline realist might then lament all those physicists who claim to be realists but use standard quantum mechanics. 
Don’t they realize this theory is only “weakly objective”? In other words, it describes observations but doesn’t claim to describe reality itself. Standard vs Broglie-Bohm Interpretations D’Espagnat says Jack would be further perplexed because the Broglie-Bohm interpretation offers predictions identical to the standard interpretation (in the non-relativistic domain) and claims to be an explanation. It doesn’t just predict observations. It also may offer a (partial) way out of the “and-or” problem with mixed quantum states. We’d like to show why the pointer dial doesn’t indicate multiple values at the same time. Standard vs Broglie-Bohm Predictions D’Espagnat notes that Broglie-Bohm’s predictions match the standard model’s. The good news is that Broglie-Bohm’s predictions aren’t wrong. The bad news is the standard model uses simpler mathematics and predicts so much more. Superficial Realism vs Nonlocal Results Though not a critical deficiency, it’s definitely odd that Broglie-Bohm starts off with concepts intuitively familiar to us such as corpuscles and trajectories but ends up predicting a nonlocal reality. This doesn’t mean the theory is wrong, but it does mean the realist’s agenda is somewhat frustrated. Real vs Abstract Particles Broglie-Bohm replaces boson particles with abstract quantities (fields or their Fourier components). Photons are only “appearances,” somewhat undermining the realist model. The jury’s still out on how to deal with fermions. Measured vs Secret Properties Broglie-Bohm says momentum is really the product of mass and velocity even if quantum measurements show something else (see chapter seven). Also in this model detectors are sometimes “fooled,” acting as if a particle hit them even when it didn’t. Finally, a “quantum potential,” which doesn’t vary by distance, means “free” particles don’t really travel in straight lines. So some aspects of reality remain experimentally out of reach, yielding only illusions, an odd position for a realist model to take. Realism vs Observer Choices Consider two entangled particles, one going left and one going right. The Broglie-Bohm model says in some set-ups you’ll consistently get the same result if you measure the left-moving particle first, and a different result if you measure the right-moving particle first. Since the particles are entangled, the first one you measure matches the result of the other one you measure. The problem is that this doesn’t sound like it describes the world “as it really is” but rather just our observations. Our choices as observers seem to affect what’s “really” going on. This does not fit in very well with the realist agenda. Relativity vs Observer Choices It gets worse. Depending on who’s checking, the “time order” of these measurements may differ if they’re “spatially separated” (that’s when you’d have to travel faster than the speed of light to get from one measurement to the other). Since the instruments are showing the same result to any observer, are they simultaneously telling the truth and lying? It appears you can choose a privileged space-time frame that somehow still matches the predictions of special relativity but is consistent with Broglie-Bohm too, but again we end up with all these illusory appearances and an explanation that can’t be verified (or at least distinguished from competing theories). Bohm #1 vs Bohm #2 D’Espagnat (in a footnote) says difficulties with the Broglie-Bohm model led David Bohm to devise his “implicit order” theory, which does not rely on corpuscles. 
The problem is that the “implicit” order of what’s really happening is separated from the “explicit” order of appearances, and it’s hard to turn that distinction into an “ontologically interpretable” theory. Standard vs Modal Interpretations Borrowing modal logic’s use of intrinsic probabilities, Bas van Fraassen initiated a different approach to realist quantum mechanics that led to various related interpretations. Wave Function vs Finer States Standard quantum mechanics says the wave function is the best description of a quantum system. “Modal” interpretations say sometimes there are “finer” states governed by hidden variables (d’Espagnat prefers to call them “supplementary”). Standard vs Intrinsic Probabilities In “modal” interpretations the wave function describes the probability of various measurements but not necessarily what is “really” happening. The use of supplementary variables rescues these interpretations from the problem of proper mixtures and ensembles (see chapter eight). A system is in state A or state B even before a measurement, even if the quantum state is A + B. Wave Function vs Value State A system’s wave function describes observational probabilities. In a “modal” interpretation the system’s “value state” uses supplementary variables to describe what’s “really” happening. Broglie-Bohm vs “Modal” Interpretations “Modal” interpretations are indeterminate and Broglie-Bohm is determinate, but they share the need for supplementary variables that are experimentally undetectable–and they produce predictions identical to the standard interpretation’s. These realist approaches also seem to violate special relativity. Since their predictions are consistent with the standard interpretation’s they end up being nonlocal, which special relativity isn’t really equipped to handle. Also, in some cases (say some authors) the “modal” interpretation implies the measurement dial will somehow show a value different from the predicted “observed” value. It’s as convoluted as the measurement issues in Broglie-Bohm (such as detectors’ getting false hits). Unlike Broglie-Bohm the “modal” interpretations also get into difficulties about properties of a system and its subsystems. A subsystem can have a property even if the system itself doesn’t. Language vs Ontology D’Espagnat wonders if the “modal” interpretations are basically just offering a different language convention. The terms make it sound like something is “really” going on, but this alleged reality is inaccessible to observers, and “modal” interpretations make the same predictions as the standard interpretation of quantum mechanics. Schrödinger vs Heisenberg Representations Yet another approach makes use of the Heisenberg representation. Its equations are supposedly more realism-friendly than Schrödinger’s wave function. Time-dependent vs Time-independent Equations In both representations dynamical quantities (position and velocity, for instance) are represented by “self-adjoint operators.” The Schrödinger wave function is time independent until a measurement is made. The wave function does double duty, describing states then knowledge. The Heisenberg representation does things differently. Its self-adjoint operators are time dependent–so maybe they describe “real” states that are evolving through time. Heisenberg Representation vs Contingent States The problem is that the self-adjoint operators in the Heisenberg representation, though designating dynamical quantities, refer to all possible values of those quantities. 
You have to specify initial values if you want the measurement to be a "mental registration" rather than a "creation" of those values. Just as bad, the best way to specify those initial conditions is by using the wave function. Heisenberg vs Schrödinger Operators D'Espagnat says that in the end the self-adjoint operator has too modest a scope in the Heisenberg representation. It does not label contingent states. In the Schrödinger representation there's the opposite problem. The self-adjoint operator's role there is too ambitious. It labels the initial state as it "really" is, which leads to the problems of the measurement collapse. Feynman's Reformulation vs Physical Realism D'Espagnat says high-energy physicists mostly see physical realism as self-evident. Richard Feynman's "fabricated ontology" greatly eases their calculations, and apparently eases many philosophical doubts too. Probabilities with Detectors vs without Detectors In standard quantum mechanics the probability amplitude indicates how likely one would find a particle (for instance) at a particular spot if there were a detector there. Feynman's leap was to interpret it as how likely a particle would "arrive" at a certain point–whether or not there was a detector there. Being vs Calculating So is this "arrival" (which means that it "is," however briefly, at that point) an ontological claim or is it just a calculating convenience? D'Espagnat says Feynman knew quite well the problems of interpreting quantum mechanics but was "absolutely reluctant" to talk about them. Since fringes in a double-slit experiment show up, clearly this way of speaking is just for predictive purposes. If a particle "really arrived" at one slit or the other there'd be no fringes on the detector screen. In fact, the older quantum field theory and the Feynman diagram approaches "are quite strictly equivalent." This means they both support the nonlocality hypothesis. Standard vs Non-Boolean Logic Quantum mechanics' formalism uses Hilbert space. This infinite-dimensional abstract space leads some to suggest a non-Boolean logic would rescue objectivist realism. Formalism vs Experimental Facts However, d'Espagnat says that this reformulation has no more ontological significance than Feynman's approach. Nonseparability and nonlocality remain as issues since these are experimental facts not dependent on the formalism. Using a kind of quantum logic can't on its own describe microsystems in realist terms. Standard vs Partial Logics Griffiths, Gell-Mann and Hartle, and Omnès have tried using "partial logics" and "decohering histories." D'Espagnat says that this approach (like the non-Boolean approach) reformulates quantum mechanics but doesn't change its predictions. The experimental facts remain a barrier to objectivist realism. Macroscopic Reality vs Microscopic Unreality Because of experimental results (such as Aspect's combined with the Bell inequalities) it's clear that the microscopic arena is not going to yield to some "strongly objective" form of realism. The challenge then becomes figuring out how "real" macroscopic entities could possibly be made up of "unreal" microscopic constituents. Existence vs Meaning One approach is to deflect the question. Decoherence describes a mechanism by which macroscopic objects have a certain (physical-looking) appearance—but not existence as such. Maybe we can create Dummett-like criteria (see chapter seven) for determining just the meaning ("signification") of statements about macroreality (but not microreality).
Entities vs Observability If you’re going to make meaningful statements about macroscopic reality then it would help if you could define macroscopic entities. This is surprisingly difficult. One attempt uses statistical mechanics’ concept of “irreversibility” because human observational skills are limited. D’Espagnat says this approach doesn’t necessarily sit well with a realist. After all, the general goal of realist approaches is to describe reality (to some degree of accuracy) through our own observations. Schrödinger’s Cat vs Laplace’s Demon Decoherence theory says that our inability to make precise measurements of complex systems creates the illusion of macroscopic reality. So what do we do about this limitation? We could imagine some version of Laplace’s demon who’s able to make precise measurements of all physical quantities in the universe. We could then try to determine if he sees Schrödinger’s cat as simultaneously dead or alive—or just one or the other, as humans do because of their limited observational acuity. This would tell us what’s “really” going on. But how powerful should this demon be? Let’s assume he can’t use an instrument made up of more atoms than the universe possesses. Some physicists then calculate that even Laplace’s demon couldn’t observe the complex quantum superpositions theoretically observable in macroscopic objects. The “meaningful” conclusion is that these complex quantities are “nonexistent” and therefore the Schrödinger cat problem disappears. Realism vs Human Decisions But can a supposed reality depend on the capabilities of an observer (human or otherwise)? Even more fundamentally, mathematical representations of quantum ensembles (see chapter eight) are compatible with an infinite number of physical representations. Why is just one representation chosen? In the end it seems this kind of realist argument ends up describing an empirical reality, not a meaningful approximation of an observer-independent reality. Linear vs Nonlinear Terms You can trace the “conceptual difficulties” of quantum mechanics back to the mathematical linearity of the formalism. Unsurprisingly, some realists might consider adding terms to make the mathematics nonlinear. These new terms have almost no effect on observational predictions but allow a profound conceptual leap when it comes to macroscopic objects. Their centre-of-mass wave function will now collapse frequently and spontaneously, so there’s no more “measurement collapse.” Relativity vs Nonlinear Realism Nonlocality is still an issue, even though we’re talking about faster-than-light “influences” instead of signalling. The realist might retort that standard quantum mechanics runs into the same problem, but d’Espagnat says it’s the demand for realism that prevents relativity and quantum mechanics from being compatible. Decoherence vs Nonlinear Realism Decoherence theory and approaches based on nonlinear terms are making essentially identical predictions. However, decoherence theory says macroscopic objects are just phenomena. We share this knowledge and call it “empirical reality.” Nonlinear realism believes these objects are “real.” D’Espagnat wonders why we even need nonlinear terms considering that according to conventional (that is, linear) quantum mechanics any macroscopic object with quantum features quickly goes through decoherence and ends up showing classical features. 
Appearance vs Reality So you don’t need nonlinear terms unless you want macroscopic objects not just to “appear” the way they do but also “really” to be like that. Verbalism vs Reality D’Espagnat is unimpressed by these ontological manoeuvres. He rhetorically asks if this is “some kind of a poor man’s metaphysics” amounting to little more than “pure verbalism.” Open Realism vs Commonsense Realism Yet D’Espagnat is not prepared to abandon realism altogether. He believes in a “veiled reality” that can be gently prodded through an approach he calls “open realism.” But for realism to be consistent with the results of quantum experiments the reality that’s allowed is far different from the “commonsense” reality of the man in the street, or even that of many hard-nosed physicists. Measuring the Decoherence 4 March 2011 Realistically Speaking Chapter eight of Bernard d’Espagnat’s On Physics and Philosophy is entitled, “Measurement and Decoherence, Universality Revisited.” In some ways it was a very dense and difficult chapter to read (and summarize). However, in the end the main points seemed pretty reasonably clear: 1. Quantum universalism and our perceptions of macroscopic reality at first appear to clash 2. A macroscopic object easily shifts between numerous and narrow energy bands under the slightest influence from their environment 3. Therefore it’s almost impossible to measure the exact quantum states of macroscopic objects 4. Our lack of knowledge about large-scale systems in “decoherent” states leads to the apparent stability of the macroscopic world 5. However, on the microscopic level a “realistic” interpretation of superpositions only works if a system includes unmeasurable components or we restrict what measurements we’ll make. There’s a lot of material in this chapter so one could easily come up with some other highlights. In any event, here are my impressions of the chapter in greater detail… Realist Statements vs Realist Philosophy Instead of saying “I see a rock on the path” one could say “I know if I looked on the path to see if I would get the impression of seeing a rock there, I would actually get that impression.” That would be cumbersome so we use “realistic” statements even if we don’t believe in hard-line realism. If we switch back to the microscopic realm realist-like statements might mislead. Macroscopic Realism vs Quantum Universalism If we assume quantum formalism is universal, then why don’t we see a rock in two places at the same time? Macroscopic realism says macroscopic objects have mind-independent forms located in mind-independent places. So even before we look at it, a measuring device’s pointer will point to one and only one part of the dial. A macroscopic state-vector therefore can’t be a quantum superposition A + B, and hence we can’t see a rock in two places at the same time. Schrödinger Equation vs Macroscopic Realism The problem is that the Schrödinger equation will often demand such a superposition. Realists respond by using something other than state-vectors to describe macroscopic objects. D’Espagnat says that he showed (in 1976) that such attempts will fail, and a somewhat more general proof was found by Bassi and Ghirardi (in 2000). Antirealism vs Macroscopic Realism A different approach is to follow Plato and Kant. The senses are unreliable and deceive us. There’s no distinction between Locke’s reliable “primary” qualities and the less reliable “secondary” qualities. The only thing certain are the quantum rules that predict our observations. 
All else is uncertain. Probability vs Determinism However, we don’t experience the world as a sequence of probabilistic predictions. We picture objects with definite forms, and we can predict the behaviour of these objects using classical laws that are deterministic. Textbook Realism vs Quantum Predictive Rules Part of the problem is that textbooks talk about the mathematics (including symbols for wave forms) as if they represent physical states that “exist” whether or not we’re taking a measurement. D’Espagnat notes the same old difficulties of realist interpretations will  then reappear. He says symbols for the wave forms and other values should instead represent “epistemological realities.” They signify possible knowledge once the observer makes an observation. In other words, the quantum rules predict observations, they don’t describe unobserved realities. Absorbed vs Released Particles In chapter four d’Espagnat assumed that a measured electron gets absorbed by the measuring instrument. In practice this rarely happens. If the electron gets released, then the instrument and the electron form a “composite system.” Instrument and electron are “entangled” (in the quantum sense). Composite States vs Measurements If an electron is in a quantum superposition of two states, the instrument dial shows just one of those states (which you can confirm by using a second instrument to measure the first instrument). If you test an “ensemble” of identical states all at once then some of your instruments will show one state while others will show the other state. Note that the measurement points to the state of the electron after it’s measured, not before. Measurements vs Quantum Collapse Some physicists who won’t accept “weak objectivity” or mere “empirical reality” see the measurement process as “collapsing” a “real” wave function. Quantum Collapse vs Quantum Universality A quantum collapse is a “discontinuous” transition from the (differential hence continuous) Schrödinger equation. If the quantum laws are universal, then what’s so special about a measuring instrument to produce this collapse? Moveable Cuts vs Realism Using the “von Neumann chain” idea, one can predict observations by placing a “cut” between observer and observed at various points. There’s nothing special about one particular instrument. The cut may be placed between a measuring instrument and the particle, or between a second instrument (measuring the first instrument) and the first, or between a third instrument and the second, and so on. Von Neumann showed that the results will be the same no matter where this cut is placed. The problem is that the realist believes in a mind-independent reality, so presumably this cut should be in one and only one place. The collapse of a quantum system shouldn’t be at the whim of the observer (and his mind!). Longing for Realism vs the Practice of Operationalism D’Espagnat says a lot of physicists suffer from a kind of logical “shaky balance.” They want to believe in realism but in their working methods they use “operational” methods (which therefore don’t require a belief in realism). Schrödinger’s Cat vs Quantum Superposition Getting back to the composite system of instrument and electron, if the electron was prepared by a superposition of two states, then the composite system is represented by aA + bB. The small letters represent the “states” of the electron, and the big letters represent the states of the instruments. 
But the measuring instruments will point to A or B on the dial, not both at the same time. Schrödinger imagined a cat that’s dead or alive depending on the results of the experiment. We don’t see an instrument pointing to two parts of the dial simultaneously, nor can we imagine the cat is both dead and alive simultaneously. Quantum Superposition vs Probabilities The measuring instruments will show one result each time. Quantum rules predict the probability that a particular result will be seen, not that several results will be seen at the same time. Probabilities vs Ensembles To test probabilities we can create a really large ensemble of identical conditions and see what results we get. Imagine we create a whole lot of composite systems with an entangled electron and measuring instrument. On each of those instrument dials we’ll measure one result or another, not both, and not something in between. Identical States vs a “Proper” Mixture Staying with the electron that was prepared as a superposition of states, we calculate a percentage probability that we’ll measure that electron as “being” in one specific “state” and another probability it’ll “be” in another “state.” What if instead of a large number of identical states and identical measuring instruments we prepare some electrons in one state and some others prepared in the other state? We’ll determine how many of each by the predictions for the superposed state. If we then just measure, say, position, we’ll get (approximately) the same results as predicted for the superposition of states. But if we try measuring something other than position our results may violate these predictions. So unless we ignore everything but position, measurements on our ensemble of electrons in superposed states will differ from our proper mixture of electrons in pure quantum states. Coherent vs Decoherent Measurements Imagine we measure an entangled system of an electron (with states in superposition) and an atom. Then an ensemble of identical superposed states cannot be approximated by a “proper mixture” of separate pure states. But if the atom and electron interact with a molecule that is too complex to measure, our measurements of the electron–atom system will be the same whether we measure an ensemble of identical states or a proper mixture. The system has become “decoherent.” Electron–Instrument vs Electron–Instrument–Environment Systems It’s already hard enough to measure the “state” of an electron using an instrument. If we try to measure the “state” of the electron and the instrument in relation to the environment then we have a big problem. Macroscopic vs Microscopic Energy Levels A macroscopic object’s energy levels are very close to each other, so a very small disturbance from its environment (or its internal constituents) will shift its energy level. Measurement Imprecision vs Quantum Precision There is thus so much environmental influence on an instrument that we cannot measure the “state” of the instrument and electron as a system in the same way we were able to measure just the “state” of the electron. That’s why we can’t perform an experiment similar to our earlier one that found differences between measurements on the ensemble of superposed states and the proper mixture of separate pure states. Therefore an instrument pointer, which is a macroscopic object, will act like it’s in a single state, not a superposition. 
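A small numerical aside (not from d'Espagnat's book): the claim that entanglement with an unmeasured environment makes a superposition look like a proper mixture can be seen directly in the reduced density matrix. The sketch below, in Python/NumPy, builds the state a|A>|E_A> + b|B>|E_B> and traces out the environment; as the environment states E_A and E_B become orthogonal, the off-diagonal (interference) term of the system's reduced density matrix goes to zero. The amplitudes and overlap values are arbitrary illustrative choices.

```python
import numpy as np

# System states |A>, |B>; environment states |E_A>, |E_B> with overlap <E_A|E_B> = c.
# Joint state: a|A>|E_A> + b|B>|E_B>. Tracing out the environment gives the system's
# reduced density matrix; its off-diagonal element is proportional to c and vanishes
# as the environment states become orthogonal, which is decoherence in miniature.

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)          # superposition amplitudes (illustrative)
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

def env_states(overlap):
    """Two normalized environment states with real overlap <E_A|E_B> = overlap."""
    E_A = np.array([1.0, 0.0])
    E_B = np.array([overlap, np.sqrt(1.0 - overlap**2)])
    return E_A, E_B

for overlap in (1.0, 0.5, 0.0):
    E_A, E_B = env_states(overlap)
    psi = a * np.kron(A, E_A) + b * np.kron(B, E_B)            # joint system-environment state
    rho = np.outer(psi, psi.conj())                             # full 4x4 density matrix
    rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over environment
    print(f"<E_A|E_B> = {overlap:.1f}   off-diagonal term = {rho_sys[0, 1]:.3f}")
# Off-diagonal term: 0.500, 0.250, 0.000 -- the interference disappears as the
# environment "records" which state the system is in.
```

This two-level cartoon is of course far simpler than the macroscopic situations discussed above, where the effective overlap of the environment states is essentially zero and practically impossible to restore.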
Ensembles vs Double-slit Experiments In the "Young slit experiment" we imagine a particle source, a barrier with two slits, and a detector screen (see chapter four). Normally the screen would show fringe-like patterns because of the quantum system's wavelike nature. However, if you add a dense gas to the area between the barrier and the detector screen then you'll just see two "blobs," therefore showing no evidence of wave-like interference. The molecules in front of the screen are analogous to the molecules that are near an electron–atom system. The molecules form part of a system but are not themselves measured. In both cases we lose the effects of superposition. Independent vs Empirical Reality Because the insertion of unmeasurable molecules prompts us to infer distinct beams with distinct states (corresponding to the "up" or "bottom" slit), this shows how decoherence creates the illusion of a macroscopic reality. D'Espagnat acknowledges it's a bit artificial to make this distinction since we know about the particle source. But it reminds us that decoherence is what provides the illusion of an independent reality, although it's really just an "empirical" reality. Entanglement vs Reduced States If one system gets "entangled" with another (such as an electron with an atom) then each system loses its own distinct wave function. There'll now be a wave function for the combined system. But the quantum formalism allows some information about the original system to be recovered if we imagine a large ensemble of its replicas. The mathematics that represents this is called a "reduced state." Quantum Prediction vs Decoherence Imagine an ensemble of grains of sand or dust specks. They're small but still macroscopic. The quantum formalism predicts these small objects would be enough to produce the macroscopic effects in the Young slit experiment. And the quantum formalism also predicts that these objects will act macroscopically, supporting the role of decoherence in creating the illusion of a macroscopic reality. Reduced State vs Localization The matrix mathematics used to describe the reduced state suggests the reduced state can stand in for an infinite number of proper mixtures of pure quantum states, which threatens the idea of locality. Fortunately at least one of those proper mixtures is composed of quantum states that are localized. Experimental Superposition vs Decoherence In experiments by Brune et al. a "mesoscopic" object is put into a superposition of states. In the brief time before environmental interactions introduce decoherence, the object's quantum properties can be observed. The experiments therefore provide evidence both for decoherence and for the validity of quantum laws in objects larger than microscopic. Quantum Universality vs Classical Laws Brune's experiments support quantum universality, but it would be good if we could also show how to derive the laws of classical physics from the rules of quantum prediction. Classical Numbers vs Quantum Operators In classical physics various properties of an object (such as a table's length) are represented by numbers governed by classical mechanics. In quantum physics these properties are represented by (Heisenberg) operators and obey quantum equations. Roland Omnès has proved that the observational predictions of both approaches coincide (in classical physics' traditional domains).
Quantum Laws vs “Reifying by Thought” Because classical physics and its predictive formulas are so reliable in the macroscopic realm we naturally infer that past objects and events have “caused” present ones, and present ones will “cause” future ones. Counterfactuality vs Quantum Mechanics Counterfactuality depends on locality, but Bell’s Theorem combined with the Aspect-type experiments shows that locality, and hence counterfactuality, is violated (relevant if we’re realists). If we want to show classical and quantum predictions are the same in the macroscopic realm then we’re going to have to figure out how to “recover” the counterfactuality we imagine macroscopic reality possesses. Is there action-at-a-distance with macroscopic darts? It turns out their orientation is a macroscopic variable that “washes away” microscopic variations. In fact orientation is one of the “collective variables” that includes length, mass, and other classically measurable quantities. We’ve already noted that Omnès showed their values are consistent with quantum formalism. Macroscopic Certainty vs Microscopic Uncertainty Measuring a “complete set of compatible observables” will give you the state vector that “exists” after all the measurements were made, but that doesn’t help you figure out the state vector that “existed” before you made any measurements. The idea of a measurement is usually that it measures something previously existing. By that standard you can’t figure out a state vector for sure no matter how many measurements you make. By contrast, the mathematics behind a macroscopic ensemble’s “reduced state” will tell us which physical quantities may be measured without disturbing the system. We can therefore recover the “state” of a macroscopic member of that ensemble. D’Espagnat says this ability helps shed light on our intuition that the properties of something must have been the same before we looked at it. Realism vs Semirealism D’Espagnat will discuss those who still cling to realism in the next chapter. However, he says there are “semirealist” approaches that manage to stay faithful to the quantum formalism. A and B vs A or B The “and–or problem” arises because when we measure a system of superposed states aA + bB we see it as either in state A or in state B, not in both states A and B at the same time. This shift from “and” to “or” is nowhere suggested in the equations. D’Espagnat suggests this is a conceptual not a mathematical issue. One vs Many Realities The mathematics of quantum formalism does not require there just be one and only one reality. Everett’s “relative state theory” interprets this formalism to suggest that the universe “branches off” when a superposed system is measured. In a given branch only one of the superposed “states” is measured, but the overall multi-branch system is still represented by the same expression that combines superposition plus entanglement: aA + bB. Common Sense vs Formalism Some physicists are attracted to Everett’s branching universes because it agrees with the quantum formalism. They believe that following the formalism first rather than common sense could bring in a revolution similar to relativity’s own repudiation of common sense. Zurek vs Reality Zurek showed that the “reduced state” of a macroscopic ensemble is stable under certain measurements. He goes further and defines “reality” as whatever is out there that remains stable under such measurements.
Quantum Universality vs Classical Foundations Decoherence theory tips the balance away from thinking classical physics is somehow more foundational than quantum physics. Decoherence theory shows how the rules of classical physics may be derived from quantum rules. Physics vs Chemistry, Biology, and Other Disciplines Decoherence theory can’t let us predict the structure of other disciplines though. The quantum formalism has to be simplified “by hand.” Quantum theory is still universal, but our human choices, our human ways of conceiving things, will crucially guide our perceptions. The Antirealist’s Reality 1 March 2011 Ultimate reality The Invisible Hand Chapter seven of Bernard d’Espagnat’s On Physics and Philosophy is a kind of grab bag, entitled: “Antirealism and Physics; the Einstein-Podolsky-Rosen Problem; Methodological Operationalism.” D’Espagnat’s points in this chapter seem to boil down to this: 1. Physics (and science in general) is about predicting observations not describing some kind of reality 2. Operationalism (which concentrates on methodology) increases the reliability of science as it counters critics who complain scientific theories (which they say should describe and explain reality) keep changing, and 3. Although measurements (of “empirical” reality) depend on the observer, physical laws seem to be constrained in various ways (by the structure of an “ultimate” reality that’s scientifically indescribable). This chapter feels a little scattered as d’Espagnat pre-emptively defends himself against a bevy of incoming realist missiles. In the end, though, he’s an antirealist in terms of empirical reality, and a realist in his belief there’s an ultimate reality that’s (probably) beyond our direct knowledge but nonetheless influences the shape of our everyday reality. Here’s some more detail… Unconscious vs Conscious Antirealism D’Espagnat says modern physicists (ever since Galileo) generally use an antirealist approach in their methods even if they don’t explicitly embrace antirealism as a philosophy. Mind-independent Realism vs Pythagorean Ontology Objectivist realism claims there’s a mind-independent reality whose contents resemble our observations. A Pythagorean Ontology (capital “O”) claims there’s a mind-independent reality that is reachable through deeper mathematical truths. Unlike either of these approaches, modern physics emphasizes instruments and measurements. It’s not very interested in saying what’s “really” out there in the “world,” whether physical or mathematical. Meaningful Statements in Classical vs Quantum Physics While done more intuitively in the past, physicists nowadays can more formally apply “meaningfulness conditions” to statements. Also, quantum systems are so peculiar that certain distinctions need to be made. Antirealist statements have to be expressed and tested in special ways. Facts vs Contingent Statements D’Espagnat is concerned here not with general “factual” statements such as “Protons bear an electric charge” but rather with statements about physical quantities. A value is assigned to the speed of a particular object, for instance. True/False Statements vs Meaningless Statements Based on Dummett’s approach a statement about an object’s speed would be meaningful only if we can measure (at least in principle) that physical quantity at some specified time and place. Necessary vs Sufficient Grounds for Meaningfulness D’Espagnat says Dummett’s criterion is necessary, but that doesn’t mean it’s sufficient.
Other conditions may need to be fulfilled. Imagining vs Measuring a Quantity It’s possible that we can conceive of a physical quantity that has no meaning. However, if we can measure it then that quantity will definitely have meaning. Classical vs Quantum Measurements In classical physics it’s intuitive to think a measurement reflects the “true” values of an object, but in quantum systems the measurement of a particle (depending on your model) either creates or changes the values that you’re trying to measure. In quantum physics we’re not simply “registering” some pre-existing value when we take a measurement. So the “truth value” criteria will need to include more than just measurability. Disturbing vs Non-disturbing Measurements In the spirit of antirealism D’Espagnat introduces a test: for a statement to have a truth value “it should be possible” (at least in theory) to measure the required physical quantity without disturbing the system. The Einstein–Podolsky–Rosen trio claimed in 1935 that in some cases there are indirect ways to make non-disturbing measurements, admittedly only on correlated systems. Correlated Darts vs Photons If you throw a pair of correlated darts (see chapter three) they originally have some identical orientation. Measuring one dart’s value after they become separated will tell us the other dart’s value. As a bonus, the measurement won’t even change that other dart’s orientation. If instead of darts you use correlated photons, and instead of measuring orientation you measure the polarization vector’s component at some angle, then you run into a problem. Consistent vs Broken Correlations If you measure one photon’s component at a certain angle then you can be sure if you measure the other photon’s component at the same angle you’ll get the same value (which will simply be “plus” or “minus”). Because we are capable of making this measurement then by our meaningfulness test we can tell if a statement about those values is true or false. But quantum formalism says the system of these two photons can have just one value at a time. We can’t measure one photon at a particular angle, then measure the other photon to measure another angle’s polarization component. Multiple Values vs Bell’s Inequalities At least we can’t then claim the second photon has simultaneous values at two different angles. The first measurement destroys the original correlation. Because Bell’s inequalities have been disproved experimentally, we know that these multiple values don’t exist simultaneously. And because our original meaningfulness test implied such a simultaneity we know that test is flawed. Actual vs Possible Measurements If we instead require that measurements are available rather than merely could be available then we get a stricter test. By phrasing our requirements in the indicative not the conditional we end up with a sufficient condition, not just a necessary one. Possible Measurements vs Observational Predictions Dummett’s meaningfulness test is a very general antirealist approach. It doesn’t look at the factual data actually available in a microscopic situation. It just considers our ability to make measurements in principle. D’Espagnat says the tighter requirements he’d impose take an approach even further along the antirealist path as they speak of observational predictions not measurements. This also takes us further down the path of instrumentalism. 
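For readers who want the quantitative version of the photon discussion above, the standard CHSH form of Bell’s inequality is the usual reference point (this summary is mine, not d’Espagnat’s). For polarization-entangled photon pairs measured at analyser angles a and b, quantum mechanics predicts the correlation E(a,b) = \cos 2(a-b). Any local, counterfactually definite model must satisfy

S = E(a,b) - E(a,b') + E(a',b) + E(a',b') , \qquad |S| \le 2 ,

whereas for the angles a = 0°, a' = 45°, b = 22.5°, b' = 67.5° the quantum prediction is |S| = 2\sqrt{2} \approx 2.83. Aspect-type experiments measure values close to 2\sqrt{2}, so it is the inequality (and with it the assumption of simultaneously defined values at several angles) that fails, not the quantum predictions.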
Operationalism vs the Value of Science D’Espagnat says if you understand operationalism properly then you’ll realize operationalism confirms the value of science and makes its statements more reliable. Description vs Prediction D’Espagnat says critics of science believe scientific knowledge is easily influenced by social and cultural factors, and is frequently throwing out old theories for the sake of very different new ones. Superficially this makes sense. Einstein’s curved space-time replaced Newton’s gravitational force. They’re radically different approaches. But science isn’t trying to describe reality. It’s trying to make predictions about observations. Newton’s approach makes good predictions in its own domain, but in other domains Einstein’s predictions are the only ones that work out. Sometimes the predictions and domains can be identical. Fresnel’s and Maxwell’s theories of light make the same predictions. D’Espagnat says the value of Fresnel’s theory was independent of whether the ether was really out there. If you drop the naïve realism and its concern for description, then science as a method for synthesizing and predicting experience is not so inconsistent. Now we can see steady progress as science gets better and better in its power of prediction. Scientific Knowledge vs Practicality D’Espagnat says science is mainly knowledge. Even if science is  concerned with prediction and not description, don’t confuse science with the various practical uses it’s put to (such as technology). Descriptive vs Instrumentalist Knowledge Science brings together an account of human experience that can be communicated: “If we do this, then we observe that.” Just because it’s not trying to describe “reality” doesn’t mean it’s not imparting some kind of knowledge. Instrumentalist vs Theoretical Knowledge These methods of making observational predictions are at the core of science. Coming up with a theory to define certain terms and describe certain entities can be useful, but that’s something added onto this predictive foundation. Operationalism vs Instrumentalism D’Espagnat doesn’t try to distinguish the two terms. He says the most important aspect of any theory that conforms to this approach is that it’s an instrument of making observational predictions. He says mathematical physics is a prime example. Open Realism vs Endless Possibilities In chapter five D’Espagnat talked of his preferred approach of “open realism.” Certainly our view of “reality” (specifically its physical laws) depends on us, including our ability to make observations. But there seem to be “constraints” on what kinds of theories are valid. Describing vs Acknowledging Constraints This “something else” that lies beyond our observations but somehow constrains them may not be directly accessible by us, but D’Espagnat says our inability to describe the constraints does not mean they don’t exist. Ultimate vs Empirical Reality An elusive, indescribable “ultimate reality” may still shape the physical laws that we describe. In turn the laws we infer are shaped from our observations that contribute to our sense of “empirical reality.” Explanations vs Theories D’Espagnat quotes one critic of operationalism, Mario Bunge, who says that the main role of a theory is to provide an explanation. Therefore a theory must provide at least a “rough sketch” of reality as it is. 
D’Espagnat replies that the explanation would actually lie in the ultimate reality that constrains our physical laws, but this ultimate reality is not scientifically describable. Therefore what Bunge desires is impossible. Unless we grant that “miracles” happen all the time there appear to be constraints on our physical laws. But the ultimate reality producing these constraints can’t be scientifically described because of the problems with objectivist realism noted before. Physics vs Physical Objects D’Espagnat says that Bunge considers a value in physics attached to something that is not physical is meaningless. If the value doesn’t refer to something “real” then it’s pointless. D’Espagnat points out that many physical laws refer to values that are not attached to existing physical objects. Probability is a concept referring to either imaginary objects or is a thought not subject to physics. Particles vs Waves Also, wave functions are useful, in fact, essential for quantum physics. So are wave functions real? If so, then particles would have to be real too. If waves and particles exist simultaneously then we’d have to accept the Broglie–Bohm model with all its problems (see chapter nine). Also, a ground-state electron in a hydrogen atom would seem to have zero momentum because it’s not changing state (quantum potential is balanced by Coulomb force). But the Compton effect shows momentum is non-zero. We have two different versions of momentum. If they were both “real” then we get into pointless difficulties, says d’Espagnat. Other possibilities: waves change into particles (but the collapse of the wave function has lots of problems attached to it) or only waves exist (but then nonseparability and measurements cause problems). So D’Espagnat says Bunge’s objections seem pretty “dogmatic.” Circular vs Practical Definitions Another objection notes (correctly, d’Espagnat acknowledges) that operationalists place a lot of emphasis on precise definitions, but Bunge says some concepts will remain undefined (just like a dictionary uses some undefined words to define other words). D’Espagnat replies that operationalism is a methodology, not an “a priori” philosophical system. We want efficiency. Dictionaries are useful despite their undefined terms. Some concepts we just seem to naturally know (whether they’re born with us or not). These undefined concepts (though neither certain nor absolute) let us operate a measuring instrument, for instance, which then lets us define other concepts. Sometimes concepts considered “primary” in the past get defined explicitly, such as Einstein’s replacement of “absolute time” with a time that’s partly relative to the observer. Measurement vs Change The act of measurement seems to change the quantum system. If, as Bunge’s approach would suggest, this change is “real” then we’d have the difficult problem of explaining this change. But the quantum approach is “weakly objective” so it refers only to measurement. In the end theoretical entities are useful for helping to make predictions in modern physics. Just don’t regard them as self-contained and “real.” Einsteinian Hope vs Descriptive Failure Einstein and those of a similar optimistic bent believed reality would be increasingly describable. This view does not seem consistent with the reality that the quantum framework paints.
A refutation of Salterism [Reproduced below is James Goulding’s refutation of Salterism, which is relevant to a discussion on Salterism taking place across several Dark Enlightenment blogs at the moment, hyperlinked at Outside In. James Goulding famously – and frustratingly – deleted all of his incredible work on his blog Studiolo, but I archived some of it before it was destroyed, which in time I hope to make available here. Of course, if James Goulding contacts me and asks me to remove it I will. Text is missing hyperlinks.] The signal character of Salter’s thesis is that it is ethical as well as empirical. As such, it challenges philosophy’s liberal perceptions of race and ethnicity from a novel angle. Furthermore, since reproductive interests exist as described and constitute the ultimate interest in organic life (ie, continuity), they should have some place in ontology. After all, is not every ethical question also an ontological question? To maintain any system of ethics at all, and avoid the slide into utility, arbitraryness, relativism, and nihilism, must not there be some testable and solid basis to ethic? — “Guessedworker” Frank Salter’s book On Genetic Interests (2003, 2007) proposes that humans have a “vital” or “ultimate” interest in the reproduction of their genes, and that ethnic nationalism is an important strategy for realising these interests. “Genetic interests” refers to the allegedly vital human interest in passing on genes in general; “ethnic genetic interests” refers specifically to these interests as embodied in differential relatedness of various ethnic groups to a given human. Salter provides, via Henry Harpending, tables relating “replacement migration” to “child-equivalent” reproductive losses—e.g. a negro immigrating to Ireland supposedly reduces each Irishman’s genetic representation in humanity as much as if he lost a child. “Salterism” refers to the ideology that holds pursuit of genetic interests, and ethnic genetic interests in particular, to be of overriding importance. “Salterians”, adherents to this creed, are most numerous at majorityrights.com. I: Refutation Setting aside data, let’s skip to the important question: why should every human regard genetic proliferation as his “ultimate interest”? Salter devotes a chapter of On Genetic Interests to dealing with objections. Unfortunately for Salterians, his replies are full of holes. In this chapter I try to anticipate objections to the notions that genetic fitness is an interest and that it is the only ultimate one. Some of these objections are plausible, at least initially, while others can be readily dispensed with […] (4a) Objection from lack of human motivation: Who cares? Perhaps genes are not interests, if interests are defined as conscious wants. […] If he [R.D. Alexander] is right, if humans are not evolved consciously to pursue genetic interests even after reflecting on their genetic history, then the concept of genetic interest might be hollow. Perhaps if this interest cannot motivate protective action it must remain a descriptive idea unless and until humanity evolves to the extent that people can get excited about it. Surely Alexander is mistaken. In our modern world many interests are not intrinsically motivating, only being valued when we understand their significance. Would keys to a castle be more than a curio to hunter-gatherers unaware of the wealth and prestige they can unlock? 
[…] Recognising something as an interest requires background knowledge, sometimes quite sophisticated, of the contexts in which it becomes valuable. […] It might be countered that objects and codes are not interests in themselves. They only attain value because they allow access to things we all intuitively value, that we have feelings about, such as status and resources. In this account keys are not intrinsic interests. It is objects, states of being and other individuals that we consider valuable—that are intrinsic interests. Nothing is an interest that does not unlock such valuables. This is a plausible view, but hardly a criticism of the notion of genetic interests. Genes produce myriad effects in the real world, including health and kinship, that are intrinsically valuable. Thus genes have always been valuable, even before they and their actions were discovered. Salter equivocates on terminal goals and instrumental goals. His analogy: to possess keys to a castle is of potential value to most humans, even if this value only becomes apparent via additional knowledge. This implies that a lack of knowledge may prevent people from realising the value of genetic proliferation. A castle key is, however, of merely instrumental value: it allows someone to bring about states of reality that he values for their own sake. Wealth obtained from the castle may be a further instrumental goal, which facilitates the terminal goal of e.g. hedonic egoism. To possess a key-shaped lump of metal is unlikely to be a terminal goal, and if it were the hunter-gatherer should realise this without additional knowledge (since the castle would be extraneous to the key’s inherent value). Genes do have important effects, but likewise this only implies that genes are of instrumental value. Salter’s grand claim is that genetic reproduction is a terminal value for everyone, to which notion genes’ instrumental value is orthogonal. On the whole, serving genetic interests upholds human proximate interests. Many of the values we hold most dear are preserved down the generations because individuals strive to preserve their genetic interests, even when those interests are vaguely apprehended or not apprehended at all. This too is irrelevant to the question of whether genetic proliferation is a terminal value. If reproduction is instrumentally valuable, a rational agent attempts to reproduce; he need not consider gene-spreading inherently valuable. The point should be emphasised that genes only become interests when part of the reproductive chain of life; when they contribute to the creation of humans and influence their development; or when such function is in prospect. If it were possible to manufacture billions of copies of one’s genome in the form of powdered protein, and disperse them in the world or in outer space, that would hardly be in one’s genetic interests. But it does serve genetic interests to have part of one’s genome help form a new human. The point should also be emphasised that “genetic interests” remain underspecified. Genes in powdered protein aren’t valuable; genes in humans are. What about plants and animals? They are also part of the reproductive chain of life. If I replace onions in my garden with leeks, might this not be a tragic loss of genetic interests—millions of child-equivalents, even—if onions happen to share more genes with humans than do leeks? We may rule out plants—it’s silly. But what about apes, or Neanderthals? 
What definite criterion distinguishes organisms that embody genetic interests from organisms whose genes are ignored? This is important, because humanity may change by genetic drift, evolution or self-modification in future, and if it changes too much it might no longer be a vessel for existing humans’ genetic interests. Then, to forestall this change would be far more important than combating immigration. Genetic interest could motivate as a token of success. It is conceivable that individuals aware of life’s evolutionary dimension can treat genetic fitness as a safety indicator. The assumption would be that if they or their groups are not sustaining their genetic line, for example by monopolizing a territory, something is wrong and should be put right. Genetic proliferation could motivate. An AI could indeed be programmed: “maximise the number of these genetic code snippets within living human beings”, although the behaviour of such an AI would probably horrify the naïve Salterian. So what? If my aunt had balls, she’d be my uncle. An effective counter to the view that humans cannot be motivated by genetic interests, even indirectly, is that they are and always have been. The cooperative defensiveness shown by band and tribal peoples is bound to have boosted inclusive fitness, because it is universal and ancient, thus likely to have been an evolutionarily stable strategy. Other forms of group spirit, including patriotism and nationalism and religious solidarity, have been powerful motivators of group continuity. Even in present day Western societies where ethnic sentiment is often considered passé by the ruling elites and where whole populations are being displaced by mass immigration, indirect concern over genetic interests lives on in one place or another. Many people feel a strong affinity for their ethnic identities, and many more are prone to do so. Salter once again fails to defend the overarching thesis of On Genetic Interests. Some humans may well feel an abstract desire to maximise genetic representation—Frank Salter presumably does—but no-one else need share this interest. Salter’s claim that humans “always have been” motivated by genetic interests is also interesting. If one cares about “genetic interests”, one deliberately sets out to maximise one’s genetic proliferation within humanity. Since ancestral humans knew nothing of genes, Salter’s claim can only be true if we accept the idea of “indirect” motivation to increase genetic proliferation. Ancestral humans who cared about “blood ties”, for example, were indirectly concerned with genetic proliferation, because “blood” is a vague label standing in for the concept of biological relatedness that genes now fill. Perhaps ancestral humans even viewed themselves as having blood ties on arbitrarily extended levels of kinship. Such thoughts might have encouraged cooperative defensiveness of tribes; or, an inclination to join a mutually defensive, homogeneous group of any kind could produce this phenomenon. Who knows? Salter’s problem, in either case, is that many living humans do not exhibit an abstract concern, direct or indirect, for genetic proliferation. Even if they have seen Salter’s “child-equivalent” tables, most people don’t care very much about EGI, ethnic bloodlines or any such thing. At this point, Salterians exchange the sensible idea of “indirect” interests for absurdity. Humans exhibit an indirect effort to realise a goal if they characterise their efforts using vague stand-in terms. 
Salterians like to argue that, in addition, since human goals are explained by the fact that our brains are coded for by genes, we have an indirect or “ultimate” interest in genetic reproduction whatever we might claim. This is untrue, simply because an object is not identical to its cause. If one asks for café au lait in a restaurant, one will be displeased should the waiter bring an espresso machine, coffee beans and a jug of milk. “But Sir, this is your ultimate coffee; just the same as regular coffee.” An example that radically separates phenotypes and genes is helpful because it shows how important an explicit comprehension of genetic interests might be. […] Brooks believes that should robots be constructed with humanlike intelligence and consciousness it will be unethical to treat them as slaves. ‘You get into the moral question—would it be okay to breed a race of subhumans? And certainly we feel now it’s okay. We don’t feel any empathy for the machines but that may be a consideration ultimately…’. This position combines vivid psychological insight with poor biology. Brooks thinks it would be wrong to have any entity be our slave that could elicit our empathy, arguing from the lack of empathy slaveholders once felt for their human slaves. If the slaveholders were wrong in casting their slaves as subhuman, he implies, then robot owners would similarly be wrong to cast their robots as subhuman. The syllogism makes sense only if divorced from the most basic understanding of biology, and from a concept of genetic interests, implicit or explicit. Human slaves of any race were as human as their masters. It was false belief that designated them as subhuman, but a similar belief about robots would not be false. Salter thinks that ability to experience pleasure or pain is no basis for empathy. Instead, what matters is that humans contain genes. Any brain not coded into existence by genes is undeserving of concern, says the ethical Dr. Salter. The Church–Turing–Deutsch principle implies that any physical process in a brain can be simulated by a computer. Therefore, Dr. Salter himself could be running in a simulation, or be a silicon brain in a vat. I doubt that he is; but claims should apply to all of physics, including improbable circumstances. If Dr. Salter is a simulation, does he think the experimenter should torture him, if this happens to further the experimenter’s genetic interests (e.g. because it impresses his girlfriend)? If we care more about phenotypes than genotypes, then ‘who cares?!’ will often be an effective repost to any evangelising call to preserve genetic interests. One either feels protectively about genetic interests or not. Dr. Salter thus admits defeat. But his series of half-baked failed rebuttals is enough to satisfy the lazy and credulous. (4b) Objection from the teleological nature of genetic interests I have encountered criticisms of the idea of genetic interests based on rejection of teleological explanation. a. Objection: Human behaviour is often directed towards goals, such as acquiring food or mates, but it is fallacious to portray humans as deliberately striving to maximize their reproductive fitness. Fitness might or might not be an outcome of our behaviour, but with rare exceptions it is not a conscious goal. Reply: The present essay is not primarily a theory of human behaviour, but of interests. Rather than being a work of explanation, this is mainly an exercise in political theory dealing with what people are able to do if they want to behave adaptively. 
This is a lie. Earlier we saw him claim, “genetic fitness is an interest […] the only ultimate one”, and here is a similar quote from the blurb (with my boldface): Salter’s sensible part knows that this is untrue; therefore, he provides disclaimers. But the blurb is a fair summary. The idea that On Genetic Interests just offers strategies for those who wish to behave adaptively is contradicted by the book’s actual content. Hitler, ducking accusations of anti-Semitism, might have included a note in Mein Kampf: “This book is mainly an exercise in political theory dealing with what people are able to do if they think Jews are evil. At no point do I impeach the Jews. Would I lie to you?” (4c) Objection from levels of analysis: Do only genes have genetic interests? Assuming as valid the notion of objective interests, independent of motivations or even awareness, it could be argued that neo-Darwinian theory emphasizes the genes’ phenotypic interests, not phenotypes’ genetic interests. From the replicator’s vantage point phenotypes exist for the convenience of genes. This line of thinking might conclude that if phenotypes have any interests they must bear on their own phenotypic needs. A rough guide to these needs is striving behaviour but includes the objective need of the organism to survive and flourish. Put differently, phenotypes might have only proximate interests, not ultimate ones. The latter type of interests might adhere to replicators, not vehicles. This argument fails to account for what Alexander calls ‘the direction of striving of the phenotype’, quoted earlier. Predictably from the evolutionary perspective, phenotypic needs and motivations usually point to the reproductive interests of their genes. Phenotypes are, after all, genes’ survival vehicles, to use Dawkins’s term. Genes are our ultimate interests because they are the basic units of selection, partially defined by Dawkins as ‘active replicators’, those that positively influence their probability of being copied. […] Active germ-line replicators, such as functional genes, are units of selection and hence ultimate interests. The general mutuality between genetic and phenotypic ‘striving’ in the Environment of Evolutionary Adaptedness indicates that even if we count only phenotypic needs and motives as interests, these are strongly identified in that environment with genetic interests as the genes’ interests. […] Surely the primacy of phenotypic (or vehicular) interests cannot be maintained when so many phenotypes in so many species give highest priority to their genetic interests; when selfishness and altruism are shown convincingly to be strategies for ensuring genetic continuity. When a human forms the idea “I want to bring about X”, this is the outcome of a computation instantiated in his brain. X may be an instrumental goal: for convenience, the brain pins down an objective like “I want to earn money”, but this is predicated on the fact that possession of money allows the brain to satisfy other goals. At the bottom of any chain of instrumental goals is a terminal goal: a state of reality the brain attempts to achieve for its own sake—that’s just how the brain is programmed. The human brain isn’t a coherent expected utility maximiser; it is a bunch of competing terminal goals that natural selection has glued together. Competing terminal goals, e.g. hedonic egoism vs. hedonic utilitarianism, increasingly conflict as humans gain knowledge, and the further we depart from the ancestral environment. 
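The instrumental/terminal distinction the last few paragraphs rely on is easy to make concrete. Below is a toy sketch in Python (mine, not Salter’s or Goulding’s; every name in it is invented for illustration) of a “because” chain that bottoms out in a terminal goal whose content makes no mention of genes:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    description: str                 # the state of reality the agent tries to bring about
    serves: Optional["Goal"] = None  # the goal this one is instrumental to, if any

    def is_terminal(self) -> bool:
        # A terminal goal sits at the bottom of the chain: valued for its own sake.
        return self.serves is None

# The chain "better job -> more money -> bigger house -> happy children":
happy_children = Goal("my children are happy")                 # terminal
bigger_house   = Goal("I own a bigger house", happy_children)  # instrumental
more_money     = Goal("I earn more money", bigger_house)       # instrumental
better_job     = Goal("I get a better job", more_money)        # instrumental

def terminal_of(goal: Goal) -> Goal:
    # Walk down the "because" chain until we reach a goal valued for its own sake.
    while goal.serves is not None:
        goal = goal.serves
    return goal

print(terminal_of(better_job).description)  # -> my children are happy

Nothing in the terminal node’s content refers to genes; the causal story of why a brain carries that goal (natural selection) is a separate fact about the chain’s origin, not an extra link at its bottom.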
It may be useful to view the brain’s terminal goals as the objectives of various coherent sub-agents, rather than a singular “person”. Either way, these terminal goals need make no reference to genes and genetic proliferation. Some of them may, but most do not. Humans often enjoy sexual intercourse for its own sake. The concept “sexual intercourse” forms, and the brain reliably attempts to bring about the configuration of reality, “I engage in sexual intercourse”. This is the bottom of the chain: a terminal goal (although “experience pleasure” could be the terminal goal in other cases). This is wholly distinct from, “I wish to engage in sexual intercourse, in order to pass on my genes”. That would be direct concern for genetic interests. It is also different to, “I wish to engage in sexual intercourse, in order to continue my bloodline”. That would be indirect concern for genetic interests. These are different mathematical statements. A computer programmer wouldn’t treat them as the same statement; they are distinct claims about reality. Perhaps the concept of instrumental goals confuses people. The word “because” is used to descend chains of instrumental goals: “I want a better job because I want more money because I want a bigger house because I want to make my children happy”. One of this person’s terminal goals is, “I want to make my children as happy as possible”. It may not be his most powerful mental sub-agent, but it controls a major part of his behaviour. Once he hits rock bottom—a terminal goal—he is simply stating what he values. His brain happens to be programmed to realise the state of reality in which his children are happy. The word because can also be used to explain the existence of this terminal goal. “I want a better job because I want more money because I want a bigger house because I want to make my children happy; I want my children to be happy because natural selection favoured genes coding for the terminal goal of making one’s children happy.” Here is a completely different statement: “I want my children to be happy because I want to spread my genes”. In that case, genetic proliferation would be the person’s terminal goal. Instrumental goals are a way to keep track of the actions necessary to fulfil a terminal goal. Tabooing the confusing word “because”, one might instead say, “I want a better job, in order to obtain more money, in order to obtain a bigger house, in order to make my children happy. ‘Make your children happy’ is the utility function of a powerful sub-agent in my brain. Natural selection favoured genes that code for a brain with this strong mental sub-agent.” In the statement “I want my children to be happy”, the “I” is the entity that represents this goal. Genes do not represent that goal; the brain does. Genes code the brain into existence, but they are not the cluster-in-thingspace that actually has the goal. Salter’s claim, “so many phenotypes in so many species give highest priority to their genetic interests” is therefore false. Goals embodied in a brain coded for by genes needn’t make any reference to genes, or a concept standing in for genes, and they do so rarely. Goals are not identical to the thing that caused them to exist. To think so—to conflate an object with its putative cause—is the logic of “ultimate coffee”. In addition, genes are only a convenient abstraction. They aren’t the entire “cause” of a brain, any more than an espresso machine, coffee beans and a jug of milk are the “cause” of a cup of coffee. Consider identical twins.
In the womb, before a mature brain has developed, environmental factors (e.g. one twin’s advantageous connection to the placenta) cause differences in the twins’ phenotypes. On a smaller scale, radiation, copying errors and even quantum tunnelling have some influence on the structure of each twin’s brain. As the twins mature, enculturation and their different experiences create massive differences. One can’t even be sure that only genes in the twins’ bodies are coding for their brain structure, rather than those of a parasite organism. One could describe the genetic code as the brain’s “ultimate” cause, and every other influence as “contingent”, but tabooing these words such a distinction is arbitrary. “Gene”, like most words, is also a fuzzy concept. To quote Dawkins in The Extended Phenotype: I shall make no attempt to specify exactly how long a portion of chromosome can be permitted to be before it ceases to be usefully regarded as a replicator. There is no hard and fast rule, and we don’t need one. It depends on the strength of the selection pressure of interest. We are not seeking an absolutely rigid definition, but ‘a kind of fading-out definition, like the definition of “big” or “old”‘. […] The possibility of strong linkage disequilibrium (Clegg 1978) does not weaken the case. It simply increases the size of the chunk of genome that we can usefully treat as a replicator. […] It was in this spirit that I playfully contemplated titling an earlier work The slightly selfish big bit of chromosome and the even more selfish little bit of chromosome (Dawkins 1976a, p.35). When discussing natural selection, genes are but a suitable actor to play the leading role in our metaphors of purpose. Physics does not run on “genes”. A highly specific description of the processes that caused the brain to exist would refer to quantum amplitudes, and although fuzzy clusters-in-thingspace called “genes” would be implicit in this description, things would be more complex. Genes are implicit in the explanation. So are the nucleotides that developed into the first RNA self-replicator. So are the laws of physics that enable DNA to exist, brains to develop and mutations to occur. Depending on the time-scale and zoom lens one prefers, using Salter’s logic even electromagnetism or the Big Bang could be considered humanity’s “ultimate interest”. Now let’s skip to another interesting section of On Genetic Interests: Chapter 9, “On the Ethics of Defending Genetic Interests”. I formulate an ethic of ‘adaptive utilitarianism’ according to which a good act is one that increases or protects the fitness of the greatest number. I apply this ethic in an attempt to answer three fundamental questions raised by the concept of genetic interest, especially the ethnic component (followed by short answers): (9a) Under which conditions if any does defending genetic interests justify frustrating other interests? Since genetic interests are shared according to degree of kinship, individuals have duties to family, ethny, and humanity ahead of strictly private needs. (9b) Should the ultimate interest of genetic fitness be accorded absolute priority over other interests? In principle ‘yes’, but in practice ‘not always’, since the effect of a behaviour on fitness is often unknown. (9c) What is the proper action when ultimate interests conflict? When ethnies conflict, adaptive utilitarianism is best satisfied by universal nationalism, since this ideology teaches respect for everyone’s ethnic interests. 
Genetic continuity is compatible with peace between ethnies, with equality of opportunities within ethnies, but not with equality of fitness outcomes within ethnies, since a system that ensured equality would be evolutionarily unstable. The ultimate form of liberty is the freedom to defend one’s genetic interests. […] In this chapter I raise and attempt to answer some basic questions of morality concerning the defence of genetic interests, especially in the domain of ethnic rivalry. I do so in the spirit of consilience, or unity of all knowledge, urged by E. O. Wilson. The Enlightenment will finally reach maturity, Wilson argues, when mankind deploys the knowledge gained from science to forge wiser, more humane policies. Humanity’s “ultimate interest” of genetic proliferation (or is it the Schrödinger equation?) should, in principle, be given absolute priority. What if Frank Salter’s Grandma were sick and needed his help? The effect of his leaving her to die may be difficult to compute in the genetic calculus. But he might decide that clearly, this lonely, poor, sterile old lady is worthless to a fitness-maximiser. In that case, his ultimate interest is to leave her to rot. How ethical. This may be slightly unfair. Hedonic utilitarianism also forces some almost unconscionable decisions. Torture vs. dust specks discomforts me, and to choose “torture” is a bitter bullet to bite. But at least this choice is grounded in humane reasoning. Leaving Grandma to die because she won’t help you to pass on your genes is just psychopathic. It may be rational behaviour for some minds, but they are not “humane”. I try not to lose sight of the implications of Wilson’s view that the moral instincts can change due to differential reproduction. From an evolutionary standpoint an ethical system that weeds out the genes or culture of those who practise it is a failure. Of course, if the expected value of (hedons – dolors) in the timeless Universe is maximised by e.g. immigration control, this is what a rational hedonic utilitarian advocates. This remains an instrumental goal, not a terminal one. Failure to maximise utility is failure—this needs no embellishment. [A] weakness of utilitarianism is its happiness criterion. Happiness is an emotion, and thus a proximate rather than an ultimate interest. As an indicator of ultimate interests it is better than nothing, but fallible. Individuals suffering from mania appear happy and claim to be so, but are prone to maladaptive behaviour. Drug addicts experience periods of intense happiness, and this can be maintained for a time if the supply of drugs is kept up. Yet drug addiction tends to be maladaptive. Humans strive for resources and status, that is clear, but achieving this goal does not increase happiness in any simple or predictable way. By contrast reproductive fitness is an objective measurable by number of offspring and continuity of one’s familial and ethnic lineage. The weakness of the happiness criterion is not fatal to the utilitarian enterprise because, as noted earlier, other criteria of non-moral goodness can be substituted for it. […] Adaptiveness as utility In this section I argue that the structure of the utilitarian ethic can be retained while replacing criteria such as happiness or beauty with adaptiveness. From the perspective of modern biology the most important consequence of any act is how it affects genetic interests, how it affects adaptiveness.
The consequence of ultimate import is not happiness of the greatest number but adaptiveness of the greatest number. This notion underpins a survival ethic—which I shall refer to as ‘adaptive utilitarianism’—which has important advantages over happiness and other proximate criteria. This ethic cannot be reduced to the social Darwinist doctrine of ‘survival of the fittest’. Like the social Darwinists I shall argue that the freedom to compete within limits is a vital adaptive right, but the criterion of ‘the greatest number’ also leads to an emphasis on the need for cooperation and adoption of procedures for peacefully resolving conflicts. […] Adaptiveness has the advantage of corresponding to knowledge of the human condition, especially to observable states. We can observe individuals’ (or groups’) resources, the amount of control they have over their environment, their state of health, their fertility and life span, ability to defend themselves, and so on. Adaptive utilitarianism does not have a transient emotional state as its criterion of goodness, while retaining much of the intuitive appeal of classic utilitarianism. Genetic proliferation is the “ultimate interest”, but Dr. Salter can’t stomach this idea’s psychopathic consequences. Therefore, he introduces “adaptive utilitarianism”, which involves genetic proliferation but doesn’t accord it priority. So, which is it? Is spreading genes the ultimate interest, or is adaptive utilitarianism more important? One of these must be the victor. Salter boasts that adaptiveness is easy to measure—this seems to be adaptive utilitarianism’s great merit. But the same is true of e.g. hirsute utilitarianism: hairiness of the greatest number. This is easy to quantify, unlike happiness and misery. But I’m not tempted to become a hirsute utilitarian. Worse, adaptiveness of the greatest number doesn’t imply ethnic nationalism. Imagine there are only two very distinct ethnic groups, and group A outnumbers B. Then, replacement of B humans by A humans always increases adaptiveness of the greatest number, because it increases the fitness of many A humans and reduces the fitness (to an equal extent per capita) of only the few B humans. In reality, racial distinctions are fuzzy. But adaptive utilitarianism probably implies (as a first step) replacing all other humans with the largest relatively discrete ethnic-genetic group, i.e. Han Chinese. Genocide: very ethical. [A]daptive utilitarianism should be more sustainable in the long run because it is better for us. An adaptive utilitarian would condemn any practice that reduced fitness below replacement level, no matter how pleasurable. Drug-taking comes to mind, but also the sort of middle class culture common in developed societies that values consumption, comfort, and status over children. If drug-use and dysgenics reduce the expected value of (hedons – dolors) over all timescales, rational hedonic utilitarians oppose drug-use and dysgenics. Irrational people calling themselves utilitarians may cause more misery than pleasure; irrational people who care about EGI may not be effective in spreading their genes. The solution isn’t to change one’s goals, but to be more rational. Another weakness of utilitarianism that a survival ethic corrects is the arbitrariness of the clause prescribing that happiness be maximized. Whether the criterion is happiness, pleasure or economic profit, Mill and the economists who adopted his approach thought that it was impossible to get too much of a good thing. 
This is an improbable view if proximate interests are not goals in themselves but means to adaptiveness. Even too much wealth or too many mates is bad if the monopoly diminishes the society bearing one’s genetic interests. Too much happiness can diminish prudence and thus harm other interests, such as status or wealth, reducing fitness. Like other proximate interests, happiness necessarily exists in balance with other states, and is thus best optimized rather than maximized. Adaptiveness, in the sense of ability to survive and reproduce, is different. One cannot be too well adapted. Terminal goals are “goals in themselves”. One can call this “arbitrary”, but it is a fact of life. The goal, “maximise the number of my genes in human beings” is represented in some human brains. It isn’t particularly strong, but it can’t be refuted. Goals are not claims about reality; they just exist. This sub-agent’s weakness is demonstrated, however, by the soi-disant Salterians’ lack of sincere commitment to the goal of genetic proliferation. Consider individual genetic interests: do we really believe that Guessedworker et al spend every free hour in spasms of sperm-donation? Salter can’t stomach the vile consequences of strict gene maximisation, so he has invented the incompatible ethic of “adaptive utilitarianism”. And for some reason, only genes in human beings are counted. But even within the human species, Salterians are suspiciously Euro-centric. Tamil immigration to Bahrain harms an Englishman’s EGI roughly as much as the same amount of Turkish immigration to England. Do Salterians care about Tamils replacing Bahrainis as much as they care about Turks in England, or as much as they would care about losing an actual child under a bus? The evidence suggests not. 9(b) Should an individual’s ultimate (genetic) interests be accorded absolute priority over others’ proximate interests? The message of modern biology is that genetic fitness is the ultimate interest, meaning precisely that it is of absolute importance. Unless you practise “adaptive utilitarianism”. Or when you claim, “this is mainly an exercise in political theory dealing with what people are able to do if they want to behave adaptively.” This is surely the starting position of any ethical discussion of the choice between genetic and other interests. And the end point, n’est pas? Unless Frank Salter eschews the accepted meaning of “absolute importance”. Fortunately for those who hold proximate values dear, whether one gives greater emphasis to genetic interests or to other values will rarely be an either/or choice. Most humans are evolved to value adaptive proximate interests such as bonds of kith and kin, status, wealth and health because they are adaptive. More accurately, striving for the things we hold dear is adaptive or was adaptive for much of our evolutionary past. So our lives are unlikely to be turned upside down if we act to increase or secure our genetic interests. This will amount to nothing more than shuffling existing priorities. Indeed: shuffling down the priority of caring for Grandma, and shuffling up the priority of round-the-clock sperm-donation and genocide. 10. Afterword This essay has ranged across several fields of knowledge, including genetics, evolutionary theory, ethology, ecology, various policy areas, the political theory of the state, and ethics. Since mastery of any of these fields is the work of a lifetime, the unavoidable conclusion is that I am not competent to write this essay. 
Readers should thus approach the arguments presented in this book with a critical attitude. I recommend that you look on it as a stimulus to debate, rather than a statement of final wisdom. I have done my best to get the analysis right, but errors probably remain. This is the most sensible paragraph of On Genetic Interests. Having dismantled enough Salter for all but his most blinkered disciple to admit defeat, I shall now discuss the systematic errors that underlie Salterism. II: Post-mortem Who is Frank Salter? Argumentum ad hominem is unnecessary; but having slain Salterism, prudence demands a bullet through its brain. We wouldn’t want it to rise from the dead. First stop, Wikipedia: Frank Salter matriculated (undergraduate) at the University of Sydney (1979–1982) where he majored in government and public administration, specializing in organization theory under the mentorship of Ross Curnow. At the same time, one Frank K. Salter was active in Sydney’s underground nationalist scene. Dr. Jim Saleam, an amateur historian of Australian nationalism, tells us that: Azzopardi seems to have been a decisive product of the underground. He moved freely within it in the years 1974–76, seeking out allies and otherwise learning lessons. For the latter reason he said, he had even searched out Cass Young in 1975. He had wanted to know what made neo-nazis tick. […] In 1976, he met Frank Salter, formerly of Duntroon Military College and then an engineering student at the University of New South Wales, and through Salter moved into wider circles of the Sydney “Right.” […] The “refugee invasion” had begun and Azzopardi and Salter were certain the old-Right groups would miss the chance. A sheet Advance appeared and in November 1977, it became a broadsheet newspaper. The White Australia question took pride of place. […] Frank Salter, secretary of Australian National Alliance, was clubbed down at the University of New South Wales in February 1979. Perhaps the author of On Genetic Interests bumped into his namesake at the varsity hockey match. An encounter with the young firebrand might have spurred our Frank to wonder whether ethnic nationalism is a vital interest. More probably, they are the same person. Eliezer Yudkowsky describes a common rationality failure mode: There are two sealed boxes up for auction, box A and box B. One and only one of these boxes contains a valuable diamond. There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable. There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp. Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny. Now suppose there is a clever arguer, holding a sheet of paper, and he says to the owners of box A and box B: “Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price.” So the box-owners bid, and box B’s owner bids higher, winning the services of the clever arguer. The clever arguer begins to organize his thoughts. First, he writes, “And therefore, box B contains the diamond!” at the bottom of his sheet of paper. 
Then, at the top of the paper, he writes, “Box B shows a blue stamp,” and beneath it, “Box A is shiny”, and then, “Box B is lighter than box A”, and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A. And then the clever arguer comes to me and recites from his sheet of paper: “Box B shows a blue stamp, and box A is shiny,” and so on, until he reaches: “And therefore, box B contains the diamond.” But consider: At the moment when the clever arguer wrote down his conclusion, at the moment he put ink on his sheet of paper, the evidential entanglement of that physical ink with the physical boxes became fixed. […] Now suppose another person present is genuinely curious, and she first writes down all the distinguishing signs of both boxes on a sheet of paper, and then applies her knowledge and the laws of probability and writes down at the bottom: “Therefore, I estimate an 85% probability that box B contains the diamond.” Of what is this handwriting evidence? Examining the chain of cause and effect leading to this physical ink on physical paper, I find that the chain of causality wends its way through all the signs and portents of the boxes, and is dependent on these signs; for in worlds with different portents, a different probability is written at the bottom. So the handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar. Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. It’s clear what algorithm wrote Frank Salter’s bottom line. It should have been: I have investigated human goals with an open mind. The evidence suggests that the only human goal is genetic proliferation. I shall present my findings in a book. It was actually: I don’t want non-white immigrants in Australia. Mass non-white immigration is bad. Therefore, I shall write a book whose conclusion is, stop immigration! I wonder what arguments I can use… This reasoning isn’t conscious; Salter is earnest. But his subconscious wrote this bottom line, hence an intelligent man spouts nonsense. Salter’s intelligence is part of the problem. He has found a nugget of scientific truth. Richard Lewontin famously argued that since only 15% of genetic variation is between populations, racial classification is invalid. This is fallacious. In addition to Edwards’s refutation, Salter (via Harpending) has demonstrated that Fst values, like Lewontin’s 15%, are equivalent to statistical “kinship” between family members. The kinship of parents and children, for example, is 25%. This shows that 15% is actually a large value—another means of refuting Lewontin. Salter’s insane thesis derives credibility because in this one respect, his beliefs are more accurate than the mainstream. Another life-support system for Salterism (since GNXP mercilessly stabbed it years ago) is toleration of imprecise language. Guessedworker has tried to leaven the stodgy genetic-interest dough with a sprinkling of Heidegger—observe: But here’s the rub. Being belongs to all organic life … to every living thing, from the strangest bacteria in some hydrothermal vent or sub-glacial lake to the future genius born somewhere among Europeans today. 
All living things make being and have being. It is not the other way round somehow. It does not become the other way around just in Man’s case because he has evolved an intellectual faculty and higher emotions. We are in Nature with all of Nature, and we are not an exception to Nature. All is multiplicity. In this way, being is Nature’s cumulative constant. I hold the view that animals are, within their own bounds, constantly true to their being. But we men are not constantly true to our being, except in the special moment I have described. We are fallen in the significant respects – the subject of part 3 of this essay. Therefore, we alone experience that inner divorce, and this, of course, is the tragedy of the human condition. Nonetheless, while we have life, that is our moment of potential for the realisation of being, and there is no other. Each holds being, therefore, in relation to the self, and it is the unconcealment of this being, and not just the glory of her raiment, which is Nature’s sublimest part. Our inner Helios rising is our witness of that sublimity. To refute this argument with faith … to say being is from a god … is good only if the saying of it advances the wholly materialist making of adaptive life choices (the material being distinctive genes, of course). And likewise, therefore, to objectivise it as the universal, indivisible, prior, and endowing substrate – that, too, is good only if it enhances fitness. Faith is there in our emotional faculties because genes for it have enhanced fitness and been selected accordingly. The pre-frontal cortex, where all those higher emotions occur, is a product of natural selection like anything else. The pre-frontal cortex is also not on holiday during the being-episode, the moment of presence. It is functioning as always, as it must, and the faith nexus sings as sweetly in the ear of the risen man as ever it did in his predecessor’s (and soon to be successor’s, for presence turns constantly towards absence unless it is attended to actively). That is how the being-as-singular, how immateriality, enters metaphysical thinking, and not from any bona fide witness of an ontotheological reality. There has never been such witness outside of religious thought. But if Western metaphysics is to avoid appeals to an immaterial authority it must find for multiplicity. And to be consistent it must, in turn, acknowledge that objective reality cannot be known or experienced – not even in the moment of ecstatic revelation and annihilation of the self that I mentioned at the beginning. Everything is perception. Of what is, we can know and experience only the reality of our own being in the world, and that reality is informed and coloured by, and situated within, the reality of Man’s being and of the being of kinds of men – Heidegger’s Mitwelt, as far as it goes. So this is my principal argument for multiplicity. There are certainly others. One is very familiar to readers of this blog. Those who’ve read David Stove’s What is Wrong with Our Thoughts? will recognise mumbo-jumbo, passing itself off as profundity. What is Guessedworker’s bottom line—why Heidegger and “being”? Simple: he thinks white people are too concerned with what they do, rather than what they be. They think more about the minutiae of their family lives, work and hedonism, and not enough about their ancestry and their race. Stated clearly, this meets with the “So what?” objection, so Guessedworker must clothe his idea in pseudo-philosophy. 
The cure is to ask precisely what he means by "Nature's cumulative constant" and the "unconcealment of being", or to suggest he recap his essay with the word "being" tabooed. No-one can reduce everything he says to the level of quantum amplitudes, but if someone can't disassemble a few high-level statements then he is probably spewing egesta. On the majorityrights.com sidebar, nestled below "The Ontology Project", is another interesting link: "Thread Wars". This is a collection of Guessedworker's effortless skewering of luminaries such as "simon21" and "90Lew90"—inadvertent debating partners from Guardian and Daily Telegraph (never the Sun or Mirror) comment threads. If I had a little-known, Earth-shaking new idea about humanity's ultimate interests, I would want to have it critiqued by important people. I would contact the brilliant philosopher Neven Sesardic, whom I can trust to be free of PC myths. Robin Hanson is always good for a debate. And famous neuroscientist Jeff Hawkins must know a lot about human goals. But if part of me knew that my idea was actually retarded, I might stick to 90Lew90. Salterians carefully avoid clear, precise language when speaking about "EGI". Another excerpt from The ontology of the material: The ultimate interest in organic life? Why not "of"? And why not unfurl the full thesis? The predominant or "absolute" goal that every human being ought to pursue is to make sure that the genetic code of as many other human beings as possible contains small sections that are identical to small sections of his own chromosomes. Not quite so impressive, eh? Muddy expressions like "ethnic genetic interests" and "reproductive interests" disguise Salterism's absurdity. "EGI" evokes connotations, in the mind of the Salterian, like "fewer immigrants". It does not evoke, "I sacrifice everything else, in order to proliferate snippets of genetic code". When confronted by rational argument, Salterians draw strength from these pleasant emotions, and dwell not upon the real meaning of EGI. What about this: Utility is the mathematical measure of goal-satisfaction, so "avoiding the slide into utility" means "trying not to achieve one's goals". III: Advice for Salterians Just look at your gerrymandering. Because we need to keep Australia white! Because we don't really care about genes, but we do care about race. But classical Salterian theory is limited. Of course, there is a real Salterian "fallacy"—but one that underestimates, not overestimates, the genetic loss via intermarriage and that undercuts the critique analyzed here. Thus, patterns of gene frequencies is a piece of information destroyed by intermarriage independent of the number of specific alleles in the general population. Interbreeding doesn't harm genetic proliferation. Therefore, genetic interests must now incorporate patterns of gene frequencies. Yes, this means that passing on germ-line replicators is no longer the ultimate interest. But miscegenation is bad. Gray's linkage of Salter and rape, which is even more grotesque than David B's linkage to Huntington's, is stupidity bordering on mendacity. Did Gray finish Salter's book? Did he read the last one-third, the part on ethics? Salter favors a "mixed ethic", in which concern about one's genetic interests is not only balanced by reciprocity concerning the interests of others, but also by concern for individual rights. Salterism is defensive, a balance of relative interests and rights, and in practice it boils down to majority rights and ethno-states.
Salterism does not “clearly” imply a promotion of rape, and Gray should be ashamed of himself to even obliquely suggest otherwise. However, given the paragraph about his “beautiful” mulatto grandnieces, I assume that a sense of shame is not one of Gray’s strong suits. Rape could easily further a Salterian’s ultimate interests. Therefore, we are adaptive utilitarians, not gene-maximisers. In practice, this means majority rights and ethno-states. Racialists who know nothing of adaptive utilitarianism also share this goal—what a coincidence. Mention ontology to even an educated fellow nationalist, and certainly to an activist, and he will very likely gaze unawares at the ground beneath his feet. After a few seconds the void of understanding will fill with something very like scorn. He will level his eyes at you and deliver himself of the opinion that that sort of thing has nothing to do with the world of struggle in Nature and politics that he knows and sees everywhere – the struggle which European Man is so demonstrably losing. Too detached from reality, too self-absorbing, he will say. Too many dancing angels. And then, to set you right, and quite without irony, he will remind you of the great existential plaint, the crisis of the crisis. While you are engaged in all this intellectual vanity, he will say, we Europeans are growing older and weaker by the day, our lands more lost to us, our family lines more negroidalised, the political class more traitorous (if that is possible), the bankers and corporate scum more rapacious, the Jews more audacious. You will see how the collective angst, unspoken by his people, unacknowledged amid the culture of greed and celebrity and political hype, is torrenting through him, defining him politically, driving him. What do we do? Now! Today! That is the question, de-Barded and anti-intellectual though it is. That is what he will want you, somehow, to answer. You will nod, and search for a way to explain that revolutions without founding ideas cannot sustain. Salter’s ideas aren’t very persuasive. Let’s mix in some Heidegger, and see if that protects our family lines from negroidalisation. What next, a mixed ethic of stay-in-your-own-country utilitarianism? If there is anything sensible in Salterism, it says: You there—mental sub-agent that cares about bloodlines. Why not generalise yourself to the ethnic level of kinship? This has little effect. The sub-agent cares little about genetic kinship beyond the extended family, and that’s difficult to change. Salterians are similar, except their interest drops off at the limit of humanity (or more probably, Europe). Gene-maximisation also conflicts with more powerful sub-agents, like empathy and the moral sense. An average person might have lots of children, instead of spending all his time being charitable; but empathy discourages him from rape or genocide, and his dignity discourages him from spreading his genes via sperm donation. Even Guessedworker et al wouldn’t really give the Salterian sub-agent free reign. It is genocidal, and a nasty piece of work even in domestic matters. EGI, whether vital interest or mere subjective appeal, is hopeless. I advise racialists to give up these far-fetched ideas. Instead, they should campaign for a more libertarian government. This would not allow them to outlaw miscegenation, but it would permit them to discriminate more in their private lives—which, although they may not realise it, is enough to sate their ethnocentric impulses. 
This entry was posted in Race on December 22, 2012.

On Death

This world, our world, is a Death world. Life is predicated on Death: it walks a fraying tightrope over the abyss of non-existence and nothingness. At its core, life is nothing but infinitely small pockets of temporary resistance, subsumed in a swirling vortex of entropy and Death. Death is the absence of life. The hollowing out of life. The stripping away of life to the bone. It is the bottomless abyss: an undifferentiated non-place, devoid of time and space. Death is sameness, the end of the illusion of difference which marks life. Death is the cannibalisation of the borders and boundaries that separate things, giving them form and existence. In Death all is one. But all is nothing. The flesh is a fallen, rotten, putrid thing. It is not your friend. Stinking sclerosis. Entropy embodied. The will entombed. It fails. Stranded in the noumenal without a guide. Not a flicker of light. Nor a whisper of sound.

Ebola, Effective altruism and state sponsored death

Last week we looked at how to use White Hate Magic to assist the spread of Ebola. Out of a sense of fairness, universal harmony and balance I thought it would be fun this week to look at how to stop the spread of the virus instead. Of course the plan outlined below isn't going to happen, but it's always fun to imagine. And, well, you never know, I mean things could get pretty desperate… In a recent post JM Greer posed the following question: He doesn't provide the answer. But taking the current world population to be 7.125 billion, the date in question is Monday the 14th of September 2015. The End of the World: it's sooner than you think. Now, let's imagine that you wanted to avoid this scenario. What measures would need to be taken to stop the spread of Ebola across the globe? Right now it is still relatively restricted to an area of west Africa. This will change as more people become infected in larger African cities and when greater numbers of people start migrating and begin to seek asylum elsewhere. The potential of this to become truly apocalyptic is only too real. Unless something like the humanitarian plan outlined below is followed, the death toll is likely to be hundreds of millions, if not more.

1) Impose a military curfew lasting no less than 30 days on all affected areas. If after 30 days there is still evidence of infection in the area the curfew will be extended for a further 30 days. Food and water will be delivered to families affected by the curfew twice a week. Anyone who breaks curfew for any reason will be shot on sight.

2) Anyone infected within a family home will be taken to a 'treatment centre'. The only effective treatment for Ebola is death. Any living spaces in which someone suffered the disease will be incinerated and razed to the ground, alongside any possessions contained within. Looting from infected properties will be punished by death without trial.

3) All international travel will be heavily restricted for the duration of the outbreak. Something akin to the Bitcoin blockchain will be created, mapping the movement of individuals across international borders, in order to restrict it. Anyone found to carry the infection will be taken to a 'treatment centre'.

4) As Ebola spreads internationally the same curfew process is to be adopted by each affected area. First world countries are not to assume that their healthcare system and facilities are sufficiently advanced to treat Ebola patients. The only effective treatment for Ebola is death.
5) Anyone delivering food and water or ensuring the curfew is to be provided with sufficient protective clothing and well remunerated. Anyone who becomes infected will be taken to a 'treatment centre'.

6) The bodies of all victims of Ebola are to be incinerated, irrespective of local custom or religious belief.

7) In all non-affected areas life is to proceed as normal. Anyone violently protesting the treatment plan will be incarcerated. Public protests against the treatment plan will be brutalised.

8) Border control is to be upheld vigorously throughout the outbreak. No one is to leave an infected area.

And that's it. If implemented, a plan such as the one outlined above, albeit with a little more flesh applied to the bones, could potentially save millions of lives. At the moment the death rate in Africa from Ebola is 70 – 90%. As the epidemic turns into a pandemic, and the capacity of the authorities to cope is stretched even further beyond its limit, this will quickly begin to approach 100%. In such a situation only death can cure death. If each infected person on average infects another two people, and each infected person is close to 100% certain to die anyway, the moral imperative is to 'treat' them before they infect anyone else. It's effective altruism as civilisation-preserving genocide.

The Question of Sovereignty: Part 1

But is it? Moldbug again: Questions, questions, questions…
Dieter Schuch J. W. Goethe Universität, Frankfurt a. M., DE Is Quantum Mechanics Emerging from a Nonlinear Theory? Theoretical physics seems to be in a kind of schizophrenic state. Many phenomena in the observable macroscopic world obey nonlinear evolution equations, whereas the microscopic world is governed by quantum mechanics, a fundamental theory that is supposedly linear. In order to combine these two worlds in a common formalism, at least one of them must sacrifice one of its dogmas. I claim that linearity in quantum mechanics is not as essential as it apparently seems since quantum mechanics can be reformulated in terms of nonlinear Riccati equations. In a first step, it will be shown where complex Riccati equations appear in time-dependent quantum mechanics and how they can be treated. This also leads to comparisons with susy quantum mechanics, dynamical invariants and generalized creation/annihilation operators with corresponding coherent states. Furthermore, the time-independent Schrödinger equation can also be rewritten as complex Riccati equation. Finally, it will be shown that (real and complex) Riccati equations also appear in many other fields of physics, from nonlinear dynamics via statistical thermodynamics to cosmology.
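(The abstract asserts that the time-independent Schrödinger equation can be rewritten as a complex Riccati equation. One standard route, given here as a minimal sketch and not necessarily the exact formulation used in the talk, is the logarithmic-derivative substitution:)

\[ -\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x) = E\psi(x), \qquad y(x) \equiv \frac{\psi'(x)}{\psi(x)} \;\;\Longrightarrow\;\; y'(x) + y(x)^2 + \frac{2m}{\hbar^2}\bigl(E - V(x)\bigr) = 0 . \]

The equation for y is first order but nonlinear because of the y^2 term, i.e. a Riccati equation; since the wave function is in general complex, y is complex as well, which is the sense in which the linear problem acquires a nonlinear Riccati form.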
Saturday, January 23, 2010

Verlinde's thermal origin of gravitation from TGD point of view

Eric Verlinde has posted an interesting eprint titled On the Origin of Gravity and the Laws of Newton to arXiv. Lubos has commented on the article here and also here. What Verlinde heuristically derives is Newton's F=ma and the gravitational force F = GMm/R2 from thermodynamical considerations plus something else which I try to clarify (at least to myself!) in the following.

1. Verlinde's argument for F=ma

The idea is to deduce Newton's F=ma and the gravitational force from thermodynamics by assuming that space-time emerges in some sense. There are however various assumptions involved which more or less imply that both special and general relativity have been fed in besides quantum theory and thermodynamics.

1. Time translation invariance is required in order to have the notions of conserved energy and thermodynamics. This assumption requires not only time but also symmetry with respect to time translations. This is quite a powerful assumption, and time translation symmetry does not hold true in General Relativity - this was actually the basic motivation for quantum TGD.

2. Holography is assumed: information is stored on surfaces, or screens, and discretization is assumed. Again this means in practice the assumption of space-time since otherwise the notion of holography does not make sense. One could of course say that one considers the situation in the already emerged region of space-time but this idea does not look very convincing to me.

Comment: In TGD framework holography is an essential piece of theory: light-like 3-surfaces code for the physics and space-time sheets are analogous to Bohr orbits fixed by the light-like 3-surfaces defining the generalized Feynman diagrams.

3. The first law of thermodynamics in the form dE = TdS - Fdx. Here F denotes a generalized force and x some coordinate variable. In usual thermodynamics pressure P would appear in the role of F and volume V in the role of x. Also chemical potential and particle number form a similar pair. If energy is conserved for the motion one has Fdx = TdS. This equation is basic thermodynamics and is used to deduce Newton's equations.

After this some quantum tricks - a rather standard game with the Uncertainty Principle and quantization when nothing concrete is available - are needed to obtain F=ma, which as such involves neither hbar nor the Boltzmann constant kB. What is needed are thermal expressions for acceleration and force; identifying these one obtains F=ma.

1. ΔS = 2π kB states that entropy is quantized, with 2π kB appearing as the unit. log(2) would be a more natural unit if the bit is the unit of information.

2. The identification Δx = hbar/mc involves the Uncertainty Principle for momentum and position. The presence of the light velocity c in the formula means that Minkowski space and Special Relativity creep in. At this stage I would not speak about emergence of space-time anymore. This gives T = FΔx/ΔS = F×hbar/[2π×mc×kB]. F has thus been expressed in terms of thermal parameters and mass.

3. Next one feeds in something from General Relativity to obtain an expression for the acceleration in terms of thermal parameters. The Unruh effect means that in accelerated motion a system measures a temperature proportional to its acceleration: kBT = hbar a/(2π c). This temperature is extremely low for accelerations encountered in everyday life - something like 4×10^-20 K for free fall near Earth's surface.
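(To make the bookkeeping explicit, here is a minimal sketch of how the three ingredients above combine, with the factors of c written out; this is just the standard entropic-gravity algebra that the next sentence summarizes, nothing TGD-specific.)

\[ F\,\Delta x = T\,\Delta S, \qquad \Delta x = \frac{\hbar}{mc}, \qquad \Delta S = 2\pi k_B \;\;\Longrightarrow\;\; k_B T = \frac{F\hbar}{2\pi m c}, \]
\[ k_B T_{\mathrm{Unruh}} = \frac{\hbar a}{2\pi c} \;\;\Longrightarrow\;\; \frac{F\hbar}{2\pi m c} = \frac{\hbar a}{2\pi c} \;\;\Longrightarrow\;\; F = ma . \]

For a = g ≈ 9.8 m/s2 the Unruh formula gives a temperature of the order of 4×10^-20 K, the figure quoted above. The same bookkeeping, with N = Ac3/(G hbar), A = 4πR2 and E = Mc2 = (1/2)N kB T, reproduces F = GMm/R2 in the next section.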
Using this expression for T in the previous equation one obtains the desired F=ma, which would thus have a thermodynamical interpretation. At this stage I have even less motivation for talking about emergence of space-time. Essentially the basic conceptual framework of Special and General Relativities, of wave mechanics and of thermodynamics is introduced by the formulas containing the basic parameters involved.

2. Verlinde's argument for F = GMm/R2

The next challenge is to derive the gravitational force from thermodynamic considerations. Now holography with a very specially chosen screen is needed.

Comment: In TGD framework light-like 3-surfaces (or equivalently their space-like duals) represent the holographic screens and in principle there is a slicing of the space-time surface by equivalent screens. Also Verlinde introduces a slicing of space-time surfaces by holographic screens identified as surfaces for which the gravitational potential is constant. Also I have considered this kind of identification.

1. The number of bits for the information represented on the holographic screen is assumed to be proportional to the area: N = Ac3/(G hbar). This means bringing in black hole thermodynamics and general relativity since the notion of area requires geometry.

Comment: In TGD framework the counterpart for the finite number of bits is finite measurement resolution, meaning that the 2-dimensional partonic surface is effectively replaced with a set of points carrying fermion or antifermion number or possibly a purely bosonic symmetry generator. The orbits of these points define a braid giving a connection with topological QFTs for knots, links and braids and also with topological quantum computation.

2. It is assumed that A = 4π R2, where R is the distance between the masses. This means a very special choice of the holographic screen.

Comment: In TGD framework the counterpart of the area would be the symplectic area of partonic 2-surfaces. This is invariant under symplectic transformations of the light-cone boundary. These "partonic" 2-surfaces can have macroscopic size and the counterpart of the black hole horizon is one example of this kind of surface. Anyonic phases are a second example of a phase assigned with a macroscopic partonic 2-surface.

3. Special relativity is brought in via the bomb formula E = Mc2. One also introduces another expression for the rest energy: thermodynamics gives for the non-relativistic thermal energy the expression E = (1/2)N kBT. This thermal energy is identified with the rest mass. This identification looks to me completely ad hoc and I think that some kind of holographic duality is assumed to justify it. The interpretation is that the points/bits on the holographic screen behave as particles in thermodynamical equilibrium and represent the mass inside the spherical screen. What are these particles on the screen? Do they correspond to gravitational flux?

Comment: In TGD framework p-adic thermodynamics replaces the Higgs mechanism and identifies the particle's mass squared as a thermal conformal weight. In this sense inertia has a thermal origin in TGD framework. Gravitational flux is mediated by flux tubes with a gigantic value of the gravitational Planck constant and the intersections of the flux tubes with the sphere could be the TGD counterparts for the points of the screen. These 2-D intersections of flux tubes should be in thermal equilibrium at the Unruh temperature. The light-like 3-surfaces indeed contain the particles so that the matter at this surface represents the system.
Since all light-like 3-surfaces in the slicing are equivalent, one can choose the representation of the system rather freely.

4. Eliminating the rest energy E from these two formulas one obtains N kB T = 2Mc2, and using the expression for N in terms of the area, identified as that of a sphere with radius equal to the distance R between the two masses, one obtains the standard form of the gravitational force.

It is difficult to say whether the outcome is something genuinely new or just something resulting unavoidably from feeding in basic formulas from general thermodynamics, special relativity, and general relativity and using the holography principle in a highly questionable and ad hoc manner.

3. In TGD quantum classical correspondence predicts that thermodynamics has space-time correlates

From TGD point of view entropic gravity is a misconception. On the basis of quantum classical correspondence - the basic guiding principle of quantum TGD - one expects that all quantal notions have space-time correlates. If thermodynamics is a genuine part of quantum theory, also temperature and entropy should have space-time correlates and the analog of Verlinde's formula could exist. Even more, the generalization of this formula is expected to make sense for all interactions. Zero energy ontology makes thermodynamics an integral part of quantum theory.

1. In zero energy ontology quantum states become zero energy states consisting of pairs of positive and negative energy states with opposite conserved quantum numbers, interpreted in the usual ontology as physical events. These states are located at opposite light-like boundaries of a causal diamond (CD) defined as the intersection of future and past directed light-cones. There is a fractal hierarchy of them. The M-matrix generalizing the S-matrix defines time-like entanglement coefficients between positive and negative energy states. The M-matrix is essentially a "complex" square root of the density matrix, expressible as the product of a positive square root of a diagonalized density matrix and a unitary S-matrix. Thermodynamics reduces to quantum physics and should have a correlate at the level of space-time geometry. The failure of classical determinism in the standard sense of the word makes this possible in quantum TGD (special properties of Kähler action (Maxwell action for the induced Kähler form of CP2) due to its vacuum degeneracy analogous to gauge degeneracy). Zero energy ontology also allows one to speak about coherent states of bosons, say of Cooper pairs of fermions, without problems with conservation laws, and the undeniable existence of these states supports zero energy ontology.

2. Quantum classical correspondence is a very strong requirement. For instance, it requires also that electrons traveling via several routes in a double slit experiment have classical correlates. They have. The light-like 3-surfaces describing electrons can branch and the induced spinor fields at them "branch" also and interfere again. The same branching occurs also for photons, so that electrodynamics has a hydrodynamical aspect too, as emphasized in a recent empirical report about knotted light beams. This picture explains the findings of Afshar challenging the Copenhagen interpretation. These diagrams could be seen as generalizations of stringy diagrams but do not describe particle decays in TGD framework. In TGD framework stringy diagrams are replaced with a direct generalization of Feynman diagrams in which the 3-D lightlike lines meet along 2-D partonic surfaces at their ends.
The mathematical description of vertices becomes much simpler since the 2-D manifolds describing vertices are not singular unlike the 1-D manifolds associated with string diagrams ("eyeglass" in fusion of closed strings). 3. If entropy has a space-time correlate then also first and second law should have such and Verlinde's argument that gravitational force attraction follows from first law assuming energy correlation might identify this correlate. This of course applies only to the classical gravitation. Also other classical forces should allow analogous interpretation as space-time correlates for something quantal. 4. The simplest identification of thermodynamical correlates in TGD framework The first questions that pop up are following. Inertial mass emerges from p-adic thermodynamics as thermal conformal weight. Could the first law for p-adic thermodynamics, which allows to calculate particle masses in terms of thermal conformal weights, allow to deduce also other classical forces? One could think that by adding to the Hamiltonian defining partition function chemical potential terms characterizing charge conservation it might be possible to obtain also other forces. In fact, the situation might be much simpler. The basic structure of quantum TGD allows a very natural thermodynamical interpretation. 1. The basic structure of quantum TGD suggests a thermodynamic interpretation. The basic observation is that the vacuum functional identified as the exponent of Kähler function is analogous to a square root of partition function and Kähler coupling strength is analogous to critical temperature. Kähler function identified as Kähler action for a preferred extremal appears in the role of Hamiltonian. Preferred extremal property realizes holography identifying space-time surface as analog of Bohr orbit. One can interpret the exponent of Kähler function as the density of states in the world of classical worlds so that Kähler function would be analogous to entropy density. Ensemble entropy is average of Kähler function involving functional integral over the world of classical worlds. This exponent is the counterpart for the quantity Ω appearing in Verlinde's basic formula. 2. The addition of a measurement interaction term to the modified Dirac action gives rise to a coupling to conserved charges. Vacuum functional is identified as Dirac determinant and this addition is visible as an addition of an interaction term to Kähler function. The interaction gives rise to forces coupling to various charges at classical level for quantum states with fixed quantum numbers for positive energy part of the state. These terms are analogous to chemical potential terms in thermodynamics fixing the average values of various charges or particle numbers. In ordinary non-relativistic thermodynamics energy is in a special role. In the recent case there is a complete quantum number democracy very natural in a framework with coordinate invariance and with four-momentum assigned with the isometries of the 8-D imbedding space. In Verlinde's formula there is exponential factor exp(-E/T- Fx) analogous to the measurement interaction term. In TGD however conserved charges multiplied by chemical potentials defining generalized forces appear in the exponent. 3. This gives an analog of thermodynamics in the world of classical worlds (WCW) for fixed values of quantum numbers of the positive energy part of state. For zero energy states one however has also additional thermodynamics- or rather its square root. 
This thermodynamics is for the conserved quantum numbers whose averages are fixed. For general zero energy states one has sum over state pairs labelled by momenta and various other quantum numbers labelling the positive energy part of the state. The coefficients of the conserved quantities of the measurement interaction term linear in conserved quantum numbers define the analogs of temperature and various chemical potentials. The field equations defined by Kähler function and chemical potential terms have thermodynamical interpretation and give coupling to conserved charges and also to their thermal averages. What is important is that temperature and various chemical potentials assigned to positive and negative energy parts of the state allow a complete geometrization in a general coordinate invariant manner and allow explicit expressions in terms of functions expressible in terms of the induced geometry. 4. The explicit expressions must be deduced from Dirac determinant defining exponent of Kähler function plus measurement interaction term, in which the conserved isometry charges of Cartan algebra (necessarily!) appearing in the exponent are contracted with the analogs of chemical potentials. One make two rather detailed educated guesses for the chemical potentials. For the modified Dirac action the measurement interaction term is 4-dimensional and essentially unique. For the Kähler action one can imagine two candidates for the measurement interaction term. For the first option the term is 4-dimensional and for the second one 3-dimensional. 5. Some details related to the measurement interaction term As noticed, one can imagine two options for the measurement interaction term defining the chemical potentials in terms of the space-time geometry. 1. For both options the M4 part of the interaction term is proportional to n(M4)G/R and CP2 part to a dimensionless constant n(CP2), and the condition that there is no dependence of hbar excludes the dependence on the dimensionless constant Ghbar/R2. 2. One can consider two different forms of the measurement interaction part in Kähler function. For the first option the conserved Kähler current replaces fermion current in the modified Dirac action and defines a 4-dimensional interaction term highly analogous to that assigned with the modified Dirac action. The source term induced to the field equations corresponds to the variation of [(G/R)× n(M4)pq,A gAB(M4)jA,α +n(CP2)Qq,A gABJA,α(CP2)] Jα . Here Jα is Kähler current. 3. For the second option the measurement interaction term in Kähler action is sum over contractions of quantum Cartan charges with corresponding classical Noether charges giving the sum of the term (G/R)× n(M4)pq,A pcl,A +n(CP2)Qq,A Qcl,A from both ends of the space-time sheet. For a general space-time sheet the classical charges are different at its ends so that the variation gives non-trivial boundary conditions equating the normal (time-like) component of the canonical momentum current with the contraction of the variation of classical Noether charges contracted with quantum charges. By the extremal property the measurement interaction terms at the ends of the space-time sheet cancel each other so that the effect on Kähler function is only via the boundary conditions in accordance with zero energy ontology. For this option the thermodynamics for conserved charges is visible at space-time level only via the appearence of the average quantal charges and universal chemical potentials. 4. The vanishing of Kähler gauge current resp. 
classical Noether charges for the first resp. second option would suggest an interpretation in terms of infinite temperature limit. The fact that momenta and color charges are in completely symmetric position suggests however the vanishing of chemical potentials. One can in fact fix the value of the temperature to say T= R/G without loss of information and code thermodynamics in terms of the chemical potentials alone. The vanishing of the measurement interaction term occurs for the vacuum extremals. For CP2 type vacuum extemals with Euclidian signature of the induced metric interpretation in terms of vanishing chemical potentials is more natural. For vacuum extremals with Minkowskian signature of the induced metric fluctuations and consequently classical non-determinism are maximal so that the interpretation in terms of high temperature phase cannot be excluded. One must however notice that CP2 projection for vacuum extremals is 2-dimensional whereas high temperature limit would suggest 4-D projection so that the interpretation in terms of vanishing chemical potentials is more natural also now. To sum up, TGD suggests two thermodynamical interpretations. p-Adic thermodynamics gives inertial mass squared as thermal conformal weight and also the basic formulation of quantum TGD allows thermodynamical interpretation. The thermodynamical structure of quantum TGD has of course been guiding principle for two decades. In particular, quantum criticality as the counterpart of thermal criticality has been extremely useful guide line and led to a breakthrough in the understanding of the modified Dirac equation during the last year. Also p-adic thermodynamics has been in the scene for more than 15 years and makes TGD a theory able to make precise quantitative predictions. Some conclusions drawn from Verlinde's argument is that gravitation is entropic interaction, that gravitons do not exist, and that string models and theories introducing higher-dimensional space-time are a failure. TGD view is different. Only a generalization of string model allowing to realize space-time as surface is needed and this requires fixed 8-D imbedding space. Gravitons also exist and only classical gravitation as well as other classical interactions code for thermodynamical information by quantum classical correspondence. In any case, it is encouraging that also colleagues might be finally beginning to get on the right track although the path from Verlinde's arguments to quantum TGD as it is now will be desperately long and tortuous if colleagues continually refuse to receive the helping hand. For more details see the brief pdf file or the chapter Does the Modified Dirac Equation Define the Fundamental Action Principle? of "Quantum TGD as Infinite-dimensional Spinor Geometry". Monday, January 18, 2010 Twenty four questions Lubos Motl provided his own answer to Sean Carroll's 24 questions. Lubos answered these questions as a super string fanatic. In the following I will do the same as a TGD fanatic;-). 1. What breaks electroweak symmetry? Lubos gives the text book answer: the electroweak symmetry is broken by the Higgs field's vacuum expectation value. TGD allows Higgs but reduces the description of the symmetry breaking to much deeper level. CP2 geometry breaks the electroweak symmetry: for instance, color partial waves for different weak isospin states of imbedding space spinors have hugely different masses. 
The point is that electroweak gauge group is the holonomy group of spinor connection and not a symmetry group unlike color group, which acts as isometries. For physical states are massless before p-adic thermal massivation due to the compensation of conformal weights of various operators. The most plausible option is that both the non-half integer part of vacuum conformal weight for particle and Higgs expectation are expressible in terms of the same parameter which corresponds to a generalized eigenvalue of the modified Dirac operator. Higgs expectation-massivation relation is transformed from causation to correlation. 2. What is the ultraviolet extrapolation of the Standard Model? As Lubos violently explains that "UV extrapolation" in the above statement should be replaced with "UV completion". I would replace it with "the unified theory of fundamental interactions". Lubos of course answers as a proponent of string theory. The problem is that there is practically infinite number of completions so that the predictivity is lost. TGD geometrizes the symmetries of the standard model and reduces them to the symmetries of classical number fields. Also octonionic infinite primes, one of the most exotic notions inspired by TGD, code standard model symmetries. The most general formulation of the World of Classical Worlds is as the space of hyper-quaternionic of co-hyper-quaternionic subalgebras of the local hyper-octonionic Clifford algebra of M8 or equivalent M4× CP2. The answers by both Lubos and me involve also supersymmetry but in different sense. In TGD framework the oscillator operators of the induced spinor fields define the analog of the space-time SUSY so that the algebra of second quantization is replaced with N=∞ SUSY. This requires a modification of SUSY formalism but N=1 SUSY associated with the right handed coveriantly constant neutrinos emerges as preferred sub-SUSY and counterpart of N=1 SUSY. The construction of infinite primes involves also supersymmetry. 3. Why is there a large hierarchy between the Planck scale, the weak scale, and the vacuum energy? These are the two most famous hierarchy problems of current physics as Lubos notices. In TGD framework Planck scale is replaced with CP2 length scale, which is roughly by a factor 104 longer than Planck length scale. Instead of Planck length it might be more appropriate to talk about gravitational constant which follows as a prediction in TGD framework. p-Adic length scale hierarchy is needed to understand the hierarchy of mass scales. The inverse of the mass squared scale comes as primes which are very near to octaves of a fundamental scale. Powers of two near Mersenne primes or Gaussian Mersennes are favored and this predicts a scaled up copy of hadron physics, which should become visible at LHC. Quite generally, unlimited number of scaled versions of standard model physics are in principle possible. The vacuum energy density is the basic problem of super string approach. How desperate the situation is is clear from the fact that rhetoric tricks such as anthropic principle are considered seriously. Empirical findings- for some reason neglected by colleagues - suggests that cosmological constant depends on time. In TGD framework the cosmological constant is predicted to depend on the p-adic length scale of the space-time sheet and behaves roughly like 1/a2, where a is cosmic time identified as light-cone property time. Actually the time parameter a is replaced by a corresponding p-adic length scale. 
The recent value is predicted correctly under natural assumptions. What dark energy is is a second question. TGD suggests the identification as a matter at space-time sheets mediating gravitational interaction having gigantic values of Planck constant implying extremely long Compton lengths for elementary particles. This guarantees that the energy density is constant in excellent approximation. If gravitational space-time sheets correspond to dark magnetic flux tubes- expanded cosmic strings- the mysterious negative pressure can be identified classically in terms of magnetic tension. If one takes seriously the correlation of the intelligence of conscious entities with the value o Planck constant, these gravitational space-time sheets can be God like entities. 4. How do strongly-interacting degrees of freedom resolve into weakly-interacting ones? Lubos regards this question as strange and expresses this using colorful rhetoric. Maybe Carroll refers to QCD and hadronization. M8-M4× CP2 duality relates low energy and higher energy hadron physics to each other in TGD framework and corresponds group theoretically to SU(3)-SO(4) duality, where SO(4) is the well-known strong isospin symmetry of low energy hadron physics. Or maybe Carroll talks about the technical problem of calculating the behavior of strongly interacting systems. Nature might have solved the latter problem by a phase transition increasing Planck constant so that perturbation theory based on larger value of Planck constant works. The particle spectrum however changes and system becomes anyonic in general. 5. Is there a pattern/explanation behind the family structure and parameters of the Standard Model? I can only echo Lubos: of course there is. In super string models the large number of explanations tells that the real explanation is lacking. In TGD framework fermion families correspond to various genera for partonic 2-surfaces (genus tells the number of handles attached to sphere to get the 2-dimensional topology). There is an infinite number of genera but the 3 lowest genera are mathematically very special (hyper-ellipticity as a universal property), which makes them excellent candidates for light fermion families. The successful predictions for masses using p-adic thermodynamics and relying strongly on the genus dependent contribution from conformal moduli supports the explanation. Bosons correspond to wormhole contacts and are labeled by pairs of general implying a dynamical SU(3) symmetry with ordinary bosons identified as SU(3) singlets. SU(3) octet bosons perhaps making themselves visible at LHC are predicted and serve as a killer test. The symmetries of standard model reduce to the geometry of CP2 having a purely number theoretical interpretation in terms of the hyper-octonionic structure. Number theory fixes through associativity condition the dynamics of space-surfaces completely (hyper-quaternionicity or its co-property in appropriate sense). 6. What is the phenomenology of the dark sector? Lubos sees the dark matter as something relatively uninteresting. Just some exotic weakly acting particles. How incredibly blind a theorist accepting 11-D space-time and landscape having absolutely no empirical support can be when it comes to actual experimental facts! In TGD framework dark matter means a revolution in the world view. 
Its description relies on the hierarchy of Planck constants requiring a generalization of the 8-D imbedding space M4 × CP2 to a book like structure with pages partially characterized by the value of Planck constant. The most fascinating implications are in biology. Also the implications for our view about the nature of consciousness and our position in World Order are profound. 7. What symmetries appear in useful descriptions of nature? As Lubos says, one must be careful what types of symmetries we are talking about. As Lubos says "Only global unbroken symmetries are "really objective" features of the reality. It's very likely that we have found the full list and it includes the CPT-symmetry, Poincare symmetry (including Lorentz, translational, and rotational symmetries), and the U(1) from the conservation of the electric charge. By adding color symmetry and separate baryuon and lepton conservation one obtains the symmetries of quantum TGD: this prediction follows from number theoretical vision alone. Lubos mentions dualities relating descriptions based on different symmetries. In TGD M8-M42 duality manifests as the dual descriptions of hadrons using low energy hadron phenomenology (SO(4))and parton picture at high energies (color SU(3)). There are good reasons to believe that TGD Universe is able to emulate almost any gauge theory for which gauge group is simply laced Lie group and stringy system (Mc-Kay correspondence, inclusions of hyper-finite factors and the book like structure of generalized imbedding space). These symmetries would be however engineered rather than fundamental symmetries. 8. Are there surprises at low masses/energies? Lubos believes that there are no surprises without realizing that we ourselves are the most surprising surprise. Eye is not able to see itself without a mirror. The fact is that standard physics cannot say anything really interesting about life and consciousness. p-Adic physics, hierarchy of Planck constants, zero energy ontology,.... ; I believe that all this is necessary if one really wants to understand living matter. 9. How does the observable universe evolve? Lubos believes in standard cosmology described by General Relativity as such. TGD predicts quantum version of standard cosmology. Smooth cosmological evolution is replaced by a sequence of rapid expansion periods serving as space-time correlates for quantum jumps increasing Planck constant for appropriate space-time sheets. This applies in all length scales and one especially fascinating application is to the evolution of Earth itself. Expanding Earth hypothesis finds a physical justification and one ends up to an amazingly simple and predictive vision about pre-Cambrian and Cambrian periods: this includes both meteorology, geology, and biology. Zero energy ontology strongly suggests that the proper quantum description is in terms of the moduli space for causal diamonds (CDs identified as intersections of future and past light-cones). The entire future light-cone labeling the "upper" tips of CD and analogous to Robertson-Walker cosmology is replaced with a discrete set of points. In particular, the values of cosmic time come as octaves of basic scale for a given value of Planck constant. The spectrum of planck constants means that all rational multiples of CP2 time scale are in principle possible. Cosmic evolution as endless re-creation of the Universe- can be seen as the emergence of CDs with larger and larger size. 10. How does gravity work on macroscopic scales? 
General Relativity is part of the description but zero energy ontology and hierarchy of Planck constants bring in new elements. The gigantic values of gravitational Planck constant make possible astroscopic quantum coherence for the dark matter at magnetic flux tubes mediating gravitational interaction and explain dark energy. Quantum classical correspondence suggests that the exchanges of virtual particles has classical description allowed by Einstein's tensor. In the case of planetary system a possible manifestation is the observation of Grusenick that a Michelson interferometer rotating horizontal plane produces constant interference pattern but in a vertical plane the interference pattern varies during rotation. If real this find is revolutionary. It might also directly relate also to the finding that the measured values of gravitational constant varies within 1 per cent. There has been no reaction from academic circles. The assumption that gravitation in long length scales has been understood more or less completely is the basic dogma of string theorists. This despite the fact that the list of anomalies and intriguing regularities is really long. It is much more rewarding to impress colleagues with long and complex calculations than using the professional lifetime to a risky attempt to solve a real problem. 11. What is the topology and geometry of space-time and dynamical degrees of freedom on small scales? In TGD framework "on small scales" can be dropped from the question. Many-sheeted space-time, hierarchy of Planck constants, p-adic space-time sheets serving as correlates of cognition and intentionality, zero energy ontology... All this means a dramatic generalization of the view about space-time in all length scales and a profoundly new way to interpret what we observe. If TGD is correct we really "see" the dark matter in biology and we really "see" p-adic physics via its interaction giving rise to effective p-adic topology of real space-time sheets leading to to extremely successful predictions for elementary particle masses. Quantum group enthusiasts believe that space-time time becomes non-commutative in Planckian length scales. Some theoreticians believe that some kind of Planckian discreteness emerges. In TGD framework quantum groups emerge as a natural part of description in terms of a finite measurement resolution and in all length scales. Discretization appears as a space-time correlate for a finite measurement resolution but not as an actual discreteness. The finite resolution of cognition and sensory perception implies also an apparent discreteness. Also the hierarchy of infinite primes suggests description in terms of hierarchy of discrete structures. At fundamental level everything is however continuous- in real or in p-adic sense in accordance with the generalization of number concept involving both fusion of real and p-adic number fields to a larger super structure and providing single space-time point with infinitely rich number theoretic anatomy. The talk about infinite primes (infinite only in real sense) sounds very unpractical but to my great surprise infinite primes lead to highly detailed predictions for the spectrum of states and quantum numbers. 12. How does quantum gravity work in the real world? Lubos restates the basic belief of string theorists that Einstein's equations follow at long length scale QFT limit of super string models. In TGD framework Einstein's equation hold true too at this limit but quantal aspects are also present. 
The hierarchy of Planck constants -in particular gigantic values of the gravitational Planck constant at dark magnetic flux tubes mediating gravitational interaction- are essential for the gravitational physics of dark matter. There are also several delicate effects such as Allais effect suggesting that the ultraconservative view of Lubos is wrong. With all respect, the builders of quantum gravity theories should really consider returning to the roots and also a serious consideration of experimental data. Otherwise they continue to produce useless formalism without any connection with the observed reality. 13. Why was the early universe hot, dense, and very smooth but not perfectly smooth? The standard answer echoed by Lubos is in terms of inflationary cosmology. In TGD framework very early cosmology is cosmic string dominated. Space-time sheets appear later (at certain proper time distance from light-cone boundary). Inflationary cosmology is replaced with a sequence of expansion periods during which the cosmology is quantum critical at appropriate space-time sheets. No scales are present and 3-space is flat. The critical cosmology, which is unique apart from a parameter telling its duration describes the situation. This is extremely powerful prediction following from the imbeddability to M4× CP2 alone. Quantum criticality implies the universality of the dynamics during expansion periods. Big Bang is replaced by a "silent whisper amplified to a Bang" since the energy density of cosmic strings behaves as 1/a2, where a denotes the proper time of light-cone. The moduli space of CDs suggests a cartesian product of M4×CP2 labeling the lower tips of CDs with its discrete version labeling the upper tips of CD. One must ask whether a CD corresponds to a counterpart of Big Bang followed eventually by a Big Crush. 14. What is beyond the observable universe? "What is beyond the universe observable to us" would be a more precise formulation. The hierarchies of Planck constants and p-adic length scales, the hierarchy of conscious entities in which we correspond to one particular relatively low lying level, the hierarchy of infinite primes mathematically similar to an infinite hierarchy of second quantizations, the infinitely complex structure of single space-time point realizing algebraic holography,.... I find myself standing at the shore of an infinitely vast sea. The fundamental symmetries are the basic elementary particle quantum numbers are universal. This by the simple requirement that the geometry of the world of classical worlds exists mathematically and has number theoretic interpretation. 15. Why is there a low-entropy boundary condition in the past but not the future? The form of the question reflects the erratic identification of the experienced time appearing in second law with the geometric time appearing as one space-time coordinate. After these 32 years this identification looks to me incredibly stupid but is made by most of colleagues despite the that the fact that these times are completely different. Irreversibility contra reversibility, only the recent moment and past contra entire eternity, etc... Here only consciousness theory could help but the patient stubbornly refuses to receive the medication. Lubos however intuitively realizes that future and past are not in symmetric position in second law but is unable to ask what this means. 
He really believes that Boltzmann equations are all that is needed and never consider the possibility that these wonderful equations might make sense only under certain conditions. In TGD framework the geometric correlate for the arrow of subjective time which by definition is always the same (consciousness as sequence of quantum jumps with past identified as quantum jumps that have already occurred and contribute to conscious experience) can in principle have both directions. Phase conjugate laser beams provide a basic example about the situation in which second law applies in "wrong" direction of geometric time. Also self assembly for biological molecules can be interpreted in this manner. Hierarchy of Planck constants implies that for given CD Boltzmann's equations make sense only for smaller CDs inside it. In living matter the Boltzmannian description fails. In TGD framework the concept of low entropy boundary condition does not make sense. The subjective evolution applies the evolution of entire CD of cosmological size quantum jump by quantum jump. Boltzmann's equation apply only in scales considerably shorter than cosmological time. What is clear that one can speak only initial condition rather than boundary condition. It is however not clear whether one can speak about the evolution of entropy as a function of cosmic time if identified as a coordinate of the imbedding space. Quantum classical correspondence might allow also the mapping of subjective time evolution to a geometric time evolution with respect to cosmic time. The low entropy of very early universe could correspond to that assignable to cosmic strings. The energy density of cosmic strings goes down as 1/a2 and entropy density as 1/a so that for a given comoving volume the entropy approaches to zero. The structure of moduli space of CDs suggests that positive of the upper tip of CD relative to the lower one defines a discretized cosmic time and the space-time correlate for entropy corresponds to the growth of entropy of CD as a function of this time in an ensemble of CDs. The asummetry between tips could be seen as a correlate for the arrow of time. Carroll's idea about boundary conditions in future might make sense in the following sense. In zero energy ontology one has pairs of positive and negative energy sense and there is large temptation to think that there are two choices for the tip which corresponds to the discrete version of future light-cone. 16. Why aren't we fluctuations in de Sitter space? If I have understood correctly the emotional rhetoric of Lubos, the idea of Carroll seems to be that intelligent life is just a random fluctuation rather than a long lasting evolution. For some reason he locates this fluctuation in de Sitter space. In the standard physics framework this view is however more or less unavoidable. The colleagues should really use some of their time to learn what we understand and what we do not understand about consciousness and brain to realize that the physics as they understand really fails to describe the physics of life. Also Lubos is so fixated in his materialistic and reductionistic dogmas that he is unable to propose anything constructive. For instance, he does not ask how this undeniable evolution is possible in the framework of standard physics. In TGD framework the hierarchy of Planck constants meaning a hierarchy of macroscopic quantum phases and hierarchy of time scales of memory and intentional action leads to a coherent overall view about what life is. 
Zero energy ontology provides a concrete realization how volition is realized in accordance with the laws of physics and makes possible a continual re-creation of the Universe. 17. How do we compare probabilities for different classes of observers? I do not repeat the violent reaction of Lubos to this question. I am myself not at all sure whether I can catch the meaning of this question. Maybe I could interpret in terms of finite measurement resolution. Different measurement resolutions give rise to different M-matrices and probabilities and the comparison would require rules allowing to compare these probabilities. This comparison requires relationship between M-matrices at quantum level: probabilities are not enough. Renormalization group evolution as function of measurement resolution could provide the answer to ho compare the probabilities. 18. What rules govern the evolution of complex structures? The text book answer of Lubos is "The detailed evolution of all complex structures is governed by the microscopic laws that govern the elementary building blocks, applied to a large number of ingredients". The TGD inspired answer is based on the acceptance of fractal hierarchies: reductionistic dogma is replaced with fractality. The laws at various levels are essentially similar but every level brings something new: Mandelbrot set does not look exactly the same in the new zoom. It is not possible to reduce the behavior at higher levels that at the lowest level. The hierarchy of infinite primes characterizes this idea number theoretically and -as there are reasons to believe- also physically. The construction of hyper-octonionic infinite primes is structurally similar to a second quantization of an arithmetic quantum field theory with states labeled by primes (rational, quaternionic, or octonionic). There is infinite hierarchy of second quantization with many particle states of the previous level becoming single particle states of the new level. At each level one has infinite primes analogous to free many particle states plus primes analogous to bound states. One new element of emergence is association statistics. Permutations and associations are basic stuff of number theory and algebra. Quantum commutativity- invariance of the physical state under permutations in quantum sense leads to Fermi-, Bose- and quantum group statistics in effectively 2-D situation. Quantum associativity requires association statistics with respect to different associations of particles (replacing A(BC) with (AB)C can induce multiplication with +1,-1, or more complex phase). At space-time level the hierarchy of space-time sheets is the counterpart for this hierarchy. p-Adic length scales define one hierarchy. Also space-time sheets characterized by a large value of Planck constant emerge as systems migrate to the the pages of the Big Book partially characterized increasing value of Planck constant and at which matter is dark relative to the observer with standard value of Planck constant, which corresponds to rational number equal to 1. There is also a hierarchy of cognitive descriptions of the physical system. The higher the level in the hierarchy, the more abstract the description is and the less details it has. This is like the view of boss of big company as compared to that of a person doing something very concrete job. p-Adic physics turns upside the reductionistic hierarchy proceeding from short to long scales. What is infinitesimal p-adically is infinitely large in real sense. 
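To make the last statement concrete with a standard example (ordinary p-adic arithmetic, nothing TGD-specific): the p-adic norm of a prime power is |p^n|_p = p^{-n}, so for p = 7 the number 7^{100} is p-adically infinitesimal while being astronomically large as a real number; conversely 7^{-100} is p-adically huge. It is this inversion of "small" and "large" that the statement refers to.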
This p-adic aspects is necessarily if we want to understand intentional systems able to plan their own behavior. p-Adic effectively topology means precise long range correlations and short range chaos which indeed characterizes the behavior of living matter. One can also say that p-adic physics provides the IR completion of physics. 19. Is quantum mechanics correct? Quantum mechanics is not wrong. It however requires a profound generalization if we want to understand life. Also the gravitational anomalies and unexpected regularities at the level of planetary system suggest a generalization. Planck constant must be replaced with a hierarchy of Planck constants realized in terms of the "Big Book". Positive energy ontology must be replaced with zero energy ontology for which states correspond to physical events in standard positive energy ontology. S-matrix is replaced with its "complex square" root - M-matrix- having interpretation as square root of density matrix and making thermodynamics part of quantum theory. This generalization answers several frustrating questions raised in standard ontology. A further important modification is the introduction of the notion of finite measurement resolution realized in terms of inclusions of hyper-finite factors and having discretization as space-time correlate. 20. What happens when wave functions collapse? The answer of Lubos is from the few pages of the standard quantum mechanics text book devoted to measurement problem. "A wave function is nothing else than a tool to predict probabilities; it is no real wave. When such an object "collapses", the only thing that it means is that we learned something about the random outcomes of some measurements, so we may eliminate the possibilities that - as we know - can no longer happen. For our further predictions, we only keep the probabilities of the possibilities that can still happen." This answer brings in "we" but says nothing about what this "we" might be. This "We" remains an outsider to the physical world. Here we encounter the amazing ability of even admittedly intelligent persons to see the problem although it is staring directly at their face. In TGD framework wave function collapse is involved with quantum jump re-creating the quantum universe. Speaking about space-time correlates this means that entire space-time surface (or rather their quantum superposition) is replaced with a new one. Both geometric past and future are replaced with a new one in quantum jump. There is no conflict with deterministic field equations (in generalized sense in TGD framework) since the non-determinism relates to subjective time identified as a sequence of quantum jumps rather than with geometric time appearing at classical field equations and Schrödinger equation. Negentropy Maximization Principle stating the reduction of entanglement entropy in quantum jump is maximal implies standard quantum measurement theory. There are fascinating possibilities opened by the fact that for rational and even algebraic entanglement probabilities number theoretic analogs of Shannon entropy make sense and allow negentropic entanglement (emergence of information carrying stable quantum entangled states). 21. How do we go from the quantum Hamiltonian to a quasiclassical configuration space? A more appropriate question would be "How to go from quantum description to classical description". Hamiltonian formulism relies on on Newtonian time and is given up already in Special Relativity. 
In General Relativity, General Coordinate Invariance makes the Hamiltonian formalism even more unnatural. In zero energy ontology the basic mathematical object coding for the predictions of the theory is the M-matrix characterizing the physics inside a given CD. It decomposes into a product of the positive square root of a diagonal density matrix and a unitary S-matrix. The latter characterizes a given CD and need not have any natural representation as an exponentiation of an infinitesimal Hermitian operator - the Hamiltonian. This kind of picture is also in conflict with General Coordinate Invariance. In the p-adic context unitary evolution becomes highly questionable also for number theoretical reasons. The counterpart of the exponential function in the p-adic context does not have the properties it has in the real context, and the natural unitary operators involve roots of unity, typically requiring an algebraic extension of the p-adic numbers, and therefore have no description as unitary time evolutions. In the formalism without a Hamiltonian, observables are replaced with algebras of various symmetries. Various super-conformal symmetries make these algebras infinite-dimensional. The modified Dirac equation brings in second quantization, which reduces to an infinite-dimensional analog of a space-time SUSY algebra. How classical physics emerges from quantum theory is of course an extremely important unanswered question, although Lubos claims the opposite. This emergence has two meanings corresponding to geometric time and subjective time.
1. Consider first geometric time. In the TGD framework the space-time surface is a preferred extremal of Kähler action and analogous to a Bohr orbit. Classical physics in the geometric sense becomes an exact part of quantum physics and of the geometry of the World of Classical Worlds. This is forced by General Coordinate Invariance alone. Even more preferred space-time surfaces correspond to maxima of the Kähler function, identified as the value of the Kähler action for a preferred space-time surface. In the mathematically non-existing path integral formalism the stationary phase approximation gives something believed to be enough for classical physics in this sense.
2. Lubos talks also about decoherence as a mechanism leading to classicality. This notion applies when one speaks about subjective time. When the time scale of the observer is long compared to the time scale of the observed events (the CD of the observer is much larger than those of the observed systems, so that quantum statistical determinism applies), decoherence taking place in sub-quantum jumps guarantees that all phase information is lost and quantum mechanical interference effects are masked out. The world looks classical in the Boltzmannian sense, but only for an observer looking at the situation from a longer time scale. 22. Is physics deterministic? Determinism is not valid in the quantum universe, as Lubos states. Determinism is valid at the level of field equations. These statements are contradictory unless one realizes that there are two different times. To understand these two times and their relationship one is forced to make the observer a part of the Universe instead of an outsider, that is, to develop a quantum theory of consciousness. Amusingly, Lubos admits that non-determinism is a fact but denies that Schrödinger amplitudes, which must behave non-deterministically in standard ontology, are real. 23. How many bits are required to describe the universe? Currently around 10^100, says Lubos.
For me both the question and its answer are nonsense for the same reason as some other questions above. That people waste their time with this kind of questions shows how desperately physics needs an extension to a theory of consciousness. This is required also by neuroscience and biology. Lubos identifies this number as the entropy of the observed Universe. The notion in principle makes sense but not the identification. In TGD framework the entropy is also dependent on the resolution used. The better the measurement resolution, the larger the number of degrees of freedom, and the larger the entropy. 24. Will elementary physics ultimately be finished? The answer depends on what one means with "elementary particle" and what one means with "finished"! TGD predicts in principle infinite hierarchy of scaled versions of what we have used to call elementary particle physics corresponding to hierarchies of p-adic length scales and Planck constants. The hierarchy of infinite primes suggests a generalization of elementary particle in which many particle states of given hierarchy level (space-time sheets) become single particle states of the new level (space-time sheets topologically condensed at large space-time sheets). Same Universal mathematical description applies at all levels but always something new emerges. Therefore my answer is realistic "No". Wednesday, January 13, 2010 Saturday, January 09, 2010 Exceptional symmetries in condensed matter system? Lubos commented an interesting abstract reporting evidence for a realization of the mathematically extremely interesting exceptional Lie group E8 as symmetries of a condensed matter system known as Ising chain consisting of a chain of spins in a strong transversal magnetic field causing magnetization. At criticality for phase transition destroying the magnetization excitations appear and E8 would appear as a symmetry of these excitations. Here is the abstract. Quantum phase transitions take place between distinct phases of matter at zero temperature. Near the transition point, exotic quantum symmetries can emerge that govern the excitation spectrum of the system. A symmetry described by the E8 Lie group with a spectrum of eight particles was long predicted to appear near the critical point of an Ising chain. We realize this system experimentally by using strong transverse magnetic fields to tune the quasi–one-dimensional Ising ferromagnet CoNb2O6 (cobalt niobate) through its critical point. Spin excitations are observed to change character from pairs of kinks in the ordered phase to spin-flips in the paramagnetic phase. Just below the critical field, the spin dynamics shows a fine structure with two sharp modes at low energies, in a ratio that approaches the golden mean predicted for the first two meson particles of the E8 spectrum. Our results demonstrate the power of symmetry to describe complex quantum behaviors. The relation of the results to string theory and TGD Lubos gives a nice summary of E8, which I recommend. Unfortunately Lubos takes a completely non-critical attitude accepting the experimental evidence as a proof and also creates the impression that this as a victory of super string model. The emergent dynamical E8 symmetry is actually predicted by conformal field theory approach to 1-D critical systems alone and has nothing to with the fundamental E8×E8 symmetry of heterotic strings as Lubos actually admits. 
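As a quick sanity check of the "golden mean" quoted in the abstract (this is standard lore about Zamolodchikov's E8 spectrum, not anything specific to TGD or to the experiment): the ratio of the two lightest masses is m_2/m_1 = 2 cos(π/5), which is exactly the golden ratio.

```python
import math

# Ratio of the two lightest masses in Zamolodchikov's E8 spectrum versus the golden ratio.
m2_over_m1 = 2 * math.cos(math.pi / 5)
golden_ratio = (1 + math.sqrt(5)) / 2
print(m2_over_m1, golden_ratio)  # both print 1.618033988749895
```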
E8 symmetry is predicted to be possible by conformal symmetry characterizing 2-dimensional criticality and the Kac-Moody representation is obtained once one has 8 complex scalar fields describing excitations of a conformally invariant system. The associated Kac-Moody symmetry predicts also a presence of a large number of other excitations created by the Kac-Moody generators obtained as normal ordered exponentials of complex scalar fields and their presence in the spectrum should be shown. Of course, also string models as well as TGD are characterized by conformal symmetry. In TGD conformal symmetries have interpretation as a 3-D generalization of 2-D conformal symmetries acting at light-like boundaries of light-cone of M4 and also at light-like 3-surfaces of H=M4×CP2 (because of their metric 2-dimensionality). Also string theories apply the exponentiation trick so that the 10-D target space of superstring models could be a purely formal construct in which case the notion of spontaneous compactification, which has led to the landscape catastrophe, would not make sense physically. In TGD framework compactication is replaced by number theoretical compactification, which is not a dynamical process but a duality stating the equivalence of formulations of quantum TGD based on the possibility to interpret 8-D imbedding space either as M8 or H=M4×CP2 (M8-H duality). Could E8 emerge in TGD? E8 is interesting also from the TGD point of view. Of course, to say anything detailed about the finding in TGD framework would require hard work and in the following I can make just speculative general remarks. 1. The rank of E8 group is 8, which means that the Cartan algebra of E8 spanned by maximum number of commuting algebra elements has dimension 8. The eigenvalues of the Cartan algebra generators define the 8 quantum numbers of a physical state belonging to a representation of E8. In TGD framework the quantum numbers of particle correspond to Cartan algebra of the product of Poincare group color group SU(3) and electroweak group SU(2)×U(1). The dimension of the corresponding Cartan algebra is also 8 corresponding to 4 components of four-momentum, 2 color quantum numbers and 2 electroweak quantum numbers. In conformal field theories Lie groups are extended to Kac-Moody algebras. One can construct rank 8 Kac-Moody algebras by starting from 8 complex scalar fields which could be interpreted in terms of coordinates of 8-D Minkowski space. One would obtain both the complex form of E8 and the current algebra defined by symmetries of TGD (and of standard model). 2. Hyper-finite factors of type II1 (HFFs) are a particular class of von Neumann algebras, which is very interesting from the point of view of quantum theories and the mathematics of quantum groups relates to them very closely. The spinors of world of classical worlds (the 4-surfaces in 8-D imbedding space) define a canonical representative for HFF. The inclusions of HFFs known as Jones inclusions are in one-one correspondence with finite discrete subgroups of SO(3) and these in turn are in one-one correspondence with simply laced Lie groups containing also E8. E6,E7 and E8 correspond to tedrahedon, octahedron, and dodecahedron, which are 3-D polygons. For other subgroups the minimal orbit is 2-D polygon. The conjecture is roughly that these Lie groups appear as dynamical symmetries of quantum TGD so that TGD Universe is like a universal computer able to emulate any other computer. 
Now the emulation is emulation of any gauge theory and also string model type system. These symmetries would not be fundamental but achieved by engineering. 3. Also the hierarchy of Planck constants realized in terms of the book like structure of the 8-D imbedding space could involve the mathematics of Jones inclusions. The pages of the big book are singular coverings and factor spaces of both CP2 and what I call causal diamond (CD). CD is the intersection of future and past directed light-cones of 4-D Minkowski space M4. At least cyclic subgroups Zn are involved. Also Zn with reflection added and perhaps all finite discrete subgroups of the rotation group as symmetries permuting the copies of M8 or CP2 of the covering or permuting the identified points of the singular factor space. E8 gauge symmetry could emerge as a dynamical symmetry at corresponding pages. Even E8×E8 of heterotic strings models could appear. The two E8:s would be associated with M4 and CP2: maybe TGD Universe is able to emulate also E8×E8 and heterotic super string model. In the case of E8 the symmetries of dodecahedron would identify equivalent points of M4 for singular factor space option. These symmetries would be engineering symmetries requiring quantum criticality. The system should be very near to the back of the big book so that the 3-surface describing the physical system can leak to the other pages of the book. The E8 symmetry would appear only at the other side of criticality (E8 page) and would correspond to a non-standard value of Planck constant. The change of the value of Planck constant would stabilize the phase unstable for the standard value of Planck constant. The claimed condensed matter E8 symmetry is indeed assigned with quantum criticality rather than thermal criticality. Maybe the space-time sheets serving as correlates for the magnetic excitations of the system reside at the E8 page and correspond to dark matter in TGD framework. 4. The fundamental representation of E8 is identical with its adjoint representation and obtained by combining the rotation generators of SO(16) acting as rotations of points of 16-D Euclidian space E16 and the spinors of the same space to form a Lie-algebra in which E8 acts. The question whether TGD could allow to identify some natural 16-D space inspires some reckless numerology. The definition of singular covering and factor spaces means a choice of two points of M4 in case of CD so that the moduli space for CDs is M4×M4+, where M4+ is 8-D light-cone: p-adic length scale hypothesis is obtained if M4+ reduces to a union of hyperboloids for which proper time is quantized as powers of two. A possible interpretation is in terms of quantum cosmology with quantization of cosmological time. This procedure fixes quantization axes and means fixing of preferred time-like direction and spatial direction at either tip of CD (rest system and quantization axes of spin). In the case of CP2 the selection of quantization axes should fix of point of CP2 and a direction of geodesic line at that point. Therefore this part of the moduli space is CP2×E4. Altogether the moduli space labeling CD×CP2 with fixed quantization axes and thus sectors of the world of classical worlds is 16-D space M4×M4+ ×CP2×E4. Could the tangent space of this space provide a natural realization of the generators of the complex form of E8?
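A small arithmetic check of the SO(16) decomposition invoked above (standard Lie-algebra bookkeeping, independent of the speculative identifications): the adjoint of SO(16) contributes 120 generators and one chiral spinor contributes 128, together giving the 248 generators of E8.

```python
# E8 decomposes under SO(16) as adjoint + one chiral spinor: 120 + 128 = 248.
so16_adjoint = 16 * 15 // 2              # dimension of the antisymmetric (adjoint) representation of SO(16)
so16_chiral_spinor = 2 ** (16 // 2 - 1)  # dimension of one Weyl spinor of SO(16)
print(so16_adjoint, so16_chiral_spinor, so16_adjoint + so16_chiral_spinor)  # 120 128 248
```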
Supersymmetry (from Wikipedia, the free encyclopedia)
In particle physics, supersymmetry (SUSY) is a theory that proposes a relationship between two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin.[1][2] A type of spacetime symmetry, supersymmetry is a possible candidate for undiscovered particle physics and, if confirmed correct, is seen as an elegant solution to many current problems in particle physics, since it could resolve various areas where current theories are believed to be incomplete. A supersymmetric extension to the Standard Model would resolve major hierarchy problems within gauge theory by guaranteeing that quadratic divergences of all orders cancel out in perturbation theory. In supersymmetry, each particle from one group would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. These superpartners would be new and undiscovered particles. For example, there would be a particle called a "selectron" (superpartner of the electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. Since such superpartners would already have been found with present-day equipment if they shared the masses of the known particles, if supersymmetry exists it must be a spontaneously broken symmetry, allowing superpartners to differ in mass.[3][4][5] Spontaneously broken supersymmetry could solve many mysterious problems in particle physics, including the hierarchy problem. There is no evidence at this time to show whether or not supersymmetry is correct, or what other extensions to current models might be more accurate. In part this is because it is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational, and because it is not yet known where exactly to look nor what energies are required for a successful search. The main reasons supersymmetry is supported by physicists are that the current theories are known to be incomplete, their limitations are well established, and supersymmetry would be an attractive solution to some of the major concerns. Direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider (LHC). The first runs of the LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for supersymmetry.[6][7] These findings disappointed many physicists, who believed that supersymmetry (and other theories relying upon it) were by far the most promising theories for "new" physics, and had hoped for signs of unexpected results from these runs.[8][9] Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory.[10] However, it has also been argued that this "naturalness" crisis was premature, because various calculations were too optimistic about the limits of masses which would allow a supersymmetry-based solution.[11][12] The collider energies needed for such a discovery were likely too low, so superpartners could exist but be more massive than the LHC can detect.
There are numerous phenomenological motivations for supersymmetry close to the electroweak scale, as well as technical motivations for supersymmetry at any scale.
The hierarchy problem
Gauge coupling unification
The idea that the gauge symmetry groups unify at high energy is called grand unification theory. In the Standard Model, however, the weak, strong and electromagnetic couplings fail to unify at high energy. In a supersymmetry theory, the running of the gauge couplings is modified, and precise high-energy unification of the gauge couplings is achieved. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking.
Dark matter
TeV-scale supersymmetry (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations.[14][15]
Other technical motivations
Supersymmetry is also motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. It is also a necessary feature of the most popular candidate for a theory of everything, superstring theory, and a SUSY theory could explain the issue of cosmological inflation. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories like the Standard Model with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.[16]
A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time.[17][18][19][20] J. L. Gervais and B. Sakita (in 1971),[21] Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972),[22] independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais–Sakita rediscovery was based directly first arose in 1971[23] in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu. Finally, Julius Wess and Bruno Zumino (in 1974)[24] identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications.
The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics,[25][26] critical phenomena,[27] and quantum mechanics to statistical physics. It remains a vital part of many proposed theories of physics. The first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model, or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem.
Extension of possible symmetry groups
One reason that physicists explored supersymmetry is that it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that under certain assumptions the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through the introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975 the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.
The supersymmetry algebra
Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z_2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra. The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl spinor supercharges Q_\alpha and \bar{Q}_{\dot\beta}, it has the anti-commutation relation

\{ Q_\alpha, \bar{Q}_{\dot\beta} \} = 2 (\sigma^\mu)_{\alpha\dot\beta} P_\mu ,

and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression P_\mu are the generators of translation and \sigma^\mu are the Pauli matrices. There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group, and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.
The Supersymmetric Standard Model
Incorporating supersymmetry into the Standard Model requires doubling the number of particles, since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM), which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model. One of the main motivations for SUSY comes from the quadratically divergent contributions to the Higgs mass squared.
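Schematically, and following the standard textbook presentation (e.g. the primer cited as [3]) rather than anything specific to this article: a fermion loop with Yukawa coupling \lambda_f shifts the Higgs mass squared by \Delta m_H^2 \approx -\frac{|\lambda_f|^2}{8\pi^2}\Lambda^2, where \Lambda is the ultraviolet cutoff, while a scalar with quartic coupling \lambda_S contributes +\frac{\lambda_S}{16\pi^2}\Lambda^2. With unbroken supersymmetry every fermion is accompanied by two complex scalars with \lambda_S = |\lambda_f|^2, so the \Lambda^2 pieces cancel.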
The quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. This problem is known as the hierarchy problem. Supersymmetry reduces the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions. If supersymmetry is restored at the weak scale, then the Higgs mass is related to supersymmetry breaking, which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions. In many supersymmetric Standard Models there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry breaking can not be accomplished by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry permanently and must give superparticles TeV scale masses. There are many models that can do this, and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory; these break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking.
Gauge-coupling unification
One piece of evidence for supersymmetry existing is gauge coupling unification. The renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model.[28] With the addition of minimal SUSY, joint convergence of the coupling constants is projected at approximately 10^16 GeV.[28]
Supersymmetric quantum mechanics
Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics, as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right. SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory.
The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
Supersymmetry in condensed matter physics
SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker-Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' don't matter. The use of the supersymmetry method provides a mathematically rigorous alternative to the replica trick, but only in non-interacting systems, which attempts to address the so-called 'problem of the denominator' under disorder averaging. For more on the applications of supersymmetry in condensed matter physics see the book.[29]
Supersymmetry in optics
Integrated optics was recently found[30] to provide a fertile ground on which certain ramifications of SUSY can be explored in readily-accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion[31] and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a one-dimensional transformation optics.[32]
Supersymmetry in dynamical systems
All stochastic (partial) differential equations, the models for all types of continuous time dynamical systems, possess topological supersymmetry.[33][34] In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative, which is commutative with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory. The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity—infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality etc. The Goldstone theorem explains the associated emergence of the long-range dynamical behavior that manifests itself as 1/f noise, the butterfly effect, and the scale-free statistics of sudden (instantonic) processes, e.g., earthquakes, neuroavalanches, solar flares etc., known as Zipf's law and the Richter scale.
Supersymmetry in mathematics
SUSY is also sometimes studied mathematically for its intrinsic properties.
This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories[35] that interchanges particles and monopoles. The proof of the Atiyah-Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.
Supersymmetry in quantum gravity
Supersymmetry is part of superstring theory, a string theory of quantum gravity, although it could in theory be a component of other quantum gravity theories as well, such as loop quantum gravity. For superstring theory to be consistent, supersymmetry seems to be required at some level (although it may be a strongly broken symmetry). If experimental evidence confirms supersymmetry in the form of supersymmetric particles such as the neutralino that is often believed to be the lightest superpartner, some people believe this would be a major boost to superstring theory. Since supersymmetry is a required component of superstring theory, any discovered supersymmetry would be consistent with superstring theory. If the Large Hadron Collider and other major particle physics experiments fail to detect supersymmetric partners, many versions of superstring theory which had predicted certain low mass superpartners to existing particles may need to be significantly revised.
General supersymmetry
Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.
Extended supersymmetry
It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2, i.e. 1, 2, 4, 8. In four dimensions, a spinor has four degrees of freedom and thus the minimal number of supersymmetry generators is four in four dimensions, and having eight copies of supersymmetry means that there are 32 supersymmetry generators. The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, so the maximal number of supersymmetry generators considered is 32. This is due to the Weinberg-Witten theorem. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton. For four dimensions there are the following theories, with the corresponding multiplets[36] (CPT adds a copy whenever they are not invariant under such symmetry; a superscript on a helicity denotes its multiplicity within the multiplet):
• N = 1: Chiral multiplet (0, 1/2); Vector multiplet (1/2, 1); Gravitino multiplet (1, 3/2); Graviton multiplet (3/2, 2)
• N = 2: Hypermultiplet (-1/2, 0^2, 1/2); Vector multiplet (0, (1/2)^2, 1); Supergravity multiplet (1, (3/2)^2, 2)
• N = 4: Vector multiplet (-1, (-1/2)^4, 0^6, (1/2)^4, 1); Supergravity multiplet (0, (1/2)^4, 1^6, (3/2)^4, 2)
• N = 8: Supergravity multiplet (-2, (-3/2)^8, (-1)^28, (-1/2)^56, 0^70, (1/2)^56, 1^28, (3/2)^8, 2)
Supersymmetry in alternate numbers of dimensions
It is possible to have supersymmetry in dimensions other than four.
Because the properties of spinors change drastically between different dimensions, each dimension has its characteristic. In d dimensions, the size of spinors is approximately 2^{d/2} or 2^{(d-1)/2}. Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven.
Current status
Supersymmetric models are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Brookhaven; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC. Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were made at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits,[37] which in 2006 were extended by the D0 experiment at the Tevatron.[38][39] From 2003-2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetry models, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density. Prior to the beginning of the LHC, in 2009 fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV.[40] The first run of the LHC found no evidence for supersymmetry, and, as a result, surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges.[41] In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV.[42] The LHC result seemed problematic for the minimal supersymmetric model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists had considered to be "unnatural" (see naturalness (physics) and fine tuning).[43]
References
1. ^ Haber, Howie. "SUPERSYMMETRY, PART I (THEORY)" (PDF). Reviews, Tables and Plots. Particle Data Group (PDG). Retrieved 8 July 2015.
2. ^ "supersymmetry". Merriam-Webster. Retrieved October 2, 2017.
3. ^ Martin, Stephen P. (1997). "A Supersymmetry Primer". Advanced Series on Directions in High Energy Physics. 18: 1–98. arXiv:hep-ph/9709356. doi:10.1142/9789812839657_0001. ISBN 978-981-02-3553-6.
4. ^ Baer, Howard; Tata, Xerxes (2006). Weak scale supersymmetry: From superfields to scattering events.
5. ^ Dine, Michael (2007). Supersymmetry and String Theory: Beyond the Standard Model. p. 169.
6. ^ "ATLAS Supersymmetry Public Results". ATLAS, CERN.
Retrieved 2017-09-24.  7. ^ "CMS Supersymmetry Public Results". CMS, CERN. Retrieved 2017-09-24.  8. ^ Wolchover, Natalie (November 20, 2012). "Supersymmetry Fails Test, Forcing Physics to Seek New Ideas". Quanta Magazine.  9. ^ Wolchover, Natalie (August 9, 2016). [ht tps:// "What No New Particles Means for Physics"] Check |url= value (help). Quanta Magazine.  10. ^ M. Shifman: Reflections and Impressionistic Portrait at the Conference Frontiers Beyond the Standard Model, FTPI (pdf), FTPI, 31 October 2012 11. ^ Howard Baer; Vernon Barger; Dan Mickelson (September 2013). "How conventional measures overestimate electroweak fine-tuning in supersymmetric theory". Physical Review D. 88 (9): 095013. arXiv:1309.2984Freely accessible. Bibcode:2013PhRvD..88i5013B. doi:10.1103/PhysRevD.88.095013.  12. ^ Howard Baer; et al. (December 2012). "Radiative natural supersymmetry: Reconciling electroweak fine-tuning and the Higgs boson mass". Physical Review D. 87 (11): 115028. arXiv:1212.2655Freely accessible. Bibcode:2013PhRvD..87k5028B. doi:10.1103/PhysRevD.87.115028.  14. ^ Jonathan Feng: Supersymmetric Dark Matter (pdf), University of California, Irvine, 11 May 2007 15. ^ Torsten Bringmann: The WIMP "Miracle" (pdf) Archived 2013-03-01 at the Wayback Machine. University of Hamburg 16. ^ R. Haag, J. T. Łopuszański and M. Sohnius, "All Possible Generators Of Supersymmetries Of The S Matrix", Nucl. Phys. B 88 (1975) 257 17. ^ H. Miyazawa (1966). "Baryon Number Changing Currents". Prog. Theor. Phys. 36 (6): 1266–1276. Bibcode:1966PThPh..36.1266M. doi:10.1143/PTP.36.1266.  18. ^ H. Miyazawa (1968). "Spinor Currents and Symmetries of Baryons and Mesons". Phys. Rev. 170 (5): 1586–1590. Bibcode:1968PhRv..170.1586M. doi:10.1103/PhysRev.170.1586.  19. ^ Michio Kaku, Quantum Field Theory, ISBN 0-19-509158-2, pg 663. 20. ^ Peter Freund, Introduction to Supersymmetry, ISBN 0-521-35675-X, pages 26-27, 138. 21. ^ Gervais, J.-L.; Sakita, B. (1971). "Field theory interpretation of supergauges in dual models". Nuclear Physics B. 34 (2): 632–639. Bibcode:1971NuPhB..34..632G. doi:10.1016/0550-3213(71)90351-8.  22. ^ D. V. Volkov, V. P. Akulov, Pisma Zh.Eksp.Teor.Fiz. 16 (1972) 621; Phys.Lett. B46 (1973) 109; V.P. Akulov, D.V. Volkov, Teor.Mat.Fiz. 18 (1974) 39 23. ^ Ramond, P. (1971). "Dual Theory for Free Fermions". Physical Review D. 3 (10): 2415–2418. Bibcode:1971PhRvD...3.2415R. doi:10.1103/PhysRevD.3.2415.  24. ^ Wess, J.; Zumino, B. (1974). "Supergauge transformations in four dimensions". Nuclear Physics B. 70: 39–50. Bibcode:1974NuPhB..70...39W. doi:10.1016/0550-3213(74)90355-1.  25. ^ Hagen Kleinert, Discovery of Supersymmetry in Nuclei 26. ^ Iachello, F. (1980). "Dynamical Supersymmetries in Nuclei". Physical Review Letters. 44 (12): 772–775. Bibcode:1980PhRvL..44..772I. doi:10.1103/PhysRevLett.44.772.  27. ^ Friedan, D.; Qiu, Z.; Shenker, S. (1984). "Conformal Invariance, Unitarity, and Critical Exponents in Two Dimensions". Physical Review Letters. 52 (18): 1575–1578. Bibcode:1984PhRvL..52.1575F. doi:10.1103/PhysRevLett.52.1575.  28. ^ a b Gordon L. Kane, The Dawn of Physics Beyond the Standard Model, Scientific American, June 2003, page 60 and The frontiers of physics, special edition, Vol 15, #3, page 8 29. ^ Supersymmetry in Disorder and Chaos, Konstantin Efetov, Cambridge university press, 1997. 30. ^ Miri, M.-A.; Heinrich, M.; El-Ganainy, R.; Christodoulides, D. N. (2013). "Superymmetric optical structures". Physical Review Letters. APS. 110 (23): 233902. arXiv:1304.6646Freely accessible. 
Bibcode:2013PhRvL.110w3902M. doi:10.1103/PhysRevLett.110.233902. PMID 25167493. Retrieved April 22, 2014.  31. ^ Heinrich, M.; Miri, M.-A.; Stützer, S.; El-Ganainy, R.; Nolte, S.; Szameit, A.; Christodoulides, D. N. (2014). "Superymmetric mode converters". Nature Communications. NPG. 5: 3698. arXiv:1401.5734Freely accessible. Bibcode:2014NatCo...5E3698H. doi:10.1038/ncomms4698. PMID 24739256. Retrieved April 22, 2014.  32. ^ Miri, M.-A.; Heinrich, Matthias; Christodoulides, D. N. (2014). "SUSY-inspired one-dimensional transformation optics". Optica. OSA. 1 (2): 89. arXiv:1408.0832Freely accessible. doi:10.1364/OPTICA.1.000089. Retrieved August 6, 2014.  33. ^ Ovchinnikov, Igor (March 2016). "Introduction to Supersymmetric Theory of Stochastics". Entropy. 18 (4): 108. arXiv:1511.03393Freely accessible. Bibcode:2016Entrp..18..108O. doi:10.3390/e18040108.  34. ^ Ovchinnikov, Igor; Ensslin, Torsten (April 2016). "Kinematic dynamo, supersymmetry breaking, and chaos". Physical Review D. 93 (8): 085023. arXiv:1512.01651Freely accessible. Bibcode:2016PhRvD..93h5023O. doi:10.1103/PhysRevD.93.085023.  35. ^ Krasnitz, Michael (2003). Correlation functions in supersymmetric gauge theories from supergravity fluctuations (PDF). Princeton University Department of Physics: Princeton University Department of Physics. p. 91.  36. ^ Polchinski,J. String theory. Vol. 2: Superstring theory and beyond, Appendix B 37. ^ LEPSUSYWG, ALEPH, DELPHI, L3 and OPAL experiments, charginos, large m0 LEPSUSYWG/01-03.1 38. ^ The D0-Collaboration (2009). "Search for associated production of charginos and neutralinos in the trilepton final state using 2.3 fb−1 of data". Physics Letters B. 680: 34–43. arXiv:0901.0646Freely accessible. Bibcode:2009PhLB..680...34D. doi:10.1016/j.physletb.2009.08.011.  39. ^ The D0 Collaboration (2006). "Search for squarks and gluinos in events with jets and missing transverse energy using 2.1 fb-1 of pp¯ collision data at s=1.96 TeV". Physics Letters B. 660 (5): 449–457. arXiv:0712.3805Freely accessible. Bibcode:2008PhLB..660..449D. doi:10.1016/j.physletb.2008.01.042.  40. ^ O. Buchmueller; et al. (2009). "Likelihood Functions for Supersymmetric Observables in Frequentist Analyses of the CMSSM and NUHM1". The European Physical Journal C. 64 (3): 391–415. arXiv:0907.5568Freely accessible. Bibcode:2009EPJC...64..391B. doi:10.1140/epjc/s10052-009-1159-z.  41. ^ Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J. (11 August 2014). "What next for the CMSSM and the NUHM: improved prospects for superpartner and dark matter detection". Journal of High Energy Physics. 2014 (8). arXiv:1405.4289Freely accessible. Bibcode:2014JHEP...08..067R. doi:10.1007/JHEP08(2014)067.  42. ^ Marcela Carena and Howard E. Haber; Haber (1970). "Higgs Boson Theory and Phenomenology". Progress in Particle and Nuclear Physics. 50: 63–152. arXiv:hep-ph/0208209v3Freely accessible. Bibcode:2003PrPNP..50...63C. doi:10.1016/S0146-6410(02)00177-1.  43. ^ Draper, Patrick; et al. (December 2011). "Implications of a 125 GeV Higgs for the MSSM and Low-Scale SUSY Breaking". Physical Review D. 85 (9): 095007. arXiv:1112.3068Freely accessible. Bibcode:2012PhRvD..85i5007D. doi:10.1103/PhysRevD.85.095007.  Further reading[edit] Theoretical introductions, free and online[edit] On experiments[edit] External links[edit]
Wednesday, April 27, 2016
Teslaphoresis and TGD
I found an interesting popular article about a recently discovered phenomenon christened Teslaphoresis (see this). This phenomenon might involve new physics. Tesla studied systems critical against dielectric breakdown and observed strange electrical discharges occurring over very long length scales. Colleagues decided that these phenomena have mere entertainment value and are "understood" in Maxwellian electrodynamics. The amateurs have however continued the experiments of Tesla, and Teslaphoresis could be the final proof that something genuinely new is involved. In the TGD framework these long-ranged strange phenomena could correspond to quantum criticality and to large values of Planck constant implying quantum coherence in long length scales. The phases of ordinary matter with a non-standard value heff = n×h of Planck constant would correspond to dark matter in the TGD framework. I have earlier considered Tesla's findings from the TGD point of view and my personal opinion has been that Tesla might have been the first experimenter to detect dark matter in the TGD sense. Teslaphoresis gives further support for this proposal. The title of the popular article is "Reconfigured Tesla coil aligns, electrifies materials from a distance" and tells about the effects involved. The research group is led by Paul Cherukuri and there is also an abstract about the work in the journal ACS Nano. This article also contains an excellent illustration allowing one to understand both the Tesla coil and the magnetic and electric fields involved. The abstract of the paper provides a summary of the results.
This paper introduces Teslaphoresis, the directed motion and self-assembly of matter by a Tesla coil, and studies this electrokinetic phenomenon using single-walled carbon nanotubes (CNTs). Conventional directed self-assembly of matter using electric fields has been restricted to small scale structures, but with Teslaphoresis, we exceed this limitation by using the Tesla coil’s antenna to create a gradient high-voltage force field that projects into free space. CNTs placed within the Teslaphoretic (TEP) field polarize and self-assemble into wires that span from the nanoscale to the macroscale, the longest thus far being 15 cm. We show that the TEP field not only directs the self-assembly of long nanotube wires at remote distances (≥ 30 cm) but can also wirelessly power nanotube-based LED circuits. Furthermore, individualized CNTs self-organize to form long parallel arrays with high fidelity alignment to the TEP field. Thus, Teslaphoresis is effective for directed self-assembly from the bottom-up to the macroscale.
Concisely: what is found is that single-walled carbon nanotubes (CNTs) polarize and self-assemble along the electric fields created by the capacitor over much longer length scales than expected. Biological applications (involving linear molecules like microtubules) come to mind. CNTs also tend to move towards the capacitance of the secondary coil of the Tesla coil (TC). It is interesting to understand the TGD counterparts of the Maxwellian em fields involved with Tesla coils, and it is found that the many-sheetedness of space-time is necessary to understand the standing waves also involved. The fact that massless extremals (MEs) can carry light-like currents is essential for modelling currents classically using many-sheeted space-time.
The presence of magnetic monopole flux tubes distinguishing TGD from Maxwellian theory is suggestive and could explain why Teslaphoresis occurs over such long length scales and why it induces self-organization phenomena for CNTs. The situation can be seen as a special case of a more general situation encountered in the TGD based model of living matter. For background see the chapter About Concrete Realization of Remote Metabolism or the article Teslaphoresis and TGD. For a summary of earlier postings see Latest progress in TGD.
Tuesday, April 26, 2016
Indications for high Tc superconductivity at 373 K with heff/h=2
Some time ago I learned about a claim of Ivan Kostadinov about superconductivity at a temperature of 373 K (100 C). There are also claims by E. Joe Eck about superconductivity: the latest at 400 K. I am not enough of an experimentalist to be able to decide whether to take the claims seriously or not. The article of Kostadinov provides detailed support for the claim. Evidence for diamagnetism (induced magnetization tends to reduce the external magnetic field inside a superconductor) is presented: at 242 K a transition reducing the magnitude of the negative susceptibility but keeping it negative takes place. Evidence for a gap energy of 15 meV was found at 300 K: this energy is about the same as the thermal energy T/2 ≈ 1.5×10^-2 eV at room temperature. Tape tests passing 125 A through the superconducting tape supported very low resistance (a copper tape started burning after about 5 seconds). I-V curves at 300 K are shown to exhibit Shapiro steps with radiation frequency in the range [5 GHz, 21 THz]. Already Josephson discovered what - perhaps not so surprisingly - is known as the Josephson effect. As one drives a superconductor with an alternating current, the voltage remains constant at certain values. The difference of voltage values between subsequent jumps is given by the Shapiro step ΔV = hf/Ze. The interpretation is that the voltage suffers a kind of phase locking at these frequencies and the alternating current becomes a Josephson current with Josephson frequency f = ZeV/h, which is an integer multiple of the frequency of the current. This actually gives a very nice test for the heff = n×h hypothesis: the Shapiro step ΔV should be scaled up by heff/h = n. The obvious question is whether this occurs in the recent case or whether n=1 explains the findings. The data represented by Figs. 12, 13, 14 of the article suggest n=2 for Z=2. The alternative explanation would be that the step is for some reason ΔV = 2hf/Ze corresponding to a second harmonic, or that the charge of the charge carrier is Z=1 (bosonic ion). I worried about a possible error in my calculation for several hours last night but failed to find any mistake.
1. Fig. 12 shows the I-V curve at room temperature T = 300 K. The Shapiro step is now 45 mV. This would correspond to frequency f = ZeΔV/h = 11.6 THz. The figure text tells that the frequency is fR = 21.762 THz, giving fR/f ≈ 1.87. This would suggest heff/h = n ≈ fR/f ≈ 2.
2. Fig. 13 shows another I-V curve at 300 K. Now the Shapiro step is 4.0 mV and corresponds to a frequency of 1.24 THz. This would give fR/f ≈ 1.95, giving heff/h = 2.
3. Fig. 14 shows an I-V curve with a single Shapiro step equal to about 0.12 mV. The frequency should be 2.97 GHz whereas the reported frequency is 5.803 GHz. This gives fR/f ≈ 1.95, giving n=2.
Irrespective of the fate of the claims of Kostadinov and Eck, the Josephson effect could allow an elegant manner to demonstrate whether the hierarchy of Planck constants is realized in Nature.
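A minimal numerical sketch of this test (assuming only the relation ΔV = n·h·f/(Z·e) used above, with n = heff/h; the function name and the 10 GHz example value are illustrative and not taken from Kostadinov's data):

```python
# Shapiro step for a junction driven at frequency f_hz, carrier charge Z*e,
# under the hypothesis heff = n*h discussed in the text.
H_OVER_E = 4.135667e-15  # h/e in volt*seconds

def shapiro_step(f_hz, Z=2, n=1):
    """Voltage spacing of Shapiro steps: Delta_V = n*h*f/(Z*e), in volts."""
    return n * H_OVER_E * f_hz / Z

# For a 10 GHz drive and Cooper pairs (Z=2) the standard step is about 20.7 microvolts;
# heff/h = 2 would double it, which is the signature proposed above.
print(shapiro_step(10e9, Z=2, n=1))  # ~2.07e-05 V
print(shapiro_step(10e9, Z=2, n=2))  # ~4.14e-05 V
```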
For background see the chapter Quantum Model for Bio-Superconductivity: II. For a summary of earlier postings see Latest progress in TGD. Monday, April 25, 2016 Correlated Polygons in Standard Cosmology and in TGD Peter Woit had an interesting This Week's Hype . The inspiration came from a popular article in Quanta Magazine telling about the proposal of Maldacena and Nima Arkani-Hamed that the temperature fluctuations of cosmic microwave background (CMB) could exhibit deviation from Gaussianity in the sense that there would be measurable maxima of n-point correlations in CMB spectrum as function of spherical angles. These effects would relate to the large scale structure of CMB. Lubos Motl wrote about the article in different and rather aggressive tone. The article in Quanta Magazine does not go into technical details but the original article of Maldacena and Arkani-Hamed contains detailed calculations for various n-point functions of inflaton field and other fields in turn determining the correlation functions for CMB temperature. The article is technically very elegant but the assumptions behind the calculations are questionable. In TGD Universe they would be simply wrong and some habitants of TGD Universe could see the approach as a demonstration for how misleading the refined mathematics can be if the assumptions behind it are wrong. It must be emphasized that already now it is known and stressed also in the articl that the deviations of the CMB from Gaussianity are below recent measurement resolution and the testing of the proposed non-Gaussianities requires new experimental technology such as 21 cm tomography mapping the redshift distribution of 21 cm hydrogen line to deduce information about fine details of CMB now n-point correlations. Inflaton vacuum energy is in TGD framework replaced by Kähler magnetic energy and the model of Maldacena and Arkani-Hamed does not apply. The elegant work of Maldacena and Arkani-Hamed however inspired a TGD based consideration of the situation but with very different motivations. In TGD inflaton fields do not play any role since inflaton vacuum energy is replaced with the energy of magnetic flux tubes. The polygons also appear in totally different manner and are associated with symplectic invariants identified as Kähler fluxes, and might relate closely to quantum physical correlates of arithmetic cognition. These considerations lead to a proposal that integers (3,4,5) define what one might called additive primes for integers n≥ 3 allowing geometric representation as non-degenerate polygons - prime polygons. On should dig the enormous mathematical literature to find whether mathematicians have proposed this notion - probably so. Partitions would correspond to splicings of polygons to smaller polygons. These splicings could be dynamical quantum processes behind arithmetic conscious processes involving addition. I have already earlier considered a possible counterpart for conscious prime factorization in the adelic framework. This will not be discussed in this section since this topic is definitely too far from primordial cosmology. The purpose of this article is only to give an example how a good work in theoretical physics - even when it need not be relevant for physics - can stimulate new ideas in completely different context. For details see the chapter More About TGD Inspired Cosmology or the article Correlated Triangles and Polygons in Standard Cosmology and in TGD . For a summary of earlier postings see Latest progress in TGD. 
Number theoretical feats and TGD inspired theory of consciousness

The number theoretical feats of some mathematicians like Ramanujan remain a mystery for those believing that the brain is a classical computer. Also the ability of idiot savants - lacking even an idea of what a prime is - to factorize integers into primes challenges the idea that an algorithm is involved. In this article I discuss ideas about how various arithmetical feats, such as partitioning an integer into a sum of integers or into a product of prime factors, might take place. The ideas are inspired by the number theoretic vision about TGD suggesting that basic arithmetic might be realized as naturally occurring processes at the quantum level and the outcomes might be "sensorily perceived". One can also ask whether zero energy ontology (ZEO) could allow quantum computations to be performed in polynomial instead of exponential time.

The Indian mathematician Srinivasa Ramanujan is perhaps the best-known example of a mathematician with miraculous gifts. He gave immediate answers to difficult mathematical questions - ordinary mortals had to do hard computational work to check that the answer was right. Many of the extremely intricate mathematical formulas of Ramanujan have been proved much later by using advanced number theory. Ramanujan told that he got the answers from his personal Goddess. A possible TGD based explanation of this feat relies on the idea that in zero energy ontology (ZEO) quantum computation like activity could consist of steps, each consisting of a quantum computation and its time reversal, with the long-lasting part of each step performed in the reverse time direction at the opposite boundary of the causal diamond, so that the net time used at the second boundary would be short.

The adelic picture about state function reduction in ZEO suggests that it might be possible to have a direct sensory experience about the prime factorization of integers (see this). What about partitions of integers into sums of primes? Years ago I proposed that symplectic QFT is an essential part of TGD. The basic observation was that one can assign to polygons of a partonic 2-surface - say geodesic triangles - Kähler magnetic fluxes defining symplectic invariants identifiable as zero modes. This assignment makes sense also for string world sheets and gives rise to what is usually called an Abelian Wilson line. I could not specify at that time how to select these polygons. A very natural manner to fix the vertices of the polygon (or polygons) is to assume that they correspond to the ends of fermion lines which appear as boundaries of string world sheets. The polygons would be fixed rather uniquely by requiring that fermions reside at their vertices.

The number 1 is the only prime with respect to addition, so the analog of prime factorization for sums is not of much use. Polygons with n = 3, 4, 5 vertices are special in that one cannot decompose them into non-degenerate polygons. Non-degenerate polygons also represent integers n > 2. This inspires the idea of the numbers 3, 4, 5 as "additive primes" for integers n > 2 representable as non-degenerate polygons (see the small illustration below). These polygons could be associated with many-fermion states with negentropic entanglement (NE) - a notion related to cognition and conscious information and something totally new from the standard physics point of view. This also inspires a conjecture about a deep connection with arithmetic consciousness: polygons would define conscious representations for integers n > 2.
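The "additive prime" claim can be checked with a few lines of code: 3, 4 and 5 are the only integers above 2 with no partition into parts that are themselves at least 3, while every larger integer has one. The sketch below is a loose illustration of my own of the splicing idea (it ignores the geometric subtlety that cutting a polygon along a chord makes the two pieces share two vertices).

```python
# Partitions of n into parts >= 3, i.e. parts representable as non-degenerate polygons.
# 3, 4 and 5 are "additive primes": they admit no such partition with more than one part,
# while every n >= 6 does.

def polygon_partitions(n, smallest=3):
    """All multisets of integers >= smallest summing to n, as non-decreasing tuples."""
    if n == 0:
        return [()]
    result = []
    for part in range(smallest, n + 1):
        for rest in polygon_partitions(n - part, part):
            result.append((part,) + rest)
    return result

if __name__ == "__main__":
    for n in range(3, 13):
        parts = polygon_partitions(n)
        nontrivial = [p for p in parts if len(p) > 1]
        tag = "additive prime" if not nontrivial else ""
        print(n, parts, tag)
    # Only n = 3, 4, 5 lack a decomposition into smaller non-degenerate polygons;
    # for every n >= 6 partitions appear, and parts 3, 4, 5 already suffice
    # (6 = 3+3, 7 = 3+4, 8 = 3+5, 9 = 3+3+3, ...).
```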
The splicings of polygons to smaller ones could be dynamical quantum processes behind arithmetic conscious processes involving addition. For details see the chapter Conscious Information and Intelligence or the article Number Theoretical Feats and TGD Inspired Theory of Consciousness. For a summary of earlier postings see Latest progress in TGD. Monday, April 18, 2016 "Final" solution to the qualia problem The TGD inspired theory of (qualia has evolved gradually to its recent form. 1. The original vision was that qualia and and other aspects of consciousness experience are determined by the change of quantum state in the reduction: the increments of quantum numbers would determine qualia. I had not yet realized that repeated state function reduction (Zeno effect) realized in ZEO is central for consciousness. The objection was that qualia change randomly from reduction to reduction. 2. Later I ended up with the vision that the rates for the changes of quantum numbers would determine qualia: this idea was realized in terms of sensory capacitor model in which qualia would correspond to kind of generalized di-electric breakdown feeding to subsystem responsible for quale quantum numbers characterizing the quale. The Occamistic objection is that the model brings in an additional element not present in quantum measurement theory. 3. The view that emerged while writing the critics of IIT was that qualia correspond to the quantum numbers measured in the state function reduction. That in ZEO the qualia remain the same for the entire sequence of repeated state function reductions is not a problem since qualia are associated with sub-self (sub-CD), which can have lifetime of say about .1 seconds! Only the generalization of standard quantum measurement theory is needed to reduce the qualia to fundamental physics. This for instance supports the conjecture that visual colors correspond to QCD color quantum numbers. This makes sense in TGD framework predicting a scaled variants of QCD type physics even in cellular length scales. This view implies that the model of sensory receptor based on the generalization of di-electric breakdown is wrong as such since the rate for the transfer of the quantum numbers would not define the quale. A possible modification is that the analog of di-electric breakdown generates Bose-Einstein condensate and that the the quantum numbers for the BE condensate give rise to qualia assignable to sub-self. For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness. For a summary of earlier postings see Latest progress in TGD. NMP and adelic physics In given p-adic sector the entanglement entropy (EE) is defined by replacing the logarithms of probabilities in Shannon formula by the logarithms of their p-adic norms. The resulting entropy satisfies the same axioms as ordinary entropy but makes sense only for probabilities, which must be rational valued or in an algebraic extension of rationals. The algebraic extensions corresponds to the evolutionary level of system and the algebraic complexity of the extension serves as a measure for the evolutionary level. p-Adically also extensions determined by roots of e can be considered. What is so remarkable is that the number theoretic entropy can be negative. A simple example allows to get an idea about what is involved. 
If the entanglement probabilities are rational numbers Pi = Mi/N, ∑i Mi = N, then the primes appearing as factors of N give a negative contribution to the number theoretic entanglement entropy and thus correspond to information. The factors of Mi contribute with the opposite sign. For maximal entanglement with Pi = 1/N the EE is negative: summed over all primes it equals -log(N) (a small numerical illustration is given below). The interpretation is that the entangled state represents quantally a concept or a rule as a superposition of its instances defined by the state pairs in the superposition. An identity matrix means that one can choose the state basis in an arbitrary manner, and the interpretation could be in terms of an "enlightened" state of consciousness characterized by "absence of distinctions". In the general case the basis is unique.

Metabolism is a central concept in biology and neuroscience. Usually metabolism is understood as a transfer of ordered energy and various chemical metabolites to the system. In TGD metabolism could be basically just a transfer of NE from nutrients to the organism. Living systems would be fighting for NE to stay alive (NMP is merciless!) and stealing of NE would be the fundamental crime.

TGD has been plagued by a longstanding interpretational problem: can one apply the notion of number theoretic entropy in the real context or not? If this is possible at all, under what conditions is it the case? How does one know that the entanglement probabilities are not transcendental, as they would be in the generic case? There is also a second problem: p-adic Hilbert space is not a well-defined notion, since the sum of p-adic probabilities defined as moduli squared of the coefficients of a superposition of orthonormal states can vanish, and one obtains zero norm states.

These problems disappear if the reduction occurs in the intersection of reality and p-adicities, since there the Hilbert spaces have some algebraic number field as coefficient field. By SH the 2-D states provide all the information needed to construct quantum physics - in particular, quantum measurement theory.

1. The Hilbert spaces defining the state spaces have as their coefficient field always some algebraic extension of rationals, so that number theoretic entropies make sense for all primes. p-Adic numbers as coefficients cannot be used and reals are not allowed. Since the same Hilbert space is shared by real and p-adic sectors, a given state function reduction in the intersection has real and p-adic space-time shadows.

2. State function reductions at the 2-surfaces at the ends of the causal diamond (CD) take place in the intersection of realities and p-adicities if the parameters characterizing these surfaces are in the algebraic extension considered. It is however not absolutely necessary to assume that the coordinates of WCW belong to the algebraic extension, although this looks very natural.

3. NMP applies to the total EE. It can quite well happen that NMP for the sum of real and p-adic entanglement entropies does not allow the ordinary state function reduction to take place, since the p-adic negative entropies for some primes would become zero and net negentropy would be lost. There is competition between real and p-adic sectors, and the p-adic sectors can win! Mind has causal power: it can stabilize quantum states against state function reduction and tame the randomness that quantum physics would exhibit in the absence of cognition! Can one interpret this causal power of cognition in terms of intentionality? If so, p-adic physics would also be the physics of intentionality, as originally assumed.
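The sign structure described above is easy to verify numerically. The following is a minimal sketch, assuming the definition in which the Shannon logarithms log(Pi) are replaced by log(|Pi|p) for rational probabilities Pi = Mi/N; the helper names are mine, not standard terminology.

```python
# Number theoretic entanglement entropy for rational probabilities P_i = M_i / N:
# S_p = -sum_i P_i * log(|P_i|_p), with |x|_p the p-adic norm.
# Primes dividing N push S_p negative (negentropy); primes dividing the M_i push it up.

from fractions import Fraction
from math import log

def p_valuation(n, p):
    """Exponent of prime p in the positive integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def prime_factors(n):
    factors, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            factors.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def padic_entropy(probs, p):
    """S_p = -sum_i P_i * log|P_i|_p for rational probabilities P_i."""
    assert sum(probs) == 1
    S = 0.0
    for P in probs:
        v = p_valuation(P.numerator, p) - p_valuation(P.denominator, p)  # v_p(P)
        S += float(P) * v * log(p)   # equals -P*log|P|_p, since log|P|_p = -v*log(p)
    return S

if __name__ == "__main__":
    N = 12
    probs = [Fraction(1, N)] * N            # maximally entangled state
    for p in sorted(prime_factors(N)):
        print(p, padic_entropy(probs, p))   # negative: -2*log(2) for p = 2, -log(3) for p = 3
    total = sum(padic_entropy(probs, p) for p in prime_factors(N))
    print(total, -log(N))                   # the two agree: the total p-adic negentropy is log(N)
```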
A fascinating question is whether the p-adic view about cognition could allow to understand the mysterious looking ability of idiot savants (not only of them but also of some greatest mathematicians) to decompose large integers to prime factors. One possible mechanism is that the integer N represented concretely is mapped to a maximally entangled state with entanglement probabilities Pi=1/N, which means NE for the prime factors of Pi or N. The factorization would be experienced directly. One can also ask, whether the other mathematical feats performed by idiot savants could be understood in terms of their ability to directly experience - "see" - the prime composition (adelic decomposition) of integer or even rational. This could for instance allow to "see" if integer is - say 3rd - power of some smaller integer: all prime exponents in it would be multiples of 3. If the person is able to generate an NE for which probabilities Pi=Mi/N are apart from normalization equal to given integers Mi, ∑ Mi=N, then they could be able to "see" the prime compositions for Mi and N. For instance, they could "see" whether both Mi and N are 3rd powers of some integer and just by going through trials find the integers satisfying this condition. For a summary of earlier postings see Latest progress in TGD. Thursday, April 14, 2016 TGD Inspired Comments about Integrated Information Theory of Consciousness I received form Lian Sidoroff a link to a very interesting article by John Horgan in Scientific American with title "Can Integrated Information Theory Explain Consciousness?". Originally IIT is a theoretical construct of neuroscientst Giulio Tononi (just Tononi in the sequel). Christof Koch is one of the coworkers of Tononi. IIT can be regarded as heavily neuroscience based non-quantum approach to consciousness and the goal is to identify the axioms about consciousness, which should hold true also in physics based theories. The article of Horgan was excellent and touched the essentials and it was relatively easy to grasp what is common with my own approach to consciousness and comment also what I see as weaknesses of IIT approach. To my opinion, the basic weakness is the lack of formulation in terms of fundamental physics. As such quantum physics based formulation is certainly not enough since the recent quantum physics is plagued by paradoxes, which are due the lack of theory of consciousness needed to understand what the notion of observer means. The question is not only about what fundamental physics can give to consciousness but also about what consciousness can give to fundamental physics. The article Consciousness: here, there and everywhere of Tononi and Koch gives a more detailed summary about IIT. The article From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory gives a more techanical description of IIT. Also the article of Scott Aaronson was very helpful in providing computer scientific view about IIT and representing also mathematical objections. Tononi and Koch emphasize that IIT is a work in progress. This applies also to TGD and TGD inspired theory of consciousness. Personally I take writing of TGD inspired commentary about IIT as a highly interesting interaction, which might help to learn new ideas and spot the weaknesses and imperfections in the basic definitions of TGD inspired theory of consciousness. If TGD survives from this interaction as such, the writing of these commentaries have been waste of time. 
The key questions relate to the notion of information more or less identified as consciousness. 1. In IIT the information is identified essentially as a reduction of entropy as hypothetical conscious entity learns what the state of the system is. This definition of information used in the definition of conscious entity is circular. It involves also probabilistic element bringing thus either the notion of ensemble or frequency interpretation. 2. In TGD the notion of information relies on number theoretical entanglement entropy (EE) measuring the amount of information associated with entanglement. It makes sense for algebraic entanglement probabilities. In fact all probabilities must be assumed to belong to algebraic extension of rationals if one adopts p-adic view about cognition and extends physics to adelic physics involving real and various p-adic number fields. Circularity is avoided but the basic problem has been whether one can apply the number theoretic definition of entanglement entropy only in p-adic sectors of the adelic Universe or whether it applies under some conditions also in the real sector. Writing this commentary led to a solution of this problem: the state function reduction in the intersection of realities and p-adicities which corresponds to algebraic extension of rationals induces the reductions at real and p-adic sectors. Negentropy Maximization Principle (NMP) maximizes the sum of real and various p-adic negentropy gains. The outcome is highly non-trivial prediction that cognition can stabilize also the real entanglement and has therefore causal power. One can say that cognition tames the randomness of the ordinary state function reduction so that Einstein was to some degree right when he said that God does not play dice. 3. IIT identifies qualia with manner, which I find difficult to take seriously. The criticism however led also to criticism of TGD identification of qualia and much simpler identification involving only the basic assumptions of ZEO based quantum measurement theory emerged. Occam's razor does not leave many options in this kind of situation. IIT predicts panpsychism in a restricted sense as does also TGD. The identification of maximally integrated partition of elementary system endowed with mechanism, which could correspond to computer program, to two parts as conscious experience is rather near to epiphenomenalism since it means that consciousness is property of physical system. In TGD framework consciousness has independent causal and ontological status. Conscious existence corresponds to quantum jumps between physical states re-creating physical realities being therefore outside the existences defined by classical and quantum physics (in TGD classical physics is exact part of quantum physics). The comparison of IIT with TGD was very useful. I glue below the abstract of the article comparing IIT with TGD inspired theory of consciousness. Integrated Information Theory (IIT) is a theory of consciousness originally proposed by Giulio Tononi. The basic goal of IIT is to abstract from neuroscience axioms about consciousness hoped to provide constraints on physical models. IIT relies strongly on information theory. The basic problem is that the very definition of information is not possible without introducing conscious observer so that circularity cannot be avoided. 
IIT identifies a collection of few basic concepts and axioms such as the notions of mechanism (computer program is one analog for mechanism), information, integration and maximally integrated information (maximal interdependence of parts of the system), and exclusion. Also the composition of mechanisms as kind of engineering principle of consciousness is assumed and leads to the notion of conceptual structure, which should allow to understand not only cognition but entire conscious experience. A measure for integrated information (called Φ) assignable to any partition of system to two parts is introduced in terms of relative entropies. Consciousness is identified with a maximally integrated decomposition of the system to two parts (Φ is maximum). The existence of this preferred decomposition of the system to two parts besides computer and program running in it distinguishes IIT from the computational approach to consciousness. Personally I am however afraid that bringing in physics could bring in physicalism and reduce consciousness to an epiphenomenon. Qualia are assigned to the links of network. IIT can be criticized for this assignment as also for the fact that it does not say much about free will nor about the notion of time. Also the principle fixing the dynamics of consciousness is missing unless one interprets mechanisms as such. In this article IIT is compared to the TGD vision relying on physics and on general vision about consciousness strongly guided by the new physics predicted by TGD. At classical level this new physics involves a new view about space-time and fields (in particular the notion of magnetic body central in TGD inspired quantum biology and quantum neuroscience). At quantum level it involves Zero Energy Ontology (ZEO) and the notion of causal diamond (CD) defining 4-D perceptive field of self; p-adic physics as physics of cognition and imagination and the fusion of real and various p-adic physics to adelic physics; strong form of holography (SH) implying that 2-D string world sheets and partonic surfaces serve as "space-time genes"; and the hierarchy of Planck constants making possible macroscopic quantum coherence. Number theoretic entanglement entropy (EE) makes sense as number theoretic variant of Shannon entropy in the p-adic sectors of the adelic Universe. Number theoretic EE can be negative and corresponds in this case to genuine information: one has negentropic entanglement (NE). TGD inspired theory of consciousness reduces to quantum measurement theory in ZEO. Negentropy Maximization Principle (NMP) serves as the variational principle of consciousness and implies that NE can can only increase - this implies evolution. By SH real and p-adic 4-D systems are algebraic continuations of 2-D systems ("space-time genes") characterized by algebraic extensions of rationals labelling evolutionary levels with increasing algebraic complexity. Real and p-adic sectors have common Hilbert space with coefficients in algebraic extension of rationals so that the state function reduction at this level can be said to induce real and p-adic 4-D reductions as its shadows. NE in the p-adic sectors stabilizes the entanglement also in real sector (the sum of real (ordinary) and various p-adic negentropies tends to increase) - the randomness of the ordinary state function reduction is tamed by cognition and mind can be said to rule over matter. Quale corresponds in IIT to a link of a network like structure. 
In TGD a quale corresponds to the eigenvalues of observables measured repeatedly as long as the corresponding sub-self (mental image, quale) remains conscious. In ZEO self can be seen as a generalized Zeno effect. What happens in the death of a conscious entity (self) can be understood, and it accompanies the re-incarnation of a time reversed self, in turn making possible re-incarnation also in the more conventional sense of the word. The death of a mental image (sub-self) can also be interpreted as a motor action involving a signal to the geometric past: this is in accordance with Libet's findings.

There is much in common between IIT and TGD at the general structural level but also profound differences. Also TGD predicts restricted pan-psychism. NE is the TGD counterpart of integrated information. The combinatorial structure of NE gives rise to quantal complexity. Mechanisms correspond to 4-D self-organization patterns with self-organization interpreted in the 4-D sense in ZEO. The decomposition of the system into two parts such that the decomposition can give rise to a maximal negentropy gain in state function reduction is also involved but yields two independent selves. Engineering of conscious systems from simpler basic building blocks is predicted. Indeed, TGD predicts an infinite self hierarchy with sub-selves identifiable as mental images. The exclusion postulate is not needed in the TGD framework. Also network like structures emerge naturally as p-adic systems for which all decompositions are negentropically entangled, inducing in turn corresponding real systems.

For a summary of earlier postings see Latest progress in TGD.

Sunday, April 10, 2016

How Ramanujan did it?

Lubos Motl wrote recently a blog posting about the P≠NP conjecture of the theory of computation based on Turing's work. This unproven conjecture relies on a classical model of computation developed by formulating mathematically what the women doing the hard computational work in offices at the time of Turing did. Turing's model is an extremely beautiful mathematical abstraction of something very everyday, but it does not involve fundamental physics in any manner, so it must be taken with caution. The basic notions include those of algorithm and recursive function, and the mathematics used in the model is the mathematics of integers. Nothing is assumed about what conscious computation is: it is somewhat ironic that this model has been taken by strong AI people as a model of consciousness!

1. A canonical model for classical computation is the Turing machine, which takes bit sequences as inputs and transforms them to outputs, changing its internal state at each step. A more concrete model is a network of gates representing basic operations on the incoming bits: from these basic functions one constructs all recursive functions (a toy illustration of the gate picture is given after this list). The computer and program actualize the algorithm, and the computation eventually halts - at least one can hope that it does so. Assuming that the elementary operations require some minimum time, one can estimate the number of steps required and obtain an estimate for the dependence of the computation time on the size of the computation.

2. If a computation, whose size is characterized by the number N of relevant bits, can be carried out in time proportional to some power of N, one says that the computation is in class P. The class NP consists of problems whose proposed solutions can be checked in polynomial time; the P≠NP conjecture states that some of these problems cannot themselves be solved in polynomial time, so that the computation time would grow with N faster than any power of N, say exponentially.
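To make the gate-network picture of item 1 concrete, here is a toy sketch of my own (not anything from Lubos' posting or from Turing's work): a 1-bit full adder wired entirely from NAND gates, with a counter for gate evaluations as a crude "number of steps".

```python
# Toy illustration of the gate-network model of classical computation:
# build a 1-bit full adder out of NAND gates alone and count gate evaluations.

calls = 0

def nand(a, b):
    global calls
    calls += 1
    return 1 - (a & b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    """Return (sum, carry_out), computed entirely from NAND gates."""
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    c1 = nand(nand(a, b), nand(a, b))             # a AND b via NAND
    c2 = nand(nand(s1, carry_in), nand(s1, carry_in))
    carry_out = nand(nand(c1, c1), nand(c2, c2))  # OR via NAND
    return total, carry_out

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                calls = 0
                print(a, b, c, full_adder(a, b, c), "gates:", calls)
    # Adding two N-bit numbers chains N such adders, so the gate count grows
    # linearly in N - a computation squarely in class P.
```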
Donald Knuth, whose name is familiar for everyone using Latex to produce mathematical text, believes on P=NP in the framework of classical computation. Lubos in turn thinks that the Turing model is probably too primitive and that quantum physics based model is needed and this might allow P=NP. What about quantum computation as we understand it in the recent quantum physics: can it achieve P=NP? 1. Quantum computation is often compared to a superposition of classical computations and this might encourage to think that this could make it much more effective but this does not seem to be the case. Note however that the amount of information represents by N qubits is however exponentially larger than that represented by N classical bits since entanglement is possible. The prevailing wisdom seems to be that in some situations quantum computation can be faster than the classical one but that if P=NP holds true for classical computation, it holds true also for quantum computations. Presumably because the model of quantum computation begins from the classical model and only (quantum computer scientists must experience this statement as an insult - apologies!) replaces bits with qubits. 2. In quantum computer one replaces bits with entangled qubits and gates with quantum gates and computation corresponds to a unitary time evolution with respect to a discretized time parameter constructed in terms of fundamental simple building bricks. So called tensor networks realize the idea of local unitary in a nice manner and has been proposed to defined error correcting quantum codes. State function reduction halts the computation. The outcome is non-deterministic but one can perform large number of computations and deduce from the distribution of outcomes the results of computation. What about conscious computations? Or more generally, conscious information processing. Could it proceed faster than computation in these sense of Turing? To answer this question one must first try to understand what conscious information processing might be. TGD inspired theory of consciousnesss provides one a possible answer to the question involving not only quantum physics but also new quantum physics. 1. In TGD framework Zero energy ontology (ZEO) replaces ordinary positive energy ontology and forces to generalize the theory of quantum measurement. This brings in several new elements. In particular, state function reductions can occur at both boundaries of causal diamond (CD), which is intersection of future and past direct light-cones and defines a geometric correlate for self. Selves for a fractal hierarchy - CDs within CDs and maybe also overlapping. Negentropy Maximization Principle (NMP) is the basic variational principle of consciousness and tells that the state function reductions generate maximum amount of conscious information. The notion of negentropic entanglement (NE) involving p-adic physics as physics of cognition and hierarchy of Planck constants assigned with dark matter are also central elements. 2. NMP allows a sequence of state function reductions to occur at given boundary of diamond-like CD - call it passive boundary. The state function reduction sequence leaving everything unchanged at the passive boundary of CD defines self as a generalized Zeno effect. Each step shifts the opposite - active - boundary of CD "upwards" and increases its distance from the passive boundary. Also the states at it change and one has the counterpart of unitary time evolution. 
The shifting of the active boundary gives rise to the experienced time flow and sensory input generating cognitive mental images - the "Maya" aspect of conscious experienced. Passive boundary corresponds to permanent unchanging "Self". 3. Eventually NMP forces the first reduction to the opposite boundary to occur. Self dies and reincarnates as a time reversed self. The opposite boundary of CD would be now shifting "downwards" and increasing CD size further. At the next reduction to opposite boundary re-incarnation of self in the geometric future of the original self would occur. This would be re-incarnation in the sense of Eastern philosophies. It would make sense to wonder whose incarnation in geometric past I might represent! Could this allow to perform fast quantal computations by decomposing the computation to a sequence in which one proceeds in both directions of time? Could the incredible feats of some "human computers" rely on this quantum mechanism. The indian mathematician Srinivasa Ramanujan is the most well-known example of a mathematician with miraculous gifts. He told immediately answers to difficult mathematical questions - ordinary mortals had to to hard computational work to check that the answer was right. Many of the extremely intricate mathematical formulas of Ramanujan have been proved much later by using advanced number theory. Ramanujan told that he got the answers from his personal Goddess. Might it be possible in ZEO to perform quantally computations requiring classically non-polynomial time much faster - even in polynomial time? If this were the case, one might at least try to understand how Ramanujan did it although higher levels selves might be involved also (did his Goddess do the job?). 1. Quantal computation would correspond to a state function reduction sequence at fixed boundary of CD defining a mathematical mental image as sub-self. In the first reduction to the opposite boundary of CD sub-self representing mathematical mental image would die and quantum computation would halt. A new computation at opposite boundary proceeding to opposite direction of geometric time would begin and define a time-reversed mathematical mental image. This sequence of reincarnations of sub-self as its time reversal could give rise to a sequence of quantum computation like processes taking less time than usually since one half of computations would take place at the opposite boundary to opposite time direction (the size of CD increases as the boundary shifts). 2. If the average computation time is same at both boundaries, the computation time would be only halved. Not very impressive. However, if the mental images at second boundary - call it A - are short-lived and the selves at opposite boundary B are very long-lived and represent very long computations, the process could be very fast from the point of view of A! Could one overcome the P≠NP constraint by performing computations during time-reversed re-incarnations?! Short living mental images at this boundary and very long-lived mental images at the opposite boundary - could this be the secret of Ramanujan? 3. Was the Goddess of Ramanujan - self at higher level of self-hierarchy - nothing but a time reversal for some mathematical mental image of Ramanujan (Brahman=Atman!), representing very long quantal computations! We have night-day cycle of personal consciousness and it could correspond to a sequence of re-incarnations at some level of our personal self-hierarchy. Ramanujan tells that he met his Goddess in dreams. 
Was his Goddess the time reversal of that part of Ramanujan which was unconscious when Ramanujan slept? Intriguingly, Ramanujan was rather short-lived himself - he died at the age of 32! In fact, many geniuses have been rather short-lived.

4. Why was the alter ego of Ramanujan a Goddess? Jung intuited that our psyche has two aspects: anima and animus. Do they quite universally correspond to self and its time reversal? Do our mental images have gender?! Could our self-hierarchy be a hierarchical collection of animas and animi, so that gender would be something much deeper than biological sex! And what about the Yin-Yang duality of Chinese philosophy and the ka as the shadow of the persona in the mythology of ancient Egypt?

For a summary of earlier postings see Latest progress in TGD.

Friday, April 08, 2016

Quantum critical dark matter and tunneling in quantum chemistry

The quantum revolution, which started from biology, has started to infect also chemistry. There is an interesting article titled Exotic quantum effects can govern the chemistry around us. The article tells about the evidence that quantum tunnelling takes place in chemical reactions even at temperatures above the boiling point of water. This is not easy to explain in the standard quantum theory framework. No one except me has the courage to utter aloud the words "non-standard value of Planck constant". This is perfectly understandable, since at this moment these words would still mean instantaneous academic execution.

Quantum tunneling means that a quantum particle is able to move through a classically forbidden region, where its momentum would be imaginary. The tunnelling probability can be estimated by solving the Schrödinger equation assuming that a free particle described as a wave arrives from one side of the barrier and is partially reflected and partially transmitted. The tunneling probability is proportional to exp(-2∫ K dx), where k = iK is the imaginary wave vector in the forbidden region - imaginary because the kinetic energy T = p^2/2m of the particle equals T = E - V and is negative there. In the forbidden region the momentum p is imaginary, as is the wave vector k = iK = p/hbar. The transmission/tunnelling probability decreases exponentially with the height and width of the barrier. Hence tunnelling should be extremely improbable in macroscopic and even nano-scales. The belief has been that this is true also in chemistry, especially at high temperatures, where quantum coherence lengths are expected to be short. Experiments have forced us to challenge this belief.

In the TGD framework there is a hierarchy of phases of ordinary matter with Planck constant given by heff = n×h. The exponent in the tunneling probability is proportional to 1/hbar. If hbar is large, the tunnelling probability increases, since the damping exponential is near to unity. Tunneling becomes possible in scales which are by a factor heff/h = n longer than usually (a small numerical illustration is given below). At the microscopic level - in the sense of TGD space-time - the tunnelling would occur along magnetic flux tubes. This could explain the claimed tunneling effects in chemistry. In biochemistry these effects would be of special importance. In the TGD framework non-standard values of Planck constant are associated with quantum criticality, and there is experimental evidence for quantum criticality in the bio-chemistry of proteins (see also this).
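The exponential sensitivity to hbar is easy to see numerically. The sketch below uses illustrative numbers of my own choosing (an electron-mass particle and a 1 eV, 1 nm rectangular barrier, nothing from the cited experiments) and evaluates the WKB factor exp(-2KL) with hbar replaced by n×hbar, as the heff hypothesis suggests.

```python
# WKB tunneling factor exp(-2*K*L) through a rectangular barrier of height V0 and
# width L, for a particle of mass m and energy E < V0, with hbar scaled to n*hbar.
# Numbers are illustrative only.

from math import sqrt, exp

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg, electron mass
eV   = 1.602176634e-19   # J

def wkb_transmission(E_eV, V0_eV, L_m, m=m_e, n=1):
    """Approximate tunneling probability exp(-2 K L) with hbar -> n*hbar."""
    K = sqrt(2.0 * m * (V0_eV - E_eV) * eV) / (n * hbar)
    return exp(-2.0 * K * L_m)

if __name__ == "__main__":
    E, V0, L = 0.1, 1.0, 1e-9   # 0.1 eV particle, 1 eV barrier, 1 nm wide
    for n in (1, 2, 5, 10):
        print(n, wkb_transmission(E, V0, L, n=n))
    # For n = 1 the factor is tiny; it grows rapidly toward order unity as n increases,
    # which is the point made in the text about heff = n*h.
```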
In TGD framework quantum criticality is the basic postulate about quantum dynamics in all length scales and makes TGD unique since the fundamental coupling strength is analogous to critical temperature and therefore has a discrete spectrum. Physics student reading this has probably already noticed that diffraction is another fundamental quantum effect. By naive dimensional estimate, the sizes of diffraction spots should scale up by heff. This might provide a second manner to detect the presence of large heff photons and also other particles such as electrons. Dark variants of particles wold not be directly observable but might induce effects in ordinary matter making the scaled up diffraction spots visible. For instance, could our visual experience provide some support for large heff diffraction? The transformation of dark photons to biophotons might make this possible. P. S. Large heff quantum tunnelling could provide one further mechanism for cold fusion. The tunnelling probabily for overcoming Coulomb wall separating incoming charged nucleus from target nucleus is extremely small. If the value of Planck constant is scaled up, the probability increases by the above mechanism. Therefore TGD allows to consider at least 3 different mechanisms for cold fusion: all of them would rely on hierarchy of Planck constants. For a summary of earlier postings see Latest progress in TGD. Wednesday, April 06, 2016 Is cold fusion becoming a new technology? The progress in cold fusion research has been really fast during last years and the most recent news might well mean the final breakthrough concerning practical applications which would include not only wasteless energy production but maybe also production of elements such as metals. The popular article titled Cold Fusion Real, Revolutionary, and Ready Says Leading Scandinavian Newspaper ) tells about the work of Prof. Leif Holmlid and his student Sinder-Zeiner-Gundersen. For more details about the work of Holmlid et als ee this, this, this, and this. The latter revealed the details of an operating cold fusion reactor in Norway reported to generate 20 times more energy than required to activate it. The estimate of Holmlid is that Norway would need 100 kg of deuterium per year to satisfy its energy needs (this would suggest that the amount of fusion products is rather small to be practical except in situations, where the amounts needed are really small). The amusing co-incidence is that I constructed towards the end of the last year a detailed TGD based model of cold fusion ( see this) and the findings of Leif Holmlid served as an important guideline although the proposed mechanism is different. Histories are cruel, and the cruel history of cold fusion begins in 1989, when Pons and Fleichmann reported anomalous heat production involving palladium target and electrolysis in heavy water (deuterium replacing hydrogen). The reaction is impossible in the world governed by text book physics since Coulomb barrier makes it impossible for positively charged nuclei to get close enough. If ordinary fusion is in question, reaction products should involve gamma rays and neutrons and these have not been observed. The community preferred text books over observations and labelled Pons and Fleichman and their followers as crackpots and it became impossible to publish anything in so called respected journals. 
The pioneers have however continued to work with cold fusion and for few years ago American Chemical Society had to admit that there might be something in it and cold fusion researchers got a status of respectable researcher. There have been several proposals for working reactors such as Rossi's E-Cat and NASA is performing research in cold fusion. In countries like Finland cold fusion is still a cursed subject and will probably remain so until cold fusion becomes the main energy source in heating of also physics department. The model of Holmlid for cold fusion Leif Holmlid is a professor emeritus in chemistry at the University of Gothemburg. He has quite recently published a work on Rydberg matter in the prestigious journals of APS and is now invited to tell about his work on cold fusion to a meeting of American Physical Society. 1. Holmlid regards Rydberg matter) as a probable precursor of cold fusion. Rydberg atoms have some electrons at very high orbitals with large radius. Therefore the nuclei plus core electrons look for them like a point nucleus, which charge equal to nuclear charge plus that of core electrons. Rydberg matter forms layer-like structures with hexagonal lattice structure. 2. Cold fusion would involve the formation of what Holmlid calls ultra-dense deuterium having Rydberg matter as precursor. If I have understood correctly, the laser pulse hitting Rydberg matter would induce the formation of the ultra-dense phase of deuterium by contracting it strongly in the direction of the pulse. The ultra-dense phase would then suffer Coulomb explosion. The compression seems to be assumed to happen in all directions. To me the natural assumption would be that it occurs only in the direction of laser pulse defining the direction of force acting on the system. 3. The ultra-dense deuterium would have density about .13× 106 kg/m3, which is 1.3× 103 times that of ordinary water. The nuclei would be so close to each other that only a small perturbation would make possible to overcome the Coulomb wall and cold fusion can proceed. Critical system would be in question. It would be hard to predict the outcome of individual experiment. This would explain why the cold fusion experiments have been so hard to replicate. The existence of ultra-dense deuterium has not been proven but cold fusion seems takes place. Rydberg matter, which should not be confused with the ultra-dense phase would be the precursor of the process. I am not sure whether Rydberg matter exists before the process or whether it would be created by the laser pulse. Cold fusion would occur in the observed microscopic fracture zones of solid metal substances. Issues not so well-understood The process has some poorly understood aspects. 1. Muons as also of mesons like pion and kaon are detected in the outgoing beam generated by the laser pulse. Muons with mass about 106 MeV could be decay products of pions with mass of 140 MeV and kaons but how these particles with masses much larger than scale of nuclear binding energy per nucleon of about 7-8 MeV for ligher nuclei could be produced even if low energy nuclear reactions are involved? Pions appear as mediators of strong interaction in the old-fashioned model of nuclear interactions but the production on mass shell pions seems very implausible in low energy nuclear collisions. 2. What is even stranger that muons produced even when laser pulse is not used to initiate the reaction. 
Holmlid suggests that there are two reaction pathways for cold fusion: with and without the laser pulse. This forces to ask whether the creation of Rydberg matter or something analogous to it is alone enough to induce cold fusion and whether the laser beam actually provides the energy needed for this so that ultra-dense phase of deuterium would not be needed at all. Coulomb wall problem would be solve in some other manner. 3. The amount of gamma radiation and neurons is small so that ordinary cold fusion does not seem to be in question as would be implied by the proposed mechanism of overcoming the Coulomb wall. Muon production would suggest muon catalyzed fusion as a mechanism of cold fusion but also this mechanism should produce gammas and neutrons. TGD inspired model of cold fusion It seems that Holmlid's experiments realize cold fusion and that cold fusion might be soon a well-established technology. A real theoretical understanding is however missing. New physics is definitely required and TGD could provide it. 1. TGD based model of cold fusion relies on TGD based view about dark matter. Dark matter would correspond to phases of ordinary matter with non-standard value of Planck constant heff=n× h implying that the Compton sizes of elementary particles and atomic nuclei are scaled up by n and can be rather large - of atomic size or even larger. Also weak interactions can become dark: this means that weak boson Compton lengths are scaled up so that they are effectively massless below Compton length and weak interactions become as strong as electromagnetic interactions. If this happens, then weak interactions can lead to rapid beta decay of dark protons transforming them to neutrons (or effectively neutrons as it turns out). For instance, one can imagine that proton or deuteron approaching nucleus transforms rapidly to neutral state by exchange of dark W bosons and can overcome the Coulomb wall in this manner: this was my original proposal for the mechanism of cold fusion. 2. The model assumes that electrolysis leads to a formation of so called fourth phase of water discovered by Pollack. For instance, irradiation by infrared light can induce the formation of negatively charged exclusion zones (EZs) of Pollack. Maybe also the laser beam used in the experiments of Holmlid could do this so that compression to ultra-dense phase would not be needed. The fourth phase of water forms layered structures consisting of 2-D hexagonal lattices with stoichiometry H1.5O and carrying therefore a strong electric charge. Also Rydberg matter forms this kind of lattices, which suggests a connection with the experiments of Holmlid. Protons must go somewhere from the EZ and the interpretation is that one proton per hydrogen bonded pair of water molecules goes to a flux tube of the magnetic body of the system as dark proton with non-standard value of Planck constant heff=n× h and forms sequence of dark protons forming dark nucleus. If the binding energy of dark nucleus scales like 1/heff (1/size) the binding energy of dark nucleus is much smaller than that for ordinary nucleus. The liberated dark nuclear binding energy in the formation would generate further EZs and one would have a kind of chain reaction. In fact, this picture leads to the proposal that even old and boring ordinary electrolysis involves new physics. Hard to confess, but I have had grave difficulties in understanding why ionization should occur at all in electrolysis! 
The external electric field between the electrodes is extremely weak in atomic scales and it is difficult to understand how it induce ionization needed to load the electric battery! 3. The dark proton sequences need not be stable - the TGD counterpart for the Coulomb barrier problem. More than half of the nucleons of ordinary nuclei are neutrons and similar situation is the first expectation now. Dark weak boson (W) emission could lead to dark beta decay transforming proton to neutron or what looks like neutron (what this cryptic statement means would requires explanation about nuclear string model). This would stabilize the dark nuclei. An important prediction is that dark nuclei are beta stable since dark weak interactions are so fast. This is one of the predictions of the theory. Second important prediction is that gamma rays and neutrons are not produced at this stage. The analogs of gamma rays would have energies of order dark nuclear binding energy, which is ordinary nuclear energy scale scaled down by 1/n. Radiation at lower energies would be produced. I have a vague memory that X rays in keV range have been detected in cold fusion experiments. This would correspond to atomic size scale for dark nuclei. 4. How the ordinary nuclei are then produced? The dark nuclei could return back to negatively charged EZ (Coulomb attraction) or leave the system along magnetic flux tubes and collide with some target and transform to ordinary nuclei by phase transition reducing the value of heff. It would seem that metallic targets such as Pd are favorites in this respect. A possible reason is that metallic target can have negative surface charge densities (electron charge density waves are believed by some workers in the field to be important for cold fusion) and attract the positively charged dark nuclei at magnetic flux tubes. Essentially all of the nuclear binding energy would be liberated - not only the difference of binding energies for the reacting nuclei as in hot fusion. At this stage also ultra-dense regions of deuterium might be created since huge binding energy is liberated and could induce also ordinary fusion reactions. This process would create fractures in the metal target. This would also explain the claimed strange effects of so called Brown's gas generated in electrolysis on metals: it is claimed that Brown's gas (one piece of physics, which serious academic physicists enjoying monthly salary refuse to consider seriously) can melt metals although its temperature is not much more than 100 degrees Celsius. 5. This model would predict the formation of beta stable nuclei as dark proton sequences transform to ordinary nuclei. This process would be analogous to that believed to occur in supernova explosions and used to explain the synthesis of nuclei heavier than iron. This process could also replace the hypothesis about super-nova nucleosynthesis: indeed, SN1987A did not provide support for this hypothesis. The reactor of Rossi is reported to produce heavier isotopes of Ni and of Copper. This would strongly suggest that protons also fuse with Ni nuclei. Also heavier nuclei could enter to the magnetic flux tubes and form dark nuclei with dark protons transformed partially to neutral nucleons. Also the transformation of dark nuclei to ordinary nuclei could generate so high densities that ordinary nuclear reactions become possible. 6. What about the mysterious production of pions and mesons producing in turn muons? 1. 
Could the transformation of dark nuclei to ordinary nuclei generate so high a local temperature that hadron physics would provide an appropriate description of the situation? Pion mass corresponds to an energy of 140 MeV and a huge temperature of about .14 GeV. This is much higher than the solar temperature and looks totally implausible.

2. The total binding energy of a nucleus with 20 nucleons, if liberated as a single pion, would correspond to an energy of this order of magnitude. Dark nuclei are quantum coherent structures: could this make possible this kind of "holistic" process in the transformation to an ordinary nucleus? This might be part of the story.

3. Could the transformation to an ordinary nucleus involve the emission of a dark W boson with mass about 80 GeV decaying to dark quark pairs binding to dark mesons, which transform eventually to ordinary mesons? Could dark W boson emission occur quantum coherently, so that the amplitude would be a sum over the emission amplitudes, and one would have an amplification of the decay rate so that it would be proportional to the square of the dark nuclear charge? The effective masslessness below the atomic scale would make the rate for this process high. The emission would lead directly to the final state nucleus by emission of on mass shell mesons.

For background see the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or the article with the same title. For a summary of earlier postings see Latest progress in TGD.

Tuesday, April 05, 2016

1. Questions about SCS in TGD framework

To get plane wave normalization for the amplitudes x = log(rM/r0), ground states with negative conformal weight?

2. Questions about N=2 SCS

2.1 Inherent problems of N=2 SCS

N=2 SCS has some severe inherent problems.

2.2 Can one really apply N=2 SCFTs to TGD?
Monday, December 09, 2013 interimaginary philobiblionoi Screyen Thalannes is the deuteragonist in Thylesma Cholon's Time For Ice. Salasanny Kolonno is the protagonist of Hiet Mo's Sleepy Monkeys Do Not Eat Cheese. That there is a particular Screyen Thalannes, Screyen Matih Thalannes, and a particular Salasanny Kolonno, Salassany Ekinne Kolonno is also, by itself, just peachy. That Time For Ice is a book of fiction in Salassany Ekinne Kolonno's world, and that Sleepy Monkeys Do Not Eat Cheese is a book of fiction in Screyen Matih Thalannes's world, is also, just peachy. That Salassany Ekinne Kolonno and Screyen Matih Thalannes just happen to have fallen vaguely in love with each other probably will give pause to some of you with scant modal realism in your reality models. As it happens, Salassany Ekinne Kolonno and Screyen Matih Thalannes differ from the characters in the novels in ways which precisely accord with their unexpected transcendental consonance with each other. A wag says: "well, they should be finding real lovers instead of staying in the imagination!", to which the technically apropos response is this: 1. neither Salassany Ekinne Kolonno nor Screyen Matih Thalannes's fantasy life can be considered remotely unethical, given that they dream of each other, and specifically each other, on a regular basis. They are not fantasizing about other people nonconsensually. 2. Both Salassany Ekinne Kolonno and Screyen Matih Thalannes's imagination of the other has resulted in decisions which have improved the other's life. 3. For even gnarlier contextual froths for which words like 'universe' and 'cosmos' are around maximally inaccurate, packets with contents other than "I seem to be having tremendous difficulty with my lifestyle" sometimes manage to alliterate across Bifröst from originator to recipient and back and are interpreted correctly by both parties. Now, should a wag want a mechanism whose Schrödinger equation they can write down and calculate potentials and whatnot, and there will be persnickety sorts who will demand that the transmissal of such packets would be impossible without first intellectually understanding the mechanism. 4. Maybe they eventually found life partners in their respective worlds. That's beyond the scope of this post. What isn't is that they kept each other going, and isn't that what love does? Makes survival possible amongst other concordance-of-one's-environment-improving benefits? 5. This phenomenon happens more often than not sometimes. Some transcendents make a very rudimentary attempt to taxonomize it, but being something which is splayed and ramified across differentially tangled Indranets, there'll never be an absolute catalogue of it. Wednesday, December 04, 2013 context beware... The Gelturke look like small piles of bubbling muck and sticks. Of course, if you had access to the appropriate context-co-environment they just look like you or me, and have similar issues and life-stories. They share most of our genetic code too. The Sunspinners and Starwhirlers of the Linsellenorai coast, however aren't from around here and are arguably just a little more alien to us than the Gelturke and the Pund are. Yes, they kind of look like us. Genomically, they are around 60% tree, 30% butterfly, 5% mantis shrimp, and the rest is out of this world. When asked, Lirrensily Aranaic-Arracaranserel replied "metagenomics is fairly nifty, no?". Oh, and they're nuclear powered. 
They have two organelles which perform the proton-proton chain and CNO-cycling, and the gamma photons emitted are directly captured and drive an ATP synthase complex. Their bones are made from buckymesh. They're bioluminescent, their skin is covered in chromatophores. They're also a lot nicer than humans/hominids, in general. The exotic anatomies of the Sunspinners and Starwhirlers arose because a machine civilization found to its dismay that its power source (a pulsar) had encountered an exotic part of the pulsar-life cycle and would explode yielding mostly iron. (some extremely rare complex-exthalpy matter had gotten itself lodged in the core.) At that point, the best long term survival strategy was to build hominid-esque bodies for its constituents --the Sansuraro Galaxy was too far away from any civilization that could build quantum foam bodies for that to be an option -- and to transport them somewhere habitable. Tsiliere was the best bet -- nice atmosphere, relatively standard hominid population (except for the Gelturke),
Introduction to Chart to Scalar Theory

Chart to Scalar Theory says there is a correspondence or relation between the discrete buy and sell orders contained in stock market volume and the stock market geometry on the boundary. The boundary represents the charts which we can see. If you peel it back, there are buy and sell orders which are contained in the volume. The idea of the scalar is to encode the properties of the stock market into a scalar which represents a constraint of a hypothetical stock chart constrained by boundary conditions. The motivation behind Chart to Scalar is to convert stock charts and trading volume into a mathematical framework to better understand how the stock market works, and to make predictions. This has applications such as measuring capital inflows, market impact, simulating how external events such as insider trading affect how a stock will trade in the expectation that the market is flat or rising, explaining how crashes occur, and pricing options. The name Chart to Scalar arises from a variational problem that involves finding a hypothetical RHS (right hand side) chart or function that represents the path of a stock constrained by a scalar value that encodes information about the LHS (left hand side) chart, volume, geometry and other characteristics of the charts and external events, in addition to the constraints of the boundary conditions and endpoints, where is a function of minimum length. There are many models that try to explain how the markets work from a wide variety of research areas, such as behavioral economics, behavioral finance, and fundamental valuation, but there are fewer papers that envisage the stock market itself as a closed physical system. Dilip Abreu and Markus K. Brunnermeier (2003) discuss in the context of behavioral finance how the presence of rational arbitrageurs can create persistent bubbles. A paper by Bouchaud (1998) posits a nonlinear Langevin equation as a model for stock market fluctuations and crashes. Racorean (2014) proposes a model that encapsulates all of the trading activity of a group of stocks as a high-dimensional polygon. C. Zhang (2010) appropriates classical physics to establish the Schrödinger equation for a stock price. Hsinan Hsu (2001) applies the kinematic and kinetic theories of physics to derive price behavior equations for the stock markets. In Chart to Scalar, the stock market is simplified to a 1-dimensional 'universe' with two particles, a buy and a sell order, and the action of the system is restricted in such a way that along the x axis there is only allowed to be one y intercept (obviously because stocks cannot go 'back' in time). The variational equation outputs either a monotonically increasing/decreasing concave, linear, or convex RHS (right hand side) curve of how the stock should theoretically trade given the values in the equilibrium equation . If , the equilibrium equation can be interpreted to mean that the amount of money flowing into a stock on the LHS must equal the amount that flows out on the RHS. Or if , the amount of money that flows into the stock on the RHS must equal the amount that flowed out on the LHS. Rather than looking at forces, we reduce the market to static scalars that contain information about the market. The equation of motion is the path of least action that encloses the area of the scalar. This is a Lagrangian approach instead of a Newtonian one. Second, we introduce LHS 'resistance', which is encoded into the scalar.
This resistance is analogous to the resistance of gravity or friction in Newtonian mechanics, or an energy barrier in quantum mechanics. The first section covers the notation, the formulas that arise out of the buy/sell order annihilation process, and some examples. Then the model is extended to include a geometric component to provide an additional degree of freedom when needed. The second part will cover simulations and data extraction. In the third part of the paper an option pricing formula is derived via Chart to Scalar and the results are compared to Black-Scholes. The inclusion of volume and trade size could explain phenomena observed in real, live stocks that aren't accounted for in conventional option pricing models. This is important because traditional models (such as Black-Scholes and Binomial) have many significant limitations which Chart to Scalar may be able to resolve, such as the inability to account for historical market movements (Baggett & Thompson (2006))[1] and their frequent overpricing of options, with the overpricing increasing with the time to maturity (Hull & White (1987))[2]. In our paper, we conclude that Chart to Scalar options grow at a slower rate than Black-Scholes options, and secondly, that rare events according to Black-Scholes are more common in Chart to Scalar.

Key Points
1. The tendency of a stock to rise or fall is determined by the volume of the RHS (right hand side) of the stock graph functional relative to the volume of the LHS (left hand side) functional, in addition to other variables in the stock market equilibrium equation.
2. Buy and sell orders and their interaction are the propagators of price change.
3. In a restricted 1-d trading space, stocks trade along a geodesic, sweeping out convex, linear, or concave curves as the solution of a variational equation constrained by the equilibrium equation.
4. Most equations are scale invariant, meaning that adjusting time frames by a scaling factor will yield the same results.

Trading Space and Notation

The trading space is a quasi two-dimensional space that consists of two charts. We call it quasi-two-dimensional because backward motion is prohibited and hence all curves cannot have more than one y intercept. One chart is on the LHS (left-hand side) and the second on the RHS (right-hand side), and they are adjoined at some point, denoted as . The resistance and support forces, denoted by , occupy the LHS. A horizontal line drawn through will intersect the LHS curve at where . The curve is either concave, convex or a line. Depending on and additional variables like buy or sell orders, an RHS curve is produced, which is either concave, convex or linear. On the LHS, either a convex curve, a concave curve, or a line is signed either +1 or -1, denoted by . A positive sign means that the price on the LHS is rising. This is a support curve. A resistance curve has a negative sign. The RHS always has the opposite sign to the LHS, or . If the RHS is rising it will run up against the negatively signed LHS resistance curve. If the RHS is falling it will fall against the positively signed LHS support curve, as shown in the appended diagram (the shaded region represents volume). Negative values are not permitted: for all , for all . The perimeter trading space: For : For the parameters are reversed: Imagine a diagonal line that connects the points for an LHS curve and for an RHS curve . For either LHS or RHS, a concave curve cannot exceed this line and a convex curve cannot fall below it. With the new notation, can be re-written as .
As a rule of thumb, must always be positive, for any problem. must be chosen in such a way that this holds.

Annihilation Process

In Chart to Scalar theory, the interaction of buy orders and sell orders is the propagator of price change. This is expressed by the simple system of equations: Where is some function that inputs price and returns volume. And is RHS volume and is the average order size. The challenge is finding . In the subsequent sections, we'll try to derive a relationship between price and volume using simple geometric shapes such as triangles, lines, and arcs to represent the paths of stocks. An alternative derivation of using an order book can be found here: User:Optionpricing As a stock is trading in some time frame there exists a value called an energy level that is the product of two or three components: a slope/time component , a volume component , and a geometric component , which is added for more complex problems and will be discussed in further detail later. The LHS (left-hand side) values are denoted by subscript such as and RHS by such as . The RHS product is ; the LHS is . Setting them equal, we have the equilibrium equation: . In this trading space a buy and sell order of equal price and quantity will instantly annihilate, producing no net energy (represented as a flat line on a chart). An excess of buy orders will cause a price rise appearing as a rising slope on a chart or . However, the actual price of the stock need not matter. Instead, the ratio of buy orders to total orders does. Thus a stock trading at 10 cents will experience the same percentage change in price as one trading at $10 if the ratios of buy and sell orders are the same. We're looking for a function satisfying , where is a scaling factor; the solution is . All else (volume, time) being equal, a stock rising from $90 to $100 should have the same properties as one rising from $9 to $10. We also need some way to define the x component of a slope in a manner that is time or scale invariant. Define the variable that relates the time duration of events on the LHS to the RHS. The actual units of time are not important. For the LHS, the default time is . This serves as our reference point. For example, if then it means the time or duration of the LHS of the chart is the same as the RHS. We define as a function that ranges between 0 and 1 and measures the percentage of volume 'buys' or 'sells' remaining after buy and sell order annihilation has occurred, and is dependent on the rate of change of a stock in some time frame or . can be interpreted as some function of or . For the RHS, we need to define both in terms of the ratio of buy & sell orders and the rate of change. In terms of the rate of change of price over the 'base' or time duration (where ) and : and in terms of buy and sell orders: Intuitively this makes sense. If then or no change in price. If vastly exceeds then increases. Because the total number of buy and sell orders is then . Thus can be rewritten as: We're looking for a function that converges to 1 as the ratio of increases, and 0 if . The example below suffices: Define The total number of orders is equal to the volume divided by the average order size. and are recovered from the equation above: Consider there are 5 total orders: three buys and two sells in some infinitesimal trading space. Two buys and two sells will cancel, and one buy order will be left. Hence, . If then all the buys and sells have cancelled and there are none left over, . If then because all of the volume is buy orders.
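To make the annihilation bookkeeping above concrete, here is a minimal Python sketch of the counting argument, under the assumption (consistent with the five-order example) that the leftover fraction is the net imbalance of buys over sells divided by the total order count, and that the total number of orders equals volume divided by average order size. The function names are illustrative placeholders, not part of the theory's own notation, and the elided symbols in the text above are not reconstructed here.

```python
def leftover_fraction(buys, sells):
    """Fraction of orders left over after equal numbers of buys and sells cancel.
    With 3 buys and 2 sells, two pairs annihilate and 1 of 5 orders remains -> 0.2."""
    total = buys + sells
    return abs(buys - sells) / total if total else 0.0

def order_split(volume, avg_order_size, beta, rising=True):
    """Recover buy/sell order counts from total volume, average order size and
    the leftover fraction beta (0 = fully cancelled, 1 = one-sided order flow)."""
    total_orders = volume / avg_order_size
    majority = total_orders * (1 + beta) / 2
    minority = total_orders * (1 - beta) / 2
    return (majority, minority) if rising else (minority, majority)

# The worked example from the text: three buys, two sells, unit order size.
beta = leftover_fraction(3, 2)                                   # 0.2
buys, sells = order_split(volume=5, avg_order_size=1, beta=beta)
print(beta, buys, sells)                                         # 0.2 3.0 2.0
```

The round trip (five unit-size orders back to three buys and two sells) is only a consistency check on the counting, not a claim about the full equations, whose symbols are missing from the extracted text.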
We can convert charts into buy and sell orders and vice versa through the relation: The solution: Putting these values back into and , and are obtained. is our time/slope reference triangle: The buy and sell order equations: is added to account for negative RHS slopes, so as to preserve symmetry between the buys and sells.

Example 1:
Suppose a stock rises from $50 to $51 in a one-day period with a volume of . How much volume would be required for the stock to fall back to $50 instantaneously? is irrelevant. The time duration of events on the RHS is infinitely small relative to the LHS. Because and , we have Setting up the equilibrium equation The solution shares need to be sold. Part 2: How many buy and sell orders? With (the LHS is rising), when we plug into the buy order equation we get . This answer is interpreted to mean that there are zero buy orders as the stock falls from $51 to $50 in an infinitesimal time frame. . Thus we have 100% sell orders for the volume, for any .

Example 2:
A stock has risen from $10 to $11 in 30 trading days with an arbitrary daily volume. How much volume is required for the stock to fall back to $10 in one trading day? The solution is 2.3, which means that it requires 2.3 times the daily volume for the stock to fall back to $10 in a one-day trading period. This agrees with empirical observations of how stocks and indexes can erase months' or weeks' worth of gains in a single day or a couple of days during a crash. An increase of 130% of the daily volume, or 7.8% of the total LHS volume, is sufficient for the stock to forfeit a month's worth of gains. Part 2: How many buy and sell orders? When we plug into the buy order equation we get This answer is interpreted to mean that 20.5% of the RHS volume consists of buy orders and 79.5% sell orders, for any . Also note how . This price/volume dynamic could explain why flash crashes occur and how preventing them would be difficult, because a relatively small amount of high-density volume is required for a stock to experience a rapid decline. For certain problems the linearized versions of and don't give enough degrees of freedom, hence a geometric component is introduced. measures the efficiency of a path . is a value between 0 and 1 that measures how much a stock path deviates from a straight line, with 1 being a line. A line is the most efficient path and has a , but in other circumstances, such as modeling insider selling, a different path is stipulated that may be convex or concave shaped. To see why an extra variable is needed, consider we have our buy order equation with Consider the equilibrium equation If we wish to simulate churning we have: (Churning volume is volume that is evenly split between buy and sell orders.) Obviously, the equilibrium equation doesn't hold, but introducing a new variable bypasses this. We have: As grows the path becomes less efficient, as indicated by falling . By defining to be a functional path, we can give the output as a solution of a variational equation as the path of least action constrained by the equilibrium equation. This path may not be linear, but concave or convex depending on our criteria. The path is how the stock should hypothetically trade on the RHS with the introduction of constraints, such as extra buy or sell orders or churning volume. The buy & sell order equations include As , then and . As more churning volume is added, the number of buy orders approaches half the total volume .
Either an inefficient path or a small results in an even admixture of buy and sell orders.

Defining

Define as a function that ranges between 0 and 1 depending on how much the area under the curve or differs from the area under the right triangle adjoining the points (LHS) or (RHS), and approaches zero for increasing deviation or 1 for a diagonal line. Concave and convex curves and lines arise out of solutions to the variational equation when constrained by the equilibrium equation. The linear path is the most optimal and therefore has the highest 'energy density'. An expression of or for concave curves: The convex curves: where are convex curves. If are expressed as a column vector, the complementary convex curves are found by using a transformation matrix to reflect about the line adjoining for the LHS and for the RHS. These new curves have the same as the original curves about the endpoints of the region. For example, let's assume our RHS space is a square with the coordinates and we have a concave curve . We have: and for we also have The following table determines if you use a convex or a concave curve:
Convex LHS / Convex RHS
Convex LHS / Convex RHS
Concave LHS / Concave RHS
Concave LHS / Concave RHS

Invariance of

Crucially, exhibits scale invariance. Imagine that a chart occupying a computer screen is re-sized in such a way that the curves or appear stretched, compressed, or shifted. Another person on a different computer terminal looks at the same stock with a different time frame. Both should arrive at the same value of . Begin with the transformation: The limits of integration are compressed and and are the same. Define . Via integration by substitution we obtain: Plugging the transformed values of and into 1.0, the 'a' variable cancels out and we're left with the original equation. For example, consider a concave RHS chart given by the portion of the equation where . We know because the RHS is rising. Also: , , , , . Plugging everything into 1.0 gives Now consider that the person viewing the graph wants a longer time frame on the same screen. will be compressed on the x axis by some factor . Substituting gives . The transformed values of are now because the time frame is compressed. As expected, and are unchanged because all we're doing is compressing the graph, not changing the starting and final price. Plugging these new values into 1.0 still gives .

Invariance of Volume

Let's assume is bounded between and where . There also exists an LHS volume function between the aforementioned boundaries. For simplicity, will be linear. The total volume between and will be denoted as . Hence, we can express as a proportional relationship . Suppose 'Tom' on computer 'A' observes that stock XYZ has carved a convex chart shape bounded by . 'Mark' observes this shape as well, but his chart is stretched/resized on his computer and he gets bounded by . . For example, assuming , they will both arrive at: Or for general volume

Variational equation for

With , we have the complete equilibrium equation: The geodesic is the path bounded between the endpoints and that has the shortest length and satisfies the equilibrium equation.
This is an isoperimetric problem where L is minimized while constrained by and , bounded by , , , and : Variational equation: Constrained by the equilibrium equation and appropriate boundary conditions. This can be solved using the Euler formula and Lagrange multipliers, although the boundary conditions make the problem difficult and closed-form solutions often don't exist.

Examples computing

The formulas below assume LHS volume does not vary. For curved , computing can be difficult, but there are some functions, such as quadratics and ramps, that are easier. Consider with the boundary , and . For quadratics of the form: Upon simplification we have: Or the concave curve: After some labor: Consider we have an LHS ramp function connected by the coordinates where We have: , and

Flat Market Simulations

In this section, Chart to Scalar will be applied to examples where the expectation is that the market will be flat on the RHS and we wish to simulate how changing or adding variables will affect the hypothetical RHS chart. In a flat market the expected number of buys & sells on the RHS is equal. Because we have a particularly simple way of writing the sell order equation as . To derive this, begin with the equilibrium and buy order equation: Note: to simulate a stock sale in a flat market we have . The LHS resistor is positively sloped. The information on the LHS is known ahead of time, but the RHS information is unknown aside from , which is evenly split between buy and sell orders (because the market is anticipated to be flat). To make the buy order equation useful for simulation purposes, we can replace the unknown RHS variables with the known LHS ones by rearranging the equilibrium equation: Plugging this into the buy order equation: As we showed in a previous section, is a function of , giving us a relationship between buy orders and price. Written out completely, we have: And the complementary sell order equation: Where and In the flat market we know , hence we have: or as a solution. If we add extra sell orders but leave unchanged, then we have upon solving . This makes sense because we would expect extra sell orders to result in a lower price.

Simple Example

Consider a linear LHS for a stock that has fallen from $33 to $30 in a 1-month period with a volume of , and the RHS is a reflection such that . The RHS/LHS triangles are oriented in such a way that the LHS and RHS vertexes are , , , and It's trivial to show that the equilibrium equation holds. But if we increase volume on the RHS by and we still want the equilibrium equation to hold and we require that , then we must modify . Solving gives To generate the curve use the or equation with the appropriate boundary variables. We obtain: With the endpoints: and The is a monotonically increasing curve of minimum length that lies between the boundaries. It's easy to see that when the solution is a line. Increasing results in a convex curve that almost completely hugs the boundaries and while enclosing an area that approaches 30. The diagram below illustrates how increasing results in a concave path. If we let we can compute the proportion of buy orders to total volume with and without the addition of extra volume. Originally, we have or about 52% of the total 10^8 volume are buy orders. If then we have total orders and only 51% of the total volume are buy orders.

Insider Selling

Next, simulate how insider selling (or a fund selling a specified number of shares) changes the pathway of a stock.
The additional sell orders result in a concave/convex curve with a lower final price. The itself has to satisfy both the buy order equation (to factor in the additional sell orders) and the equilibrium equation. We will use most of the same information from the prior problem but change the coordinates to for the LHS and for the RHS. When plotted, the LHS serves as the support triangle and the RHS shows an equal number of buy/sell orders, denoted as a line between its two points. We'll begin by solving for , or the final price. Let , , , , (irrelevant for this problem) and (to indicate the stock is falling). We'll assume an insider is selling shares. Because the RHS is initially a line, we have an even admixture of buys & sells and then we add extra sells. And since is linear we have . After some manipulation, we find that Using the buy order equation we have or in expanded form: Solving gives , or a four percent decline attributed to insider selling. Solving the equilibrium equation we find . Because is so close to 1 we know the resulting shape is approximately a line with the endpoints and

Raising Capital

An additional application is if a firm or individual shareholder wants to sell to a private investor to raise money. How should the shares be priced? We can use Chart to Scalar to simulate the instantaneous sale of some number of shares required to raise y dollars. If the present price is , the number of shares sold is where is the final price after the shares are sold. Since the transaction is an instantaneous event there are no buy orders on the RHS, and hence and obviously , since the stock is falling on the RHS from the selling. We have: For example, consider a stock that has risen from $160 to its present price of in 60 trading days with an average daily volume of . The shape of the LHS is linear . If a large shareholder wants to raise $1 billion , calculate how many shares should be sold and at what price. The volume function: Solving, we find , assuming the shares are sold instantly on the open market, and 4.5 million shares sold to raise $1 billion. But some shares will be sold at around $240 and others at $202. Taking the midpoint, $221 is a fair value for the private secondary offering. In summary, we see that the buy order equation tells us the final price of the stock; the equilibrium equation gives us the shape.

Bubble Theorem

Chart to Scalar can be used to show that charts that resemble 'bubbles' are more susceptible to collapse, in agreement with real-life results. A bubble on a stock chart typically resembles a parabola. Using the results in section 3, we will compute for our bubble and for an 'anti-bubble'. Then we'll show with some calculus that fewer shares are needed for a stock with a 'bubble' chart to fall, and hence it is more vulnerable to collapse than the 'anti-bubble'. Let's assume that The simplest 'bubble' that passes through the endpoints is . Using methods of linear algebra, it's trivial to show the reflection about , or 'anti-bubble', is . Both these curves have the same , equal to 2/3, when the integrals are evaluated between 0 and 1. Showing that the 'bubble' chart is more vulnerable to collapse is the same as showing that for some Furthermore, using the properties of inverse functions we know that . Let and we have and . After some labor we have the following inequality (where the right hand side is the anti-bubble): Using the binomial theorem on the square root and dividing both sides by we have Now it's obvious that as approaches zero the anti-bubble is bigger.
If we let with the same boundary conditions as mentioned earlier, we can simulate the behavior as becomes increasingly concave - that is, letting approach infinity. Such a curve will appear to hug the boundaries for sufficiently large . Using methods given in section 3 and letting , we compute: Using the binomial theorem and upon simplification we have an infinite series that begins: Now it's obvious that goes to zero as n becomes increasingly large. What this means is that as a stock chart appears increasingly parabolic, it becomes less stable and more prone to falling. This is because of the increasingly small cross-sectional area of between and .

Pricing Options

The Chart to Scalar option pricing formula is a consequence of the broader Chart to Scalar theory through an equivalency relation between the statistical properties of buy and sell orders and price. Differences between conventional option pricing models and Chart to Scalar:
1. The volatility-like variable in Chart to Scalar is a combination of variables such as time until expiration, volume/price differentials, and trade size .
2. Normal distribution instead of log-normal.
3. Option prices grow at versus for Black-Scholes. Short-term options are more expensive and longer-term ones cheaper in Chart to Scalar than in Black-Scholes. (This is explained in more detail in the final section.)

Derivation, Part 1: The Displacement Equivalence Relation

The idea is to establish a relationship between discrete 'steps' and price displacement. Consider a hypothetical stock chart where a stock has moved some amount (usually a small percentage) over some time duration (often measured in years). It can be visualized as a triangle, with the vertices being and and where is the present price of the stock and the displacement is . Denote as the time until expiration. Consider a discrete sum of up and down stock orders denoted by Each 'up' and 'down' represents a 'time unit'. Adding the 'ups' and 'downs' gives the sum of units. The second part of the fundamental interaction is the difference between 'ups' and 'downs': This means that the difference between 'ups' and 'downs' gives a function in terms of a new displacement, where . The pair of linear equations solves for and : Consider the proportional relation between two displacements, the base one with and our new one, Solving for gives the needed function in terms of , which is plugged into : When , the stock is unchanged, meaning that the number of 'up' units is equal to the 'down' ones. What we've done is establish a relationship between displacement of price and 'up' and 'down' units. 'Up' and 'down' units, analogous to tossing a coin, also obey a normal distribution: There is also for the price. (This is because if the stock is unchanged, hence , meaning that the number of 'up' units is the same as 'down', resulting in no displacement.) We have to find Because of the equivalence between units and price displacement, the can be solved by setting From the equivalence: We have: Rearranging gives the classic result: setting (for a single year and is the fraction of the year)

Derivation, Part 2

Let be linear so that The LHS resistor force is denoted by a triangle with the vertexes . The total volume between and is distributed evenly. Thus the volume between segment and is Plugging into the buy order equation we have (a first-order Taylor approximation for is used): Like above, let Also The normal distribution is a fundamental solution to the heat equation.
This is an initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions. is a function of and time, hence So we have (for some constant b) Because k is a function of t, integration is necessary to solve the PDE. Letting , restricting the bounds of the integration from to , and evaluating the integral to compute the call: If and are of the form with the time factors multiplied by a constant m, then: Where is the time until expiration. Suppose a hypothetical stock has fallen from $40 to $30 in a 6-month period (120 days) with a daily volume of and and . Calculate the $30 call with an expiration of 20 days. Assume no interest. We find Since , the call option pricing formula simplifies dramatically and we have .

Chart to Scalar Option Pricing vs. Black-Scholes

Consider the example above where (at-the-money). The call price for Chart to Scalar can be approximated as: The approximation with Black-Scholes for at-the-money calls is: In both instances, is the number of days until expiration. The notable difference between Chart to Scalar and conventional option pricing models is that sigma is proportional to instead of . This generates fat tails due to the intrinsic property of the price/volume dynamic, without the need for added fat-tail parameters or the market being incomplete. To test the difference, using the same variables as in the earlier example for the at-the-money call, the call prices for a variety of times to expiration are plotted. The difficulty is converting price displacement into a volatility, but I think seems reasonable based on empirical evidence. This is more volatile than an index fund, which has volatility that fluctuates between .15 and .20.

Figure: For the example above, Chart to Scalar (blue) vs. Black-Scholes (purple) for an at-the-money call. The vertical axis is the call price; the horizontal axis is the number of trading days until expiration.

A noted limitation of Black-Scholes is that it tends to underestimate the probability of uncommon events, even though these events are observed more frequently in real life. For the example given in this section, if we raise the strike to , reduce the time until expiration to one day , and keep all other variables unchanged, Black-Scholes gives a probability of about 1/4000 of the option being exercised; or, to put it another way, one would have to wait a decade for the underlying to rise 4.7% in a single day. Chart to Scalar, on the other hand, gives a 1/150 probability, which is more realistic.

High Frequency Trading (HFT)

We hypothesize that HFT has a stabilizing effect on the market. E. Renshaw (1995) showed that the market is more stable than it used to be. Chart to Scalar could provide a mathematical explanation for why this may be. Consider , which we derived earlier. If the trade size is small relative to , then the variability of moves is smaller ( for example). On the other hand, if we increase , , and by a scaling factor, it cancels out and is unchanged. So in the context of this model, HFT is either stabilizing or neutral. For example, without HFT let's assume we have: Now introduce HFT: We've increased the volume of the stock on the LHS (left hand side) and the RHS, but the trade size is unchanged because we're assuming the high frequency trades are no larger than normal trades. Because: We see that the noise orders have a stabilizing effect. The limitations of this model are the assumption that HFT is the same as random trades and the neglect of possible feedback effects.
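Since the Chart to Scalar formulas themselves are elided in the extracted text above, only the Black-Scholes side of the comparison can be reproduced independently. The sketch below is a minimal Python implementation of the standard Black-Scholes call with zero interest rate, plus the common at-the-money shortcut C ≈ 0.3989·S·σ·√T, so the reference curve used in the plotted comparison can be regenerated. The specific inputs (S = K = $30, σ = 0.30, 20 trading days) are illustrative assumptions; the article's own σ value is missing from the text.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, sigma, T, r=0.0):
    """Standard Black-Scholes European call price (T in years)."""
    if T <= 0:
        return max(S - K, 0.0)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_call_atm_approx(S, sigma, T):
    """Common at-the-money shortcut: C ~ 0.3989 * S * sigma * sqrt(T)."""
    return 0.3989 * S * sigma * sqrt(T)

# Illustrative assumption: S = K = $30, sigma = 0.30, 20 trading days to expiry.
S, K, sigma, T = 30.0, 30.0, 0.30, 20 / 252
print(round(bs_call(S, K, sigma, T), 2))           # ~1.01
print(round(bs_call_atm_approx(S, sigma, T), 2))   # ~1.01
```

The agreement of the exact price and the shortcut at the money illustrates why the article's comparison can be phrased in terms of how the option price scales with the time to expiration.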
Single Solution

Solutions to problems in Chart to Scalar where are not unique, meaning there are two solutions: a concave and a convex one for the RHS (right-hand side). The introduction of integrals on the LHS and RHS equilibrium equations, respectively, allows for a single solution. This is because solving the equilibrium equation results in two unique scalar values . Due to the variational principle, the value of d which corresponds to a longer can be discarded, leaving a path of least distance that adheres to the constraints. Define as the average price of : The equilibrium equation with : The buy order equation: Sell order equation: Like before, RHS Convex: RHS Concave:

Example 1: Bursting of a Stock Market Bubble

Consider a simulation of a simple stock market to show how the concavity of the RHS of the bubble bursting must match the LHS of the bubble inflating. The inflation of the bubble on the LHS is a concave curve given by bounded between . The RHS is bounded between . For this simulation, the hypothetical stock rises from to and it falls back to as the bubble deflates. () is chosen to indicate the LHS is rising. The LHS and RHS volume is equal and . Thus the conditions are imposed: The time symmetry condition: are imposed, meaning that the duration of events on the LHS is equal to the RHS. For the LHS concave curve of form , the formula is used (making reference to the original paper): Hence, (because ) And The equilibrium equation which is solved for d (one for the concave RHS and one for the convex RHS): These are plugged into their respective formulas to calculate the defect. The goal is to show the convex solution has a greater defect than the concave one: Because of the scale invariance properties of , the above formula reduces to: letting , we have the infinite series expansion about : Because the concave has a smaller defect, the path is shorter (closest to a straight line), and hence the concave path is chosen as minimizing the action, which completes the proof. This concave-on-concave symmetry agrees with examples in real life of various asset bubbles bursting.

Example 2: (buy order equation)

Example: , the x coordinates are the same as in the example above, and , and For this example, the RHS is a concave reflection of the LHS, thus and . To compute the buy and sell orders for the RHS: For the rest of this summary, The actual choice of does not matter for non-statistical problems.

Example 3: (system of equations)

What if is different? Consider a general case where is only slightly less than . Then it becomes more complicated because is unknown and cannot be assumed to be equal to , and becomes a function in terms of instead of just 2/3. The buy order equation and equilibrium equation must be combined for problems where the and or components are not equal, the result being a system of two equations that is solved for (both for the concave and convex) and (the final price of the stock, for both the concave and convex). The value of and corresponding to the greatest defect is discarded. As before, The components are as follows: The inverse of evaluated at : Both will be specified later. In the first example, . is the functional form of LHS volume in terms of , whereas is the total volume along the interval RHS Convex: RHS Concave: The system of equations is solved for and : After some labor, has the series approximation about : As , the solution is . This is because if the number of buy orders is half of the RHS volume, we expect the stock to end unchanged.
Consider a small imbalance: . Let: Solving for p and d gives six possible solution pairs, but only one logically makes sense. The convex and concave curves enclose roughly the same area, indicating that the resulting LHS path is very close to being linear. Plugging these solutions into their respective convex and concave shows that the defect is very small, roughly 2.5%.

Example 4: deriving the market impact square-root rule

A formula very similar to the 'square-root' rule [3] is derived. Consider the instantaneous sale of stock. For simplicity, let the LHS be linear. An instantaneous sale means . Therefore, Define as the lower end of the stock range and as the present price. The LHS can be visualized as a triangle with the vertices Since the LHS is linear, is the final price of the stock after the instantaneous sale is rendered. Because (the LHS is rising), Where is the total volume of the LHS between x=0, x=1 (some period of time). This is obtained by taking the inverse of and finding the proportion of volume that is 'liberated' by the stock falling to . Via the triangle, Set . Then and gives the proportion. can be approximated as Set where is the 'impact'. Setting up the equilibrium equation and solving for we have: The volatility-like variable can be written as: . Hence, we have: As we would expect, the volatility term is scale invariant, but the impact is proportional to the initial price . If is much smaller than , we have a greater price range (more volatility). The term is somewhat arbitrary, but we still have the volatility and square-root impact relation.

Example 5: inflows and outflows

Inflows and outflows for the RHS chart are calculated. However, this is only an approximation; calculating an exact inflow or outflow using a single, closed-form expression is impossible and impractical, but we can get a ballpark estimate. For small increases or decreases in price (), we can use the linear approximation of the natural log and . We have: Inflow $ If , we have an outflow. If , there is neither an inflow nor an outflow. Here, the average price is , which is a linear approximation of the exact average price of . This is good enough for small changes in price. can be added if the shape resembles a hyperbola, regardless of concavity. Let's assume we want to calculate the dollar inflow for Microsoft for a single trading day. If on the RHS we observe and the shape of the RHS is concave and resembles the function (noticing this satisfies and endpoints and has at ), we have and . Putting it all together, we have an inflow of around $32.6 million while around $303 million changed hands.

References
1. Baggett, L. Scott; Thompson, James; Williams, Edward; Wojciechowski, William (October 2006). "Nobels for nonsense". Journal of Post Keynesian Economics. 29 (1): 3–18.
2. Hull, John; White, Alan (June 1987). "The Pricing of Options on Assets with Stochastic Volatilities". Journal of Finance. 42 (2): 281–300.
3. Gatheral, Jim (October 2011). "Optimal order execution". JOIM Fall Conference, Boston.
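For comparison with Example 4, here is a sketch of the empirical square-root impact rule referenced there (Gatheral [3]) in its commonly quoted form: relative impact ≈ Y · σ · √(Q/V), with Q the quantity traded, V the typical daily volume, σ the daily volatility, and Y an order-one constant. The constant Y and the sample numbers below are assumptions for illustration, not values taken from the derivation above.

```python
from math import sqrt

def sqrt_impact(quantity, daily_volume, daily_vol, Y=1.0):
    """Empirical square-root market impact: relative price move ~ Y * sigma * sqrt(Q/V)."""
    return Y * daily_vol * sqrt(quantity / daily_volume)

# Illustrative: selling 5% of a day's volume in a stock with 2% daily volatility.
impact = sqrt_impact(quantity=5e5, daily_volume=1e7, daily_vol=0.02)
print(f"expected relative impact ~ {impact:.4%}")   # ~0.45%
```

As in the text's derivation, the scale-invariant quantity is the volatility-like factor, while the size of the trade enters only through the dimensionless ratio Q/V.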
Leonid Chaichenets Dr. rer. nat. Karlsruher Institut für Technologie (KIT) 2018 Germany Dissertation: Modulation spaces and nonlinear Schrödinger equations Advisor 1: Peer Christian Kunstmann Advisor 2: Dirk Hundertmark Advisor 3: Lutz W. Weis No students known.
Interstate interference of electron wave packet tunneling through a quantum ring
Takuma Okunishi, Yusuke Ohtsuka, Masakazu Muraguchi, Kyozaburo Takeda
Physical Review B - Condensed Matter and Materials Physics, Article 245314, Issue 24. Published 2007 Jun 8.
We theoretically study the time-developed progress of resonant tunneling for an electron wave packet injected into a two-dimensional quantum ring (QR) by solving the time-dependent (TD) Schrödinger equation numerically. Focusing on an extraction of the angular momentum lz, we examine the TD features in the resonant tunneling electron by projection analysis in which the resulting TD wave function at the QR is decomposed into the (static) resonant states by calculating the inner products among them. This analysis reveals that the two-states approach is well applicable for the QR system and the cross terms between these two states are crucial for the TD vacillation of both the expectation value of the angular momentum lz and the electron density ρ. The quasidegeneracy of the resonant states causes a characteristic beating whose frequency is determined by the difference between the eigenenergies. We further study the corresponding TD phenomena under a magnetic field and find that the rotational direction in ρ changes in accordance with the strength of the magnetic field. This feature seems to be very different from the classical prospect of the cyclotron motion where the application of the magnetic field determines the rotational direction uniquely.
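The "projection analysis" described in the abstract amounts to expanding the time-dependent wave packet in the static resonant states and tracking the expansion coefficients. Below is a minimal, generic Python sketch of that bookkeeping for a two-state subspace (the regime the abstract says a two-states approach captures well): the states, energies, and weights are placeholders rather than data from the paper, and the beating frequency follows from the standard result ω = (E2 − E1)/ħ.

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

def project(psi_t, basis_states, weights):
    """Inner products <phi_n|psi(t)> on a discrete spatial grid (weights = quadrature weights)."""
    return np.array([np.sum(weights * np.conj(phi) * psi_t) for phi in basis_states])

def two_state_signal(c1, c2, E1, E2, t):
    """|c1 e^{-iE1 t} + c2 e^{-iE2 t}|^2: the cross term beats at (E2 - E1)/hbar."""
    a1 = c1 * np.exp(-1j * E1 * t / hbar)
    a2 = c2 * np.exp(-1j * E2 * t / hbar)
    return np.abs(a1) ** 2 + np.abs(a2) ** 2 + 2 * np.real(a1 * np.conj(a2))

# Placeholder numbers: equal weight in two nearly degenerate resonant states.
E1, E2 = 1.00, 1.05
t = np.linspace(0.0, 500.0, 2000)
signal = two_state_signal(1 / np.sqrt(2), 1 / np.sqrt(2), E1, E2, t)
print("beat period ~", 2 * np.pi * hbar / (E2 - E1))  # ~125.7 in these units
```

The oscillation of the cross term is the quasidegeneracy beating the abstract refers to; in an actual calculation the coefficients would come from `project` applied to the numerically propagated wave packet.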
So, this crank John Gabriel exploded on the Mathmatical Mathematics Memes page on facebook recently, and he’s hilarious. Now, there’s cranks in every area of science of course; most notably in physics (quantum woo), biology (creationists), geology (creationists again), history (creationists again, holocaust-deniers), philosophy (theologians 😉) and of course medicine (alternative medicine, faith healing…), but in mathematics they happen to be rather rare – or at least there are few interesting ones. Or I just haven’t found their hiding place yet. But I suspect that’s because to be a crank you either have to flat-out lie to people (and what would be the point with math?) or 1. Not know enough about the subject to realize you’re wrong, while at the same time 2. think you know enough to boldly proclaim your wrongness to the public. I imagine that’s easier with e.g. physics, where people can read popular books dumbed down for a lay audience (and I don’t mean that in a derogatory way – I love pop science!) and come away thinking that they now know all the important stuff and can start drawing their own conclusions on the subject matter (Spoiler alert: No, you can’t. If you can’t solve a Schrödinger equation, you’re simply not qualified when it comes to quantum physics, period.) But with math I can imagine it being a lot harder to both think you understand something well enough to pontificate about it while at the same time not understanding it enough to realize your pontifications make no damn sense. John Gabriel manages to do both, and it’s fascinatingly weird. He’s the perfect embodiment of the Dunning-Krüger effect on steroids: He understands so little about modern mathematics that he doesn’t even realize how little he understands, and instead thinks he’s the only one who really gets how math works. In typical crank fashion he rails against “stupid academia” who get so hung up on useless concepts like “reason” or “making any sense whatsoever” that they just don’t realize what a genius he is. Or it could be that he’s just wrong and makes no fucking sense. It’s a toss-up. John, let me recite Potholer’s Trichotomy to you: If something in science doesn’t make sense to you, you have to conclude that either 1. Research scientists are all incompetent, or 2. they’re all in on a conspiracy to deceive you, or 3. they know something you don’t, and you need to find out what that is. Hint: Try option three first.Potholer54 Interestingly enough, I had read about Gabriel before – years ago on good math, bad math, where he ended up arguing with Mark Chu-Carroll about Cantor’s second diagonal argument. That article is from 2010, but apparently about a year ago Gabriel started a youtube channel, presumably in the hopes to bring more people to his more enlightened (i.e. nonsensical) side and to proclaim the fact that he invented a new calculus! That’s right, he has reinvented calculus, and his version is much better and simpler and it’s easy to understand for anyone open enough to abandon sense and rigor, unlike all those stupid academics. And given that I’ve just been made aware of his existence again, I figured I’d give it a go and dissect that guys videos, because 1. it’s fun (at least to me) and 2. it’s as good a reason as any to explain some of the stuff he gets wrong in some more detail, and any attempt to explain math to people is time well spent in my opinion. So let’s start with his first video: 1. The Arithmetic Mean This is just a short video on the arithmetic mean – i.e. the “average”. 
This isn’t as cranky as his other stuff, but it already gives a fascinating glimpse into the way Gabriel thinks. Now, as I said, the arithmetic mean is just the average of a bunch of numbers. We all know how to compute it, we all know why it’s useful – we all remember computing or getting told the average grade in exams, for example. And there is absolutely no reason why I mention that particular example. Here’s what Gabriel’s video description says about it: The arithmetic mean is one of the most important concepts in mathematics. While just about anyone knows how to construct an arithmetic mean, almost no one understands it. Right… the average of a bunch of numbers is really hard to grasp. I remember struggling with it in elementary school as well… no, wait, I didn’t. Maybe that’s just because I didn’t realize how awfully complicated it in fact is, after all, almost no one understands it. But Gabriel does, of course. To compute the arithmetic mean of a bunch of numbers, we just add them all up and divide the sum by how many numbers we had. In mathspeak: Definition: The arithmetic mean \(\overline{(a_n)}\) of a finite sequence of real numbers \(a_1,…,a_n\) is given by \[\overline{(a_n)} := \frac{\sum_{i=1}^n}n.\] We’ve all done that for grades: Add up all the grades of all the students in an exam, divide the result by how many students there are and you get the average grade in that exam. Here, by contrast, is Gabriel’s “definition” (and yes, he means definition): An arithmetic mean or arithmetic average is that value which would represent all the elements of a set, if those elements are made equal through redistribution. The Arithmetic Mean (0:18) …now I don’t know about you, but… is that even a sentence? What does that mean? “That value which would represent all the elements of a set“? “If those are made equal…” …well, then the set only has one element, doesn’t it? (Sets have no multiplicity – either a number is in a set or it isn’t.) OK, at least then I can guess what he means by “represent”. But “through redistribution“? What does “redistribution” mean in this context? This is not a definition. This is at best a clumsy attempt at explaining a definition. But he actually calls this a definition, and he runs with it. So here’s a beautiful example of why definitions fucking matter. He goes on to explain, that you can compute the arithmetic mean by drawing squares. He demonstrates this with three sets of squares, the first one having one square, the second two, the third three. He moves one square from the last set to the first so that every set has two squares, thus “making them equal”, hence the arithmetic mean is two. Now at least one can understand what his so-called “definition” was supposed to mean, but the immediate problem now is: What if the total number of squares isn’t divisible by the number of sets you have? Then his “redistribution” attempt fails, so according to his definition there is no arithmetic mean in that case. But he also shows us how to compute it using “algebra“, by which he means arithmetic (pun intended – and yeah, he can’t even get that right) – i.e. summing up and dividing the result according to the definition I stated above. But that’s not what his definition says. See what I mean when I say this guy makes no sense? But yeah, he runs with it: A useful arithmetic mean is one where it makes sense to redistribute the values. Example: Three friends each need 2$ to buy lunch. They decide to pool their money because one of the friends may not have enough. 
If the total they have is 6$, then it’s evident there is enough money for all three to buy lunch. Redistribution is accomplished by sharing the money. A useless arithmetic mean is one where it makes no sense to redistribute the values. Example: The arithmetic mean of student grades in a given class is a senseless calculation because students cannot share their marks. Redistribution cannot be accomplished by sharing grades. The Arithmetic Mean (1:26)  …jupp. First, notice how no arithmetic mean appears in his first example. Anywhere. Something costs 2$, three friends pool their money, they need at least 6$. The conclusion I’m left to draw is, that a “useful arithmetic mean” is one which isn’t even used, despite the name. Quite counter-intuitive. However, the prime example for an average – namely the average grade in an exam, something everyone has seen hundreds of times in school – is, to him, a “useless arithmetic mean“, because students can’t share grades. How does that even make sense? And don’t think that’s just a term he’s introducing, and that he doesn’t mean the word “useless” in a literal sense. Listen to the derision in his voice when he talks about the “senseless computation“. Of course it makes sense to compute the average grade – it gives you a good baseline to compare your own grade to, a sense of how well you did in comparison to the others without needing to know everyone’s specific result (which are confidential, after all). It gives you a sense of how difficult the exam was, or how lenient it was graded. But no, that’s all meaningless because students can’t share grades. But also, why does this matter? Math is abstract, it doesn’t care how you apply it, what you apply it to and whether the result of that application still has any meaningful interpretation in the real world! Yeah, this is how Gabriel works in a nutshell: 1. He takes a mathematical concept with a proper definition which he either doesn’t know, like or understand (or any non-empty subset of the three), 2. he visualizes or interprets it in some vague way (“making things equal through redistribution“), 3. he insists on his ill-defined vague interpretation to be the actual definition (even though it’s hand-wavy, vague nonsense), 4. he labels everything outside of his vague interpretation as “meaningless” and therefore void and draws absurd conclusions from his “definition”, 5. he proclaims that he has found the ultimate real meaning of the mathematical concept and rails against stupid academia. It’s glorious in its arrogance and ignorance. (Next post on John Gabriel: Calculus 101 (Convergence and Derivatives))
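To put the divisibility objection from above in concrete terms: the ordinary definition gives an answer for any finite list of numbers, while the "redistribution into equal whole piles" picture only works when the total happens to divide evenly. A tiny Python sketch, with made-up grades:

```python
def arithmetic_mean(values):
    """The standard definition: sum the values and divide by how many there are."""
    return sum(values) / len(values)

grades = [1, 2, 2, 5]            # made-up exam grades
total, n = sum(grades), len(grades)
print(arithmetic_mean(grades))   # 2.5 -- perfectly well defined
# 'Redistribution' in Gabriel's sense needs equal whole piles, which fails here:
print(total % n == 0)            # False: 10 squares cannot be split into 4 equal piles
```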
General conditions for a quantum adiabatic evolution Daniel Comparat Daniel.C Laboratoire Aimé Cotton1, Univ Paris-Sud 11, Campus d’Orsay Bât. 505, 91405 Orsay, France 11Laboratoire Aimé Cotton is associated to Université Paris-Sud and belongs to Fédération de Recherche Lumière Matière (LUMAT). The smallness of the variation rate of the hamiltonian matrix elements compared to the (square of the) energy spectrum gap is usually believed to be the key parameter for a quantum adiabatic evolution. However it is only perturbatively valid for scaled timed hamiltonian and resonance processes as well as off resonance possible constructive Stückelberg interference effects violate this usual condition for general hamiltionian. More general adiabatic condition and exact bounds for adiabatic quantum evolution are derived and studied in the framework of a two-level system. The usual criterion is restored for real two level hamiltonian with small number of monotonicity changes of the hamiltonian matrix elements and its derivative. 03.65. Ca, 03.65. Ta, 03.65. Vf, 03.65. Xp Adiabaticity is at the border between dynamics and statics. It has been introduced by Boltzmann in classical mechanics and by Born and Fock in 1928 in Quantum Mechanics Nakamura (2002); Teufel (2003), extended to the infinite dimensional setting by Kato (1950), studied as a geometrical holonomy evolution by Berry (1984), finally extended to degenerate cases (without gap condition) and to open quantum system more recently Avron and Elgart (1998); Sarandy and Lidar (2005). The quantum adiabatic theorem is usually used to derive approximate solutions of the Schrödinger equation and is strongly related to the (semi-)classical limit of quantum mechanics Berry (1984) and to the Minimal work principle Allahverdyan and Nieuwenhuizen (2005) for the Hamiltonian . The principle is simple: if a quantum system is prepared in an eigenstate of a “slowly” varying Hamiltonian it remains (without taking into account of the phase evolution) close to the instantaneous eigenstate of this Hamiltonian as time goes on. The applications range from two-level systems (nuclear magnetic resonance, atomic laser transitions, Born-Oppenheimer molecular adiabatic coupling, collisional processes …) to quantum algorithms Farhi et al. (2001). “Usual” adiabatic conditions are (for all ): where the dot designs the time derivative and are the instantaneous eigenstates for the energy eigenvalue with 222We use the time derivative of and leading, for non degenerate case, to . Some confusion occurs recently Marzlin and Sanders (2004); Tong et al. (2005a); Cholascinski (2005); Duki et al. (2005); Tong et al. (2005b); Pati and Rajagopal (2004) because, this condition seems written for a general hamiltonian . However, it has been studied by many different techniques (see for instance Hagedorn and Joye (2005); Jansen et al. (2006)) but only for special types of hamiltonian such as time scaling one 333An important example is the interpolating hamiltonian . have also been considered with a monotonic function controlling locally the speed of the process. When the timing is not an issue is the simplest choice.. Furthermore, even for such a time scaled hamiltonian, condition (1) is not sufficient because it is only the leading order term Vértesi and Englman (2006); MacKenzie et al. (2006), in a time evolution perturbation point of view, and more accurate conditions are needed to prove adiabatic evolution Jansen et al. (2006). 
The goal of this article is to derive general quantum adiabatic conditions for general hamiltonian. We start our study on a two level system example in order to study some possible violation of the usual adiabatic conditions. Afterwords, considering a more general type of levels hamiltonians, we derive a general criterion for adiabaticity. Finally, the study of the interference during multiple passages allows us to precise the validity of the usual adiabatic condition. A quite general hamiltonian matrix, written in the Pauli Matrix () basis, leads to the a spin form : where is the Larmor frequency, is a rotating magnetic field with a polar angle , an azimuthal rotating angular frequency . Where the second form of the hamiltonian represents, in the rotating wave approximation (RWA), a two level system coupled to an external (laser with angular frequency for instance) field which is frequency detuned by from the resonance and with a real Rabi frequency . For future developments we also define with . One natural choice for is the “first order” choice annulling the whole diagonal terms. Let us treat the (Schwinger 1937) example, where all the parameters are real and time independent. The evolution operator in the adiabatic basis (where is the evolution operator in the diabatic basis) verifies and, with , is given by the matrix: The adiabaticity (negligible off-diagonal terms in ) evolution is given by the following condition which has to be compared with the “usual” adiabatic condition given by Eq. (1): notations will be generally defined latter. Looking at the and very small resonant case (), we see, in a simpler way than in Ref. Marzlin and Sanders (2004); Tong et al. (2005a) and contrary to what is sometimes claimed Tong et al. (2005b); Pati and Rajagopal (2004), that Eq. (3) is verified but not Eq. (2). This fundamental conclusion, based on a hamiltonian is still valid for the time scaling case . Indeed, the Schwinger hamiltonian can be of the type if is taken to be constant, for instance by looking at the evolution after one period depending on the parameter value. Indeed, To be more general let us now study a discrete, but possibly degenerate, hamiltonian with the state evolution (). The phase is real but not necessary equals to the first order choice : geometrical phase (which is the Berry Phase for cyclic evolution) plus dynamical phase neither contains the (Pancharatnam) phase . To study the adiabatic evolution we shall assume that (i.e. ). The evolution is adiabatic if or equivalently if Jansen et al. (2006). The Schrödinger’s equation leads for each state to: where . Using , and the norm inequality we find the first (very restrictive) valid adiabatic condition for the interaction time : where . This condition is optimal because it is reached (see ) by the Schwinger level system for (). It illustrates the quantum Zeno effect: during a time much smaller than the system evolution is frozen. In order to find more useful adiabatic conditions we integrate by part Eq. (4) using (for ) : It is now straightforward, with , to look back to the standard adiabatic theorem with the time scaling . The evolution equation for , is then and the limit is similar to . With , we have (for ) and the stationary phase theorem (saddle-point or steepest descent method) annuls, for , the integrals in Eq. (General conditions for a quantum adiabatic evolution) leading to valid quantum adiabatic condition: A comparison with Eq. 
(1) indicates, as also shown by the two-level model where , that a better understanding of the term is in fact needed to have a useful condition Jansen et al. (2006). We can now go back to the general case. verifies with . Using Eq. (General conditions for a quantum adiabatic evolution) and the choice , it can be bounded by The typewriter style, such as , indicates terms that can be annulled by using a better phase for , namely the "second-order" one . The three important parameters are: where is the energy spectrum gap. Another (better for large ) bound for is obtained using in Eq. (General conditions for a quantum adiabatic evolution) and the norm inequality: and a fixed-point study leads to Finally, one (not optimized) adiabatic condition is We define two useful real quantities: for the choice , and for the choice , where . When the hamiltonian is real in the canonical basis, the eigenstates and are real and so . If all , or , are monotonic in , the condition (8) becomes simpler: or , where indicates that it should be calculated using the choice. For , smallness and monotonicity of is equivalent to smallness and no more than one monotonicity change of . Thus, a final general, simple and useful adiabatic condition is (for monotonic ) It is even possible to refine the condition by dividing the interval into smaller intervals where all are monotonic. A perturbative point of view, neglecting the term, has been used to derive similar results Ye et al. (2005). The case is illustrative because it is the only one where a time-independent adiabatic condition exists: where is the number of monotonicity changes of in . This generalizes the Schwinger conditions Eq. (2). For a real hamiltonian the condition is , and becomes the usual adiabatic condition if is small, for instance if the matrix elements of and have a small number of monotonicity changes. This explains why the real dressed-state hamiltonian, , obtained from in the rotating frame (with the simple phase choice ) or simply by , has fortuitously been combined with the usual adiabatic theorem to describe several adiabatic evolutions such as the RAP (Rapid Adiabatic Passage), the SCRAP (frequency- or Stark-Chirped RAP) or the STIRAP (STImulated Raman Adiabatic Passage). However, when real oscillatory terms are present, the usual adiabatic condition is no longer sufficient to guarantee adiabatic evolution. As an example we use the cycling hamiltonian Milena Grifoni and Peter Hänggi (1998); Martinez (2005), with , where and are (positive, to simplify) constants. It is relevant in many areas of physics: magnetic resonance, atomic collisions, laser-atom interactions without the RWA, and even localization by exchanging the parameters and (hamiltonian with ). The weak-coupling and large-amplitude case is simple because the non-adiabatic transition probability (so-called single-passage or one-way transition) is given by one of the simplest of the several existing approximate formulas (Landau-Zener-Stückelberg, Rosen-Zener-Demkov, Nikitin, Zhu-Nakamura models, … Nakamura (2002); Nikitin (2006)), namely the Landau-Zener one: Kayanuma (1994). The double-passage transition probability , which depends on a relative (Stückelberg) phase of the wavefunction, can be times higher than , and the (even) multiple-passage probability can be times higher than . Here a small value leads to the adiabatic limit, and with we could have Kayanuma (1994).
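The multiple-passage interference just described can be watched directly by integrating a two-level problem with an oscillating bias. The sketch below is mine, not the paper's: the Hamiltonian H(t) = (A cos ωt / 2) σz + (Δ/2) σx, all parameter values, and the diagnostics printed (population outside the instantaneous ground state after each period, the single-passage Landau-Zener estimate, and the maximum of the "usual" adiabaticity parameter) are illustrative assumptions. Depending on the accumulated Stückelberg phase, the multi-passage transfer can sit far above or far below the single-passage value, which is the sense in which the usual instantaneous condition stops being informative once the driving is oscillatory rather than monotonic.

```python
# Illustrative sketch (not from the paper): a two-level system with an
# oscillating bias, H(t) = (A cos(w t)/2) sigma_z + (Delta/2) sigma_x, hbar = 1.
# There are two crossings per driving period, and the transition amplitudes of
# successive passages interfere (Stückelberg interference).  Parameters arbitrary.
import numpy as np

A, Delta, w = 40.0, 6.3, 0.5
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = lambda t: 0.5 * A * np.cos(w * t) * sz + 0.5 * Delta * sx
deriv = lambda t, psi: -1j * (H(t) @ psi)          # i d(psi)/dt = H(t) psi

def ground_state(t):
    return np.linalg.eigh(H(t))[1][:, 0].astype(complex)

def usual_parameter(t):
    # |<+|dH/dt|->| / gap^2 for this Hamiltonian, with eps(t) = A cos(w t)
    eps, deps = A * np.cos(w * t), -A * w * np.sin(w * t)
    return abs(Delta * deps) / (2 * (eps**2 + Delta**2) ** 1.5)

period = 2 * np.pi / w
steps = 10000
dt = period / steps
print("max of the usual adiabaticity parameter over a period:",
      round(max(usual_parameter(t) for t in np.linspace(0, period, steps)), 3))
print("single-passage Landau-Zener probability:",
      round(np.exp(-np.pi * Delta**2 / (2 * A * w)), 3))

psi, t = ground_state(0.0), 0.0
for n in range(1, 9):                              # evolve over 8 driving periods
    for _ in range(steps):                         # 4th-order Runge-Kutta
        k1 = deriv(t, psi)
        k2 = deriv(t + dt / 2, psi + dt / 2 * k1)
        k3 = deriv(t + dt / 2, psi + dt / 2 * k2)
        k4 = deriv(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    leak = 1 - abs(np.vdot(ground_state(t), psi)) ** 2
    print(f"after {n} period(s): population outside the instantaneous ground state = {leak:.3f}")
```

With these assumed numbers each individual passage is largely adiabatic, yet the period-by-period printout lets one see whether the repeated passages add up constructively or cancel; either outcome is invisible to the instantaneous condition alone.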
Interestingly enough, the reverse case, namely the diabatic limit (), can lead (for instance when the Bessel function is annulled) to the reverse phenomenon of adiabaticity created after multiple passages (), known as suppression of tunneling, coherent destruction of tunneling, dynamical localization or population trapping depending on the context Milena Grifoni and Peter Hänggi (1998); Kayanuma (1994). This two-level example illustrates why monotonicity is required to avoid constructive interferences transforming an adiabatic (resp. diabatic) single passage into a fully diabatic (resp. adiabatic) transition after multiple passages. The two-level system with several crossings is very similar to the case of a single crossing but with several levels, leading to a sum of dephased Landau-Dykhne-Davis-Pechukas formulas Joye et al. (1991); Giller (2004). Moreover, the transition probability in a multilevel system is the product of several Landau-Dykhne type terms, corresponding to several successive transitions between pairs of levels Wilkinson and Morgan (2000). However, several consecutive constructive interferences are exceptional, and the generic, most common case concerns a system "complex enough", with small total probability when the single-crossing probability is small Akulin (2006). In conclusion, we have derived exact bounds for the evolution, Eqs. (5), (General conditions for a quantum adiabatic evolution), as well as general adiabaticity criteria, Eqs. (9), (10). The key parameters for adiabaticity are the smallness and the small number of monotonicity changes of , as well as a short evolution time (). For a real hamiltonian the adiabatic (Pancharatnam) phase type is the spectrum frequency gap, and the usual adiabatic condition is restored if the matrix elements of and have a small number of monotonicity changes, in the two-level () case. The results presented here, demonstrated for the discrete but possibly degenerate case, might be useful for adiabatic quantum evolution and adiabatic quantum computation studies. Extensions to the infinite-dimensional or non-hermitian cases are some of the next steps needed to derive more universal quantum adiabatic conditions. The author acknowledges Andréa Fioretti for helpful discussions. This work has been realized in the framework of the "Institut francilien de recherche sur les atomes froids" (IFRAF).

• Nakamura (2002) H. Nakamura, Nonadiabatic Transition: Concepts, Basic Theories and Applications (World Scientific Pub Co Inc, 2002).
• Teufel (2003) S. Teufel, Adiabatic Perturbation Theory in Quantum Dynamics, Lecture Notes in Mathematics 1821 (Springer-Verlag, Berlin, Heidelberg, New York, 2003).
• Avron and Elgart (1998) J. E. Avron and A. Elgart, Phys. Rev. A 58, 4300 (1998).
• Sarandy and Lidar (2005) M. S. Sarandy and D. A. Lidar, Phys. Rev. A 71, 012331 (2005).
• Berry (1984) M. V. Berry, Journal of Physics A: Mathematical and General 17, 1225 (1984).
• Allahverdyan and Nieuwenhuizen (2005) A. E. Allahverdyan and T. M. Nieuwenhuizen, Phys. Rev. E 71, 046107 (2005).
• Farhi et al. (2001) E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science 292, 472 (2001).
• Marzlin and Sanders (2004) K.-P. Marzlin and B. C. Sanders, Physical Review Letters 93, 160408 (2004).
• Tong et al. (2005a) D. M. Tong, K. Singh, L. C. Kwek, and C. H. Oh, Physical Review Letters 95, 110407 (2005a).
• Cholascinski (2005) M. Cholascinski, Phys. Rev. A 71, 063409 (2005).
• Duki et al. (2005) S. Duki, H. Mathur, and O. Narayan, ArXiv Quantum Physics e-prints (2005), eprint arXiv:quant-ph/0510131.
• Tong et al. (2005b) D. M. Tong, K. Singh, L. C. Kwek, X. J. Fan, and C. H. Oh, Physics Letters A 339, 288 (2005b).
• Pati and Rajagopal (2004) A. K. Pati and A. K. Rajagopal, ArXiv Quantum Physics e-prints (2004), eprint arXiv:quant-ph/0405129.
• Hagedorn and Joye (2005) G. A. Hagedorn and A. Joye, ArXiv Mathematical Physics e-prints (2005), eprint arXiv:math-ph/0511067.
• Jansen et al. (2006) S. Jansen, M.-B. Ruskai, and R. Seiler, ArXiv Quantum Physics e-prints (2006), eprint arXiv:quant-ph/0603175.
• Vértesi and Englman (2006) T. Vértesi and R. Englman, Physics Letters A 353, 11 (2006).
• MacKenzie et al. (2006) R. MacKenzie, E. Marcotte, and H. Paquette, Phys. Rev. A 73, 042104 (2006).
• Ye et al. (2005) M.-Y. Ye, X.-F. Zhou, Y.-S. Zhang, and G.-C. Guo, ArXiv Quantum Physics e-prints (2005), eprint arXiv:quant-ph/0509083.
• Milena Grifoni and Peter Hänggi (1998) M. Grifoni and P. Hänggi, Physics Reports 304, 229 (1998).
• Martinez (2005) D. F. Martinez, Journal of Physics A: Mathematical and General 38, 9979 (2005).
• Nikitin (2006) E. E. Nikitin, Handbook of Atomic, Molecular, and Optical Physics (Springer, 2006), chap. 49: Adiabatic and Diabatic Collision Processes at Low Energies.
• Kayanuma (1994) Y. Kayanuma, Phys. Rev. A 50, 843 (1994).
• Joye et al. (1991) A. Joye, G. Mileti, and C.-E. Pfister, Phys. Rev. A 44, 4280 (1991).
• Giller (2004) S. Giller, Acta Physica Polonica B 35, 551 (2004).
• Wilkinson and Morgan (2000) M. Wilkinson and M. A. Morgan, Phys. Rev. A 61, 062104 (2000).
• Akulin (2006) V. M. Akulin, Coherent Dynamics of Complex Quantum Systems (Springer, 2006).
Is bonding specifically for and between electrons? Why can't two atoms share muons, which are different particles with the same charge and spin but different mass? Why aren't there muon-electron bonds? Why is the octet (or eighteen-valence-electron) rule only for electrons, and not for all particles with charge and spin similar to the electron's?

• I wrote 32 valence electron rule for f-block elements but then I remembered f orbitals don't participate in bonding. :) – Mrs Chemistry Dec 16 '19 at 0:22
• for lanthanides at least. – Mrs Chemistry Dec 16 '19 at 0:22
• Remember that the Pauli exclusion principle applies to identical fermions. A muon is not identical to an electron, so if you introduce a muon into an atom it will decay into the lowest possible orbital. This orbital is 1s-like of course, so in principle you could make a molecule out of protons and muons; however muons decay very quickly and are captured by the nucleus on a similar time scale. – PJ R Dec 16 '19 at 1:29
• Also, wouldn't the fact that muons "orbit" much closer to the nucleus mean that it would be very hard to get two nuclei close enough together to have enough interaction of both nuclei with the muon that there is a net benefit? I think the cost of bringing the nuclei together would likely be greater than the benefit of bonding. – Andrew Dec 16 '19 at 13:23
• I've tried adjusting the question title and content, let me know if it's not what you meant. – Nicolau Saker Neto Dec 16 '19 at 23:48

You are correct; electrons and muons are fermions with different quantum numbers (specifically, they differ in the electron number and the muon number), so the Pauli exclusion principle does not apply between them (though it of course applies among electrons and muons separately). A somewhat similar case happens with protons and neutrons (also fermions) in the nuclear shell model, which attempts to describe nuclei as containing proton and neutron shells, analogous to electron shells. The proton and neutron shells are filled independently. Because (negative) muons are the second-generation Standard Model equivalent of the electron, whatever electrons do, muons can copy - since there are "electronic" orbitals, there are also "muonic" orbitals. However, there are two main differences. First, for electron orbitals, it's a good approximation to assume the nucleus is stationary with respect to the electrons (the Born-Oppenheimer approximation), due to the great difference in their masses (the lightest nucleus, a proton, has approximately 1836 times the mass of an electron). Because the muon is approximately 207 times heavier than an electron, a muon has an appreciable mass relative to a proton (approximately one-ninth), and therefore the BO approximation is considerably worse. This is a scenario in-between a regular atom and positronium, where an electron "orbits" a positron "nucleus", which has the same mass. See Phys. Rep. 1982, 86 (4), 169-216 for a quantitative analysis of BO approximation errors in a simple muonic molecule. The second and more striking difference is that, again due to the muons being 207 times heavier, muonic orbitals are accordingly around 207 times smaller, meaning a muonic orbital has a typical radius of around 0.5-1 pm compared to 100-200 pm for an electronic orbital.
The energies are also 207 times larger in magnitude - the 1s electronic orbital in hydrogen has an energy of -13.6 eV, whereas for "muonic hydrogen", the 1s muonic orbital has an energy of -2815 eV. These facts can be determined simply by solving the Schrödinger equation, except inputting a mass 207 times greater for the negatively-charged particle. As you can see, this leads to a severe mismatch between the realms of electronics and muonics. There is no kind of shared electron-muon bond - at best, there would be a separate electron bond and a muon bond. However, because the energetics involved in the muon bond are so much higher, the system in the ground state is essentially equivalent to just having the muon bond, plus a small correction due to a grossly warped electron bond. The muon bond is formed normally, and the electron bond has to deal with whatever geometry is forced by the muons, however crazy. As an example, imagine a neutral atom of regular hydrogen and a neutral atom of muonic hydrogen interacting. The muonic hydrogen atom basically pierces into the depth of the electron cloud of the regular hydrogen atom (the electron can hardly repel the muon efficiently, since it is so tightly bound to its nucleus), until both nuclei get quite close. Then the muon latches onto the other proton. The system is stabilised when the two hydrogen nuclei are approximately 0.5 pm apart, and the lone muon forms half of a muonic sigma bond. From the "point of view" of the muon, the system looks like a singly-ionised muonic dihydrogen molecule ($\ce{\mu-H_2^+}$), with slight corrections due to the electron buzzing around, most of the time far away. However, from the "point of view" of the electron, it sees a bizarrely elongated nucleus (from the typical ~1 fm sphere to a ~500 fm spindle) with a total charge of +1e (since the muon almost perfectly screens out a full positive charge). This spindly nucleus is still quite small relative to the electron cloud, and so the electron likely behaves similarly to a normal isolated hydrogen atom, with some corrections due to the non-spherical distribution of charge at its "nucleus" (the two separate protons bound by a muon). The electron still provides a slight amount of bonding between the protons, but it is much less than normal due to the odd geometry forced by the muon. Muonic chemistry in its full glory would be a fascinating (and extremely dangerous!) copy of the electronic chemistry we know, but the two would operate almost completely independently. The muon is actually one of the most stable subatomic particles, with a lifetime of 2.2 µs. That sounds like almost nothing, but it's many orders of magnitude more than what is necessary to observe "chemistry". Unfortunately, it's just too difficult to produce for how ephemeral it is...
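To put rough numbers on the scaling used throughout the answer above, here is a small back-of-envelope script (mine, not part of the original answer). The constants are rounded textbook values, and the simple Bohr-model scaling is only meant to reproduce the orders of magnitude quoted (the sub-picometre radius and the few-keV binding energy), not to stand in for a real muonic-molecule calculation.

```python
# Back-of-envelope check (not from the original answer) of the quoted scaling,
# using the Bohr-model relations a ~ 1/m and E ~ m.  Constants are rounded values.
m_mu = 206.77          # muon mass in units of the electron mass
m_p = 1836.15          # proton mass in units of the electron mass

a_H_pm = 52.918        # hydrogen Bohr radius, picometres
E_H_eV = -13.606       # hydrogen 1s energy, electronvolts

# naive scaling: simply replace the electron mass by the muon mass
print(f"naive:        a = {a_H_pm / m_mu:.3f} pm,  E(1s) = {E_H_eV * m_mu:.0f} eV")

# reduced-mass version: the correction is ~10% for the muon but ~0.05% for the electron
mu_muon = m_mu * m_p / (m_mu + m_p)
mu_elec = 1.0 * m_p / (1.0 + m_p)
ratio = mu_muon / mu_elec
print(f"reduced mass: a = {a_H_pm / ratio:.3f} pm,  E(1s) = {E_H_eV * ratio:.0f} eV")
```

The naive line reproduces the ~207x figures quoted in the answer; the reduced-mass line shows why the muonic case deviates from that simple scaling much more than ordinary hydrogen does, in line with the remark about the Born-Oppenheimer approximation being considerably worse for muons.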
Molecular Science Vol. 10 (2016) No. 1, p. A0085
Award Accounts

Vibrational spectroscopy is a viable tool to reveal the mechanism of various molecular systems at atomic and molecular resolution; yet the interpretation of the observed spectrum is often non-trivial and requires theoretical assistance. Although it is rather common to calculate the vibrational spectrum based on the harmonic approximation, anharmonicity plays a crucial role, in particular for the OH and NH stretching vibrations that lie in the high-frequency region. In this article, recent advances in vibrational structure theory are reviewed regarding: (1) the generation of anharmonic potential energy surfaces by electronic structure calculations, (2) an efficient solver of the vibrational Schrödinger equation by vibrational quasi-degenerate perturbation theory based on variationally optimized coordinates, and (3) a weighted-average approach to simulate the vibrational spectrum of condensed-phase systems.

Copyright © 2016 Japan Society for Molecular Science
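As a minimal numeric illustration of why anharmonicity matters for a high-frequency stretch (my own example, not taken from the article): for a Morse-like oscillator with term values E(v) = we(v + 1/2) - wexe(v + 1/2)^2, the observed fundamental is red-shifted from the harmonic frequency by 2 wexe. The constants below are rounded, OH-stretch-like values chosen purely for illustration.

```python
# Morse-oscillator term values, E(v) = we*(v + 1/2) - wexe*(v + 1/2)^2, in cm^-1.
# The constants are illustrative, roughly OH-stretch-like; not taken from the article.
we, wexe = 3740.0, 85.0            # harmonic frequency and anharmonicity constant

def E(v):
    return we * (v + 0.5) - wexe * (v + 0.5) ** 2

fundamental = E(1) - E(0)          # = we - 2*wexe
overtone = E(2) - E(0)             # = 2*we - 6*wexe
print(f"harmonic estimate of the fundamental: {we:.0f} cm^-1")
print(f"anharmonic (Morse) fundamental:       {fundamental:.0f} cm^-1")
print(f"first overtone:                       {overtone:.0f} cm^-1 "
      f"(compare 2 x fundamental = {2 * fundamental:.0f} cm^-1)")
```

The roughly 170 cm^-1 shift of the fundamental, and the further compression of the overtone spacing, are the kind of anharmonic effects that a purely harmonic calculation misses for OH and NH stretches.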
Friday, June 16, 2017 Co-hygiene and quantum gravity [l'Universo] è scritto in lingua matematica ([The Universe] is written in the language of mathematics) — Galileo Galilei, Il Saggiatore (The Assayer), 1623. Here's another installment in my ongoing exploration of exotic ways to structure a theory of basic physics.  In our last exciting episode, I backtraced a baffling structural similarity between term-rewriting calculi and basic physics to a term-rewriting property I dubbed co-hygiene.  This time, I'll consider what this particular vein of theory would imply about the big-picture structure of a theory of physics.  For starters, I'll suggest it would imply, if fruitful, that quantum gravity is likely to be ultimately unfruitful and, moreover, quantum mechanics ought to be less foundational than it has been taken to be.  The post continues on from there much further than, candidly, I had expected it to; by the end of this installment my immediate focus will be distinctly shifting toward relativity. To be perfectly clear:  I am not suggesting anyone should stop pursuing quantum gravity, nor anything else for that matter.  I want to expand the range of theories explored, not contract it.  I broadly diagnose basic physics as having fallen into a fundamental rut of thinking, that is, assuming something deeply structural about the subject that ought not to be assumed; and since my indirect evidence for this diagnosis doesn't tell me what that deep structural assumption is, I want to devise a range of mind-bendingly different ways to structure theories of physics, to reduce the likelihood that any structural choice would be made through mere failure to imagine an alternative. The structural similarity I've been pursuing analogizes between, on one side, the contrast of pure function-application with side-effect-ful operations in term-rewriting calculi; and on the other side, the contrast of gravity with the other fundamental forces in physics.  Gravity corresponds to pure function-application, and the other fundamental forces correspond to side-effects.  In the earlier co-hygiene post I considered what this analogy might imply about nondeterminism in physics, and I'd thought my next post in the series would be about whether or not it's even mathematically possible to derive the quantum variety of nondeterminism from the sort of physical structure indicated.  Just lately, though, I've realized there may be more to draw from the analogy by considering first what it implies about non-locality, folding in nondeterminism later.  Starting with the observation that if quantum non-locality ("spooky action at a distance") is part of the analog to side-effects, then gravity should be outside the entanglement framework, implying both that quantum gravity would be a non-starter, and that quantum mechanics, which is routinely interpreted to act directly from the foundation of reality by shaping the spectrum of alternative versions of the entire universe, would have to be happening at a less fundamental level than the one where gravity differs from the other forces. On my way to new material here, I'll start with material mostly revisited from the earlier post, where it was mixed in with a great deal of other material; here it will be more concentrated, with a different emphasis and perhaps some extra elements leading to additional inferences.  As for the earlier material that isn't revisited here — I'm very glad it's there.  
This is, deliberately, paradigm-bending stuff, where different parts don't belong to the same conceptual framework and can't easily be held in the mind all at once; so if I hadn't written down all that intermediate thinking at the time, with its nuances and tangents, I don't think I could recapture it all later.  I'll continue here my policy of capturing the journey, with its intermediate thoughts and their nuances and tangents. Until I started describing λ-calculus here in earnest, it hadn't registered on me that it would be a major section of the post.  Turns out, though, my perception of λ-calculus has been profoundly transformed by the infusion of perspective from physics; so I found myself going back to revisit basic principles that I would have skipped lightly over twenty years ago, and perhaps even two years ago.  It remains to be seen whether developments later in this post will sufficiently alter my perspective to provoke yet another recasting of λ-calculus in some future post. Side-effect-ful variables Quantum scope Geometry and network Cosmic structure There were three main notions of computability in the 1930s that were proved equi-powerful by the Church-Turing thesis:  general recursive functions, λ-calculus, and Turing machines (due respectively to Jacques Herbrand and Kurt Gödel, to Alonzo Church, and to Alan Turing).  General recursive functions are broadly equational in style, λ-calculus is stylistically more applicative; both are purely functional.  Turing machines, on the other hand, are explicitly imperative.  Gödel apparently lacked confidence in the purely functional approaches as notions of mechanical calculability, though Church was more confident, until the purely functional approaches were proven equivalent to Turing machines; which to me makes sense as a matter of concreteness.  (There's some discussion of the history in a paper by Solomon Feferman; pdf.) This mismatch between abstract elegance and concrete straightforwardness was an early obstacle, in the 1960s, to applying λ-calculus to programming-language semantics.  Gordon Plotkin found a schematic solution strategy for the mismatch in his 1975 paper "Call-by-name, call-by-value and the λ-calculus" (pdf); one sets up two formal systems, one a calculus with abstract elegance akin to λ-calculus, the other an operational semantics with concrete clarity akin to Turing machines, then proves well-behavedness theorems for the calculus and correspondence theorems between the calculus and operational semantics.  The well-behavedness of the calculus allows us to reason conveniently about program behavior, while the concreteness of the operational semantics allows us to be certain we are really reasoning about what we intend to.  For the whole arrangement to work, we need to find a calculus that is fully well-behaved while matching the behavior of the operational semantics we want so that the correspondence theorems can be established. Plotkin's 1975 paper modified λ-calculus to match the behavior of eager argument evaluation; he devised a call-by-value λv-calculus, with all the requisite theorems.  The behavior was, however, still purely functional, i.e., without side-effects.  Traditional mathematics doesn't incorporate side-effects.  There was (if you think about it) no need for traditional mathematics to explicitly incorporate side-effects, because the practice of traditional mathematics was already awash in side-effects.  
Mutable state:  mathematicians wrote down what they were doing; and they changed their own mental state and each others'.  Non-local control-flow (aka "goto"s):  mathematicians made intuitive leaps, and the measure of proof was understandability by other sapient mathematicians rather than conformance to some purely hierarchical ordering.  The formulae themselves didn't contain side-effects because they didn't have to.  Computer programs, though, have to explicitly encompass all these contextual factors that the mathematician implicitly provided to traditional mathematics.  Programs are usually side-effect-ful. In the 1980s Matthias Felleisen devised λ-like calculi to capture side-effect-ful behavior.  At the time, though, he didn't quite manage the entire suite of theorems that Plotkin's paradigm had called for.  Somewhere, something had to be compromised.  In the first published form of Felleisen's calculi, he slightly weakened the well-behavedness theorems for the calculus.  In another published variant he achieved full elegance for the calculus but slightly weakened the correspondence theorems between the calculus and the operational semantics.  In yet another published variant he slightly modified the behavior — in operational semantics as well as calculus — to something he was able to reconcile without compromising the strength of the various theorems.  This, then, is where I came into the picture:  given Felleisen's solution and a fresh perspective (each generation knows a little less about what can't be done than the generation before), I thought I saw a way to capture the unmodified side-effect-ful behavior without weakening any of the theorems.  Eventually I seized an opportunity to explore the insight, when I was writing my dissertation on a nearby topic.  To explain where my approach fits in, I need to go back and pick up another thread:  the treatment of variables in λ-calculus. Alonzo Church also apparently seized an opportunity to explore an insight when doing research on a nearby topic.  The main line of his research was to see if one could banish the paradoxes of classical logic by developing a formal logic that weakens reductio ad absurdum — instead of eliminating the law of the excluded middle, which was a favored approach to the problem.  But when he published the logic, in 1932, he mentioned reductio ad absurdum in the first paragraph and then spent the next several paragraphs ranting about the evils of unbound variables.  One gathers he wanted everything to be perfectly clear, and unbound variables offended his sense of philosophical precision.  His logic had just one possible semantics for a variable, namely, a parameter to be supplied to a function; he avoided the need for any alternative notions of universally or existentially quantified variables, by the (imho quite lovely) device of using higher-order functions for quantification.  That is (since I've brought it up), existential quantifier Σ applied to function F would produce a proposition ΣF meaning that there is some true proposition FX, and universal quantifier Π applied to F, proposition ΠF meaning that every proposition FX is true.  In essence, he showed that these quantifiers are orthogonal to variable-binding; leaving him with only a single variable-binding device, which, for some reason lost to history, he called "λ". λ-calculus is formally a term-rewriting calculus; a set of terms together with a set of rules for rewriting a term to produce another term.  
The two basic well-behavedness properties that a term-rewriting calculus generally ought to have are compatibility and Church-Rosser-ness. Compatibility says that if a term can be rewritten when it's a standalone term, it can also be rewritten when it's a subterm of a larger term.  Church-Rosser-ness says that if a term can be rewritten in two different ways, then the difference between the two results can always be eliminated by some further rewriting.  Church-Rosser-ness is another way of saying that rewriting can be thought of as a directed process toward an answer, which is characteristic of calculi.  Philosophically, one might be tempted to ask why the various paths of rewriting ought to reconverge later, but this follows from thinking of the terms as the underlying reality.  If the terms merely describe the reality, and the rewriting lets us reason about its development, then the term syntax is just a way for us to separately describe different parts of the reality, and compatibility and Church-Rosser-ness are just statements about our ability (via this system) to reason separately about different aspects of the development at different parts of the reality without distorting our eventual conclusion about where the whole development is going.  From that perspective, Church-Rosser-ness is about separability, and convergence is just the form in which the separability appears in the calculus. The syntax of λ-calculus — which particularly clearly illustrates these principles — is T   ::=   x | (TT) | (λx.T)  . That is, a term is either a variable; or a combination, specifying that a function is applied to an operand; or a λ-expression, defining a function of one parameter.  The T in (λx.T) is the body of the function, x its parameter, and free occurrences of x in T are bound by this λ.  An occurrence of x in T is free if it doesn't occur inside a smaller context (λx.[ ]) within T.  This connection between a λ and the variable instances it binds is structural.  Here, for example, is a term involving variables x, y, and z, annotated with pointers to a particular binding λ and its variable instances: ((λx.((λy.((λx.(xz))(xy)))(xz)))(xy))  .   ^^                 ^     ^ The x instance in the trailing (xy) is not bound by this λ since it is outside the binding expression.  The x instance in the innermost (xz) is not bound since it is captured by another λ inside the body of the one we're considering.  I suggest that the three marked elements — binder and two bound instances — should be thought of together as the syntactic representation of a deeper, distributed entity that connects distant elements of the term. There is just one rewriting rule — one of the fascinations of this calculus, that just one rule suffices for all computation — called the β-rule: ((λx.T1)T2)   →   T1[x ← T2]   . The left-hand side of this rule is the redex pattern (redex short for reducible expression); it specifies a local pattern in the syntax tree of the term.  Here the redex pattern is that some particular parent node in the syntax tree is a combination whose left-hand child is a λ-expression.  Remember, this rewriting relation is compatible, so the parent node doesn't have to be the root of the entire tree.  It's important that this local pattern in the syntax tree includes a variable binder λ, thus engaging not only a local region of the syntax tree, but also a specific distributed structure in the network of non-local connections across the tree.  
Following my earlier post, I'll call the syntax tree the "geometry" of the term, and the totality of the non-local connections its "network topology". The right-hand side of the rule specifies replacement by substituting the operand T2 for the parameter x everywhere it occurs free in the body T1; but there's a catch.  One might, naively, imagine that this would be recursively defined as x[x ← T]   =   T x1[x2 ← T]   =   x1   if x1 isn't x2 (T1 T2)[x ← T]   =   (T1[x ← T] T2[x ← T]) (λx.T1)[x ← T2]   =   (λx.T1) (λx1.T1)[x2 ← T2]   =   (λx1.T1[x2 ← T2])   if x1 isn't x2. This definition just descends the syntax tree substituting for the variable, and stops if it hits a λ that binds the same variable; very straightforward, and only a little tedious.  Except that it doesn't work.  Most of it does; but there's a subtle error in the rule for descending through a λ that binds a different variable, The trouble is, what if T1 contains a free occurrence of x2 and, at the same time, T2 contains a free instance of x1?  Then, before the substitution, that free instance of x1 was part of some larger distributed structure; that is, it was bound by some λ further up in the syntax tree; but after the substitution, following this naive definition of substitution, a copy of T2 is embedded within T1 with an instance of x1 that has been cut off from the larger distributed structure and instead bound by the inner λx1, essentially altering the sense of syntactic template T2.  The inner λx1 is then said to capture the free x1 in T2, and the resulting loss of integrity of the meaning of T2 is called bad hygiene (or, a hygiene violation).  For example, ((λy.(λx.y))x)   ⇒β   (λx.y)[y ← x] but under the naive definition of substitution, this would be (λx.x), because of the coincidence that the x we're substituting for y happens to have the same name as the bound variable of this inner λ.  If the inner variable had been named anything else (other than y) there would have been no problem.  The "right" answer here is a term of the form (λz.x), where any variable name could be used instead of z as long as it isn't "x" or "y".  The standard solution is to introduce a rule for renaming bound variables (called α-renaming), and restrict the substitution rule to require that hygiene be arranged beforehand.  That is, (λx1.T)   →   (λx2.T[x1 ← x2])   where x2 doesn't occur free in T (λx1.T1)[x2 ← T2]   =   (λx1.T1[x2 ← T2])   if x1 isn't x2 and doesn't occur free in T2. Here again, this may be puzzling if one thinks of the syntax as the underlying reality.  If the distributed structures of the network topology are the reality, which the syntax merely describes, then α-renaming is merely an artifact of the means of description; indeed, the variable-names themselves are merely an artifact of the means of description. Side-effect-ful variables Suppose we want to capture classical side-effect-ful behavior, unmodified, without weakening any of the theorems of Plotkin's paradigm.  Side-effects are by nature distributed across the term, and would therefore seem to belong naturally to its network topology.  
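Before following the side-effect thread further, here is a toy rendering in Python (my own sketch, not the author's code; the term representation and naming scheme are arbitrary choices) of the hygienic substitution just described: a capture-avoiding substitution and a single root β-step, with α-renaming applied exactly where the naive recursion would capture a free variable. The fresh names are assumed not to occur in the input term.

```python
# Minimal sketch of hygienic (capture-avoiding) substitution for lambda-terms.
# Terms are nested tuples: ("var", name), ("app", fn, arg), ("lam", name, body).
import itertools

fresh = (f"v{i}" for i in itertools.count())    # supply of names assumed unused in inputs

def free_vars(t):
    kind = t[0]
    if kind == "var":
        return {t[1]}
    if kind == "app":
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}             # "lam": bound name is not free

def subst(t, x, s):
    """t[x <- s], alpha-renaming a binder when it would capture a free variable of s."""
    kind = t[0]
    if kind == "var":
        return s if t[1] == x else t
    if kind == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                                  # binder shadows x: substitution stops here
        return t
    if y in free_vars(s) and x in free_vars(body):
        z = next(fresh)                         # hygiene: rename the binder first
        y, body = z, subst(body, y, ("var", z))
    return ("lam", y, subst(body, x, s))

def beta(t):
    """One beta step at the root: ((lam x. T1) T2) -> T1[x <- T2]."""
    assert t[0] == "app" and t[1][0] == "lam"
    return subst(t[1][2], t[1][1], t[2])

# the example from the text: ((lam y. (lam x. y)) x) reduces to (lam z. x),
# not to (lam x. x) -- the inner binder must be renamed
term = ("app", ("lam", "y", ("lam", "x", ("var", "y"))), ("var", "x"))
print(beta(term))    # ('lam', 'v0', ('var', 'x'))
```

In this sketch the α-renaming is performed lazily, only when a capture would actually occur, which is one common way of packaging the hygiene condition quoted above.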
In Felleisen's basic calculus, retaining the classical behavior and requiring the full correspondence theorems, side-effect-ful operations create syntactic markers that then "bubble up" through the syntax tree till they reach the top of the term, from which the global consequence of the side-effect is enacted by a whole-term-rewriting rule — thus violating compatibility, since the culminating rule is by nature applied to the whole term rather than to a subterm.  This strategy seems, in retrospect, to be somewhat limited by an (understandable) inclination to conform to the style of variable handling in λ-calculus, whose sole binding device is tied to function application at a specific location in the geometry.  Alternatively (as I seized the opportunity to explore in my dissertation), one can avoid the non-compatible whole-term rules by making the syntactic marker, which bubbles up through the term, a variable-binder.  These side-effect-ful bindings are no longer strongly tied to a particular location in the geometry; they float, potentially to the top of the term, or may linger further down in the tree if the side-effect happens to only affect a limited region of the geometry.  But the full classical behavior (in the cases Felleisen addressed) is captured, and Plotkin's entire suite of theorems are supported. The calculus in which I implemented this side-effect strategy (along with some other things, that were the actual point of the dissertation but don't apparently matter here) is called vau-calculus. Recall that the β-rule of λ-calculus applies to a redex pattern at a specific location in the geometry, and requires a binder to occur there so that it can also tie in to a specific element of the network topology.  The same is true of the side-effect-ful rules of the calculus I constructed:  a redex pattern occurs at a specific location in the geometry with a local tie-in to the network topology.  There may then be a substitutive operation on the right-hand side of the rule, which uses the associated element of the network topology to propagate side-effect-ful consequences back down the syntax tree to the entire encompassed subterm.  There is a qualitative difference, though, between the traditional substitution of the β-rule and the substitutions of the side-effect-ful operations.  A traditional substitution T1[x ← T2] may attach new T2 subtrees at certain leaves of the T1 syntax tree (free instances of x in T1), but does not disturb any of the pre-existing tree structure of T1.  Consequently, the only effect of the β-rule on the pre-existing geometry is the rearrangement it does within the redex pattern.  This is symmetric to the hygiene property, which assures (by active intervention if necessary, via α-renaming) that the only effect of the β-rule on the pre-existing network topology is what it does to the variable element whose binding is within the redex pattern.  I've therefore called the geometry non-disturbance property co-hygiene.  As long as β-substitution is the only variable substitution used, co-hygiene is an easily overlooked property of the β-rule since, unlike hygiene, it does not require any active intervention to maintain. The substitutions used by the side-effect-ful rewriting operations go to the same α-renaming lengths as the β-rule to assure hygiene.  However, the side-effect-ful substitutions are non-co-hygienic.  This might, arguably, be used as a technical definition of side-effects, which cause distributed changes to the pre-existing geometry of the term. 
Quantum scope Because co-hygiene is about not perturbing pre-existing geometry, it seems reasonable that co-hygienic rewriting operations should be more in harmony with the geometry than non-co-hygienic rewriting operations.  Thus, β-rewriting should be more in harmony with the geometry of the term than the side-effect-ful operations; which, subjectively, does appear to be the case.  (The property that first drew my attention to all this was that α-renaming, which is geometrically neutral, is a special case of β-substitution, whereas the side-effect-ful substitutions are structurally disparate from α-renaming.) And gravity is more in harmony with the geometry of spacetime than are the other fundamental forces; witness general relativity. Hence my speculation, by analogy, that one might usefully structure a theory of basic physics such that gravity is co-hygienic while the other fundamental forces are non-co-hygienic. One implication of this line of speculation (as I noted in the earlier post) would be fruitlessness of efforts to unify the other fundamental forces with gravity by integrating them into the geometry of spacetime.  If the other forces are non-co-hygienic, their non-affinity with geometry is structural, and trying to treat them in a more gravity-like way would be like trying to treat side-effect-ful behavior as structurally akin to function-application in λ-calculus — which I have long reckoned was the structural miscue that prevented Felleisen's calculus from supporting the full set of well-behavedness theorems. On further consideration, though, something more may be suggested; even as the other forces might not integrate into the geometry of spacetime, gravity might not integrate into the infrastructure of quantum mechanics.  All this has to do with the network topology, a non-local infrastructure that exists even in pure λ-calculus, but which in the side-effect-ful vau-calculus achieves what one might be tempted to call "spooky action at a distance".  Suppose that quantum entanglement is part of this non-co-hygienic aspect of the theory.  (Perhaps quantum entanglement would be the whole of the non-co-hygienic aspect, or, as I discussed in the earlier post, perhaps there would be other, non-quantum non-locality with interesting consequences at cosmological scale; then again, one might wonder if quantum entanglement would itself have consequences at cosmological scale that we have failed to anticipate because the math is beyond us.)  It would follow that gravity would not exhibit quantum entanglement.  On one hand, this would imply that quantum gravity should not work well as a natural unification strategy.  On the other hand, to make this approach work, something rather drastic must happen to the underpinnings of quantum mechanics, both philosophical and technical. We understand quantum mechanics as describing the shape of a spectrum of different possible realities; from a technical perspective that is what quantum mechanics describes, even if one doesn't accept it as a philosophical interpretation (and many do accept that interpretation, if only on grounds of Occam's Razor that there's no reason to suppose philosophically some other foundation than is supported technically).  But, shaped spectra of alternative versions of the entire universe seems reminiscent of whole-term rewriting in Felleisen's calculus — which was, notably, a consequence of a structural design choice in the calculus that actually weakened the internal symmetry of the system.  
The alternative strategy of vau-calculus both had a more uniform infrastructure and avoided the non-compatible whole-term rewriting rules.  An analogous theory of basic physics ought to account for quantum entanglement without requiring wholesale branching of alternative universes.  Put another way, if gravity isn't included in quantum entanglement, and therefore has to diverge from the other forces at a level more basic than the level where quantum entanglement arises, then the level at which quantum entanglement arises cannot be the most basic level.

Just because quantum structure would not be at the deepest level of physics, does not at all suggest that what lies beneath it must be remotely classical.  Quantum mechanics is mathematically a sort of lens that distorts whatever classical system is passed through it; taking the Schrödinger equation as demonstrative,

iℏ ∂Ψ/∂t  =  Ĥ Ψ ,

the classical system is contained in the Hamiltonian function Ĥ, which is plugged into the equation to produce a suitable spectrum of alternatives.  Hence my description of the quantum equation itself as basic.  But, following the vau-calculus analogy, it seems some sort of internal non-locality ought to be basic, as it follows from the existence of the network topology; looking at vau-calculus, even the β-rule fully engages the network topology, though co-hygienically.

Geometry and network

The above insights on the physical theory itself are mostly negative, indicating what this sort of theory of physics would not be like, what characteristics of conventional quantum math it would not have.  What sort of structure would it have?

I'm not looking for detailed math, just yet, but the overall shape into which the details would be cast.  Some detailed math will be needed, before things go much further, to demonstrate that the proposed approach is capable of generating predictions sufficiently consistent with quantum mechanics, keeping in mind the well-known no-go result of Bell's Theorem.  I'm aware of the need; the question, though, is not whether Bell's Theorem can be sidestepped — of course it can, like any other no-go theorem, by blatantly violating one of its premises — but whether it can be sidestepped by a certain kind of theory.  So the structure of the theory is part of the possibility question, and needs to be settled before we can ask the question properly.

In fact, one of my concerns for this sort of theory is that it might have too many ways to get around Bell's Theorem.  Occam's Razor would not look favorably on a theory with redundant Bell-avoidance devices.

Let's now set aside locality for a moment, and consider nondeterminism.  Bell's Theorem calls (in combination with some experimental results that are, somewhat inevitably, argued over) for chronological nondeterminism, that is, nondeterminism relative to the time evolution of the physical system.  One might, speculatively, be able to approximate that sort of nondeterminism arbitrarily well, in a fundamentally non-local theory, by exploiting the assumption that the physical system under consideration is trivially small relative to the whole cosmos.  We might be able to draw on interactions with distant elements of the cosmos to provide a more-or-less "endless" supply of pseudo-randomness.
I considered this possibility in the earlier post on co-hygiene, and it is an interesting theoretical question whether (or, at the very least, how) a theory of this sort could in fact generate the sort of quantum probability distribution that, according to Bell's Theorem, cannot be generated by a chronologically deterministic local theory.  The sort of theory I'm describing, however, is merely a way to provide a local illusion of nondeterminism in a non-local theory with global determinism — and when we're talking chronology, it is difficult even to define global determinism (because, thanks to relativity, "time" is tricky to define even locally; made even trickier since we're now contemplating a theory lacking the sort of continuity that relativity relies upon; and is likely impossible to define globally, thanks to relativity's deep locality).  It's also no longer clear anymore why one should expect chronological determinism at all. A more straightforward solution, seemingly therefore favored by Occam's Razor, is to give up on chronological determinism and instead acquire mathematical determinism, by the arguably "obvious" strategy of supposing that the whole of spacetime evolves deterministically along an orthogonal dimension, converting unknown initial conditions (initial in the orthogonal dimension) into chronological nondeterminism.  I demonstrated the principle of this approach in an earlier post.  It is a bit over-powered, though; a mathematically deterministic theory of this sort — moreover, a mathematically deterministic and mathematically local theory of this sort — can readily generate not only a quantum probability distribution of the sort considered by Bell's Theorem, but, on the face of it, any probability distribution you like.  This sort of excessive power would seem rather disfavored by Occam's Razor. The approach does, however, seem well-suited to a co-hygiene-directed theory.  Church-Rosser-ness implies that term rewriting should be treated as reasoning rather than directly as chronological evolution, which seemingly puts term rewriting on a dimension orthogonal to spacetime.  The earlier co-hygiene post noted that calculi, which converge to an answer via Church-Rosser-ness, contrast with grammars, which are also term-rewriting systems but exist for the purpose of diverging and are thus naturally allied with mathematical nondeterminism whereas calculi naturally ally with mathematical determinism.  So our desire to exploit the calculus/physics analogy, together with our desire for abstract separability of parts, seems to favor this use of a rewriting dimension orthogonal to spacetime. A puzzle then arises about the notion of mathematical locality.  When the rewriting relation, through this orthogonal dimension (which I used to call "meta-time", though now that we're associating it with reasoning some other name is wanted), changes spacetime, there's no need for the change to be non-local.  We can apparently generate any sort of physical laws, quantum or otherwise, without the need for more than strictly local rewrite rules; so, again by Occam's Razor, why would we need to suppose a whole elaborate non-local "network topology"?  A strictly local rewriting rule sounds much simpler. Consider, though, what we mean by locality.  
Both nondeterminism and locality must be understood relative to a dimension of change, thus "chronological nondeterminism"; but to be thorough in defining locality we also need a notion of what it means for two elements of a system state to be near each other.  "Yes, yes," you may say, "but we have an obvious notion of nearness, provided by the geometry of spacetime."  Perhaps; but then again, we're now deep enough in the infrastructure that we might expect the geometry of spacetime to emerge from something deeper.  So, what is the essence of the geometry/network distinction in vau-calculus? A λ-calculus term is a syntax tree — a graph, made up of nodes connected to each other by edges that, in this case, define the potential function-application relationships.  That is, the whole purpose of the context-free syntax is to define where the interactions — the redex patterns for applying the β-rule — are.  One might plausibly say much the same for the geometry of spacetime re gravity, i.e., location in spacetime defines the potential gravitational interactions.  The spacetime geometry is not, evidently, hierarchical like that of λ-calculus terms; that hierarchy is apparently a part of the function-application concept.  Without the hierarchy, there is no obvious opportunity for a direct physical analog to the property of compatibility in term-rewriting calculi. The network topology, i.e., the variables, provide another set of connections between nodes of the graph.  These groups of connection are less uniform, and the variations between them do not participate in the redex patterns, but are merely tangential to the redex patterns thus cuing the engagement of a variable structure in a rewriting transformation.  In vau-calculi the variable is always engaged in the redex through its binding, but this is done for compatibility; by guaranteeing that all the variable instances occur below the binding in the syntax tree, the rewriting transformation can be limited to that branch of the tree.  Indeed, only the λ bindings really have a fixed place in the geometry, dictated by the role of the variable in the syntactically located function application; side-effect-ful bindings float rather freely, and their movement through the tree really makes no difference to the function-application structure as long as they stay far enough up in the tree to encompass all their matching variable instances.  If not for the convenience of tying these bindings onto the tree, one might represent them as partly or entirely separate from the tree (depending on which kind of side-effect one is considering), tethered to the tree mostly by the connections to the bound variable instances.  The redex pattern, embedded within the geometry, would presumably be at a variable instance.  Arranging for Church-Rosser-ness would, one supposes, be rather more challenging without compatibility. Interestingly, btw, of the two classes of side-effects considered by vau-calculus (and by Felleisen), this separation of bindings from the syntax tree is more complete for sequential-state side-effects than for sequential-control side-effects — and sequential control is much more simply handled in vau-calculus than is sequential state.  I'm still wondering if there's some abstract principle here that could relate to the differences between various non-gravitational forces in physics, such as the simplicity of Maxwell's equations for electromagnetism. 
This notion of a binding node for a variable hovering outside the geometry, tethered more-or-less-loosely to it by connections to variable instances, has a certain vague similarity to the aggressive non-locality of quantum wave functions.  The form of the wave function would, perhaps, be determined by a mix of the nature of the connections to the geometry together with some sort of blurring effect resulting from a poor choice of representing structures; the hope would be that a better choice of representation would afford a more focused description. I've now identified, for vau-calculus, three structural differences between the geometry and the network. • The geometry contains the redex patterns (with perhaps some exotic exceptions). • The geometric topology is much simpler and more uniform than the network topology. • The network topology is treated hygienically by all rewriting transformations, whereas the geometry is treated co-hygienically only by one class of rewriting transformations (β). But which of these three do we expect to carry over to physics? The three major classes of rewriting operations in vau-calculus — function application, sequential control, and sequential state — all involve some information in the term that directs the rewrite and therefore belongs in the redex pattern.  All three classes of operations involve distributing information to all the instances of the engaged variable.  But, the three classes differ in how closely this directing information is tied to the geometry. For function application, the directing information is entirely contained in the geometry, the redex pattern of the β-rule, ((λx.T1)T2).  The only information about the variable not contained within that purely geometric redex pattern is the locations of the bound instances. For sequential control, the variable binder is a catch expression, and the bound variable instances are throw expressions that send a value up to the matching catch.  (I examined this case in detail in an earlier post.)  The directing information contained in the variable, beyond the locations of the bound instances, would seem to be the location of the catch; but in fact the catch can move, floating upward in the syntax tree, though moving the catch involves a non-co-hygienic substitutive transformation — in fact, the only non-co-hygienic transformation for sequential control.  So the directing information is still partly tied to the syntactic structure (and this tie is somehow related to the non-co-hygiene).  The catch-throw device is explicitly hierarchical, which would not carry over directly to physics; but this may be only a consequence of its relation to the function-application structure, which does carry over (in the broad sense of spacetime geometry).  There may yet be more to make of a side analogy between vau-calculus catch-throw and Maxwell's Equations. For sequential state, the directing information is a full-blown environment, a mapping from symbols to values, with arbitrarily extensive information content and very little relation to geometric location.  The calculus rewrite makes limited use of the syntactic hierarchy to coordinate time ordering of assignments — not so much inherently hierarchical as inherently tied to the time sequencing of function applications, which itself happens to be hierarchical — but this geometric connection is even weaker than for catch-throw, and its linkage to time ordering is more apparent.  
In correspondence with the weaker geometric ties, the supporting rewrite rules are much more complicated, as they moderate passage of information into and out of the mapping repository. "Time ordering" here really does refer to time in broadly the same sense that it would arise in physics, not to rewriting order as such.  That is, it is the chronological ordering of events in the programming language described by the rewriting system, analogous to the chronological ordering of events described by a theory of physics.  Order of rewriting is in part related to described chronology, although details of the relationship would likely be quite different for physics where it's to do with relativity.  This distinction is confusing even in term-rewriting PL semantics, where PL time is strictly classical; one might argue that confusion between rewriting, which is essentially reasoning, and evaluation, which is the PL process reasoned about, resulted in the unfortunately misleading "theory of fexprs is trivial" result which I have discussed here previously. It's an interesting insight that, while part of the use of syntactic hierarchy in sequential control/state — and even in function application, really — is about compatibility, which afaics does not at all carry over to physics, their remaining use of syntactic hierarchy is really about coordination of time sequencing, which does occur in physics in the form of relativity.  Admittedly, in this sort of speculative exploration of possible theories for physics, I find the prospect of tinkering with the infrastructure of quantum mechanics not nearly as daunting as tinkering with the infrastructure of relativity. At any rate, the fact that vau-calculus puts the redex pattern (almost always) entirely within a localized area of the syntax, would seem to be more a statement about the way the information is represented than about the geometry/network balance.  That is, vau-calculus represents the entire state of the system by a syntactic term, so each item of information has to be given a specific location in the term, even if that location is chosen somewhat arbitrarily.  It is then convenient, for time ordering, to require that all the information needed for a transformation should get together in a particular area of the term.  Quantum mechanics may suffer from a similar problem, in a more advanced form, as some of the information in a wave function may be less tied to the geometry than the equations (e.g. the Schrödinger equation) depict it.  What really makes things messy is devices that are related to the geometry but less tightly so than the primary, co-hygienic device.  Perhaps that is the ultimate trade-off, with differently structured devices becoming more loosely coupled to the geometry and proportionately less co-hygienic. All of which has followed from considering the first of three geometry/network asymmetries:  that redex patterns are mostly contained in the geometry rather than the network.  The other two asymmetries noted were  (1) that the geometric structure is simple and uniform while the network structure is not, and  (2) that the network is protected from perturbation while the geometry is not — i.e., the operations are all hygienic (protecting the network) but not all are co-hygienic (protecting the geometry).  
Non-co-hygiene complicates things only moderately, because the perturbations are to the simple, uniform part of the system configuration; all of the operations are hygienic, so they don't perturb the complicated, nonuniform part of the configuration.  Which is fortunate for mathematical treatment; if the perturbations were to the messy stuff, it seems we mightn't be able to cope mathematically at all.  So these two asymmetries go together.  In my more cynical moments, this seems like wishful thinking; why should the physical world be so cooperative?  However, perhaps they should be properly understood as two aspects of a single effect, itself a kind of separability, the same view I've recommended for Church-Rosser-ness; in fact, Church-Rosser-ness may be another aspect of the same whole.  The essential point is that we are able to usefully consider individual parts of the cosmos even though they're all interconnected, because there are limits on how aggressively the interconnectedness is exercised.  The "geometry" is the simple, uniform way of decomposing the whole into parts, and "hygiene" is an assertion that this decomposition suffices to keep things tractable.  It's still fair to question why the cosmos should be separable in this way, and even to try to build a theory of physics in which the separation breaks down; but there may be some reassurance, re Occam's Razor, in the thought that these two asymmetries (simplicity/uniformity, and hygiene) are two aspects of a single serendipitous effect, rather than two independently serendipitous effects. Cosmic structure Most of these threads are pointing toward a rewriting relation along a dimension orthogonal to spacetime, though we're lacking a good name for it atm (I tend to want to name things early in the development process, though I'm open to change if a better name comes along). One thread, mentioned above, that seems at least partly indifferent to the rewriting question, is that of changes in the character of quantum mechanics at cosmological scale.  This relates to the notion of decoherence.  It was recognized early in the conceptualization of quantum mechanics that a very small entangled quantum system would tend to interact with the rest of the universe and thereby lose its entanglement and, ultimately, become more classical. We can only handle the quantum math for very small physical systems; in fact, rather insanely small physical systems.  Intuitively, what if this tendency of entanglement to evaporate when interacting with the rest of the universe ceases to be valid when the size of the physical system is sufficiently nontrivial compared to the size of the whole universe?  In the traditional quantum mechanics, decoherence appears to be an all-or-nothing proposition, a strict dichotomy tied to the concept of observation.  If something else is going on at large scales, either it is an unanticipated implication of the math-that-we-can't-do, or it is an aspect of the physics that our quantum math doesn't include because the phenomena that would cause us to confront this aspect are many orders of magnitude outside anything we could possibly apply the quantum math to.  It's tantalizing that this conjures both the problem of observation, and the possibility that quantum mechanics may be (like Newtonian mechanics) only an approximation that's very good within its realm of application. The persistently awkward interplay of the continuous and discrete is a theme I've visited before.  
Relativity appears to have too stiff a dose of continuity in it, creating a self-reference problem even in the non-quantum case (iirc Einstein had doubts on this point before convincing himself the math of general relativity could be made to work); and when non-local effects are introduced for the quantum case, continuity becomes overconstraining.  Quantum gravity efforts suffer from a self-reference problem on steroids (non-renormalizable infinities).  The Big Picture perspective here is that non-locality and discontinuity go together because a continuum — as simple and uniform as it is possible to be — is always going to be perceived as geometry. The non-local network in vau-calculus appears to be inherently discrete, based on completely arbitrary point-to-point connections defined by location of variable instances, with no obvious way to set up any remotely similar continuous arrangement.  Moreover, the means I've described for deriving nondeterminism from the network connections (on which I went into some detail in the earlier post) exploits the potential for chaotic scrambling of discrete point-to-point connections by following successions of links hopscotching from point to point.  While the geometry might seem more amenable to continuity, a truly continuous geometry doesn't seem consistent with point-to-point network connections, either, as one would then have the prospect of an infinitely dense tangle of network connections to randomly unrelated remote points, a sort of probability-density field that seems likely to wash out the randomness advantages of the strategy and less likely to be mathematically useful; so the whole rewriting strategy appears discrete in both the geometry and network aspects of its configuration as well as in the discrete rewriting steps themselves. The rewriting approach may suffer from too stiff a dose of discreteness, as it seems to force a concrete choice of basic structures.  Quantum mechanics is foundationally flexible on the choice of elementary particles; the mathematical infrastructure (e.g. the Schrödinger equation) makes no commitment on the matter at all, leaving it to the Hamiltonian Ĥ.  Particles are devised comparatively freely, as with such entities as phonons and holes.  Possibly the rewriting structure one chooses will afford comparable flexibility, but it's not at all obvious that one could expect this level of versatile refactoring from a thoroughly discrete system.  Keeping in mind this likely shortfall of flexibility, it's not immediately clear what the basic elements should be.  Even if one adopts, say, the standard model, it's unclear how that choice of observable particles would correspond to concrete elements in a discrete spacetime-rewriting system (in one "metaclassical" scenario I've considered, spacetime events are particle-like entities tracing out one-dimensional curves as spacetime evolves across an orthogonal dimension); and it is by no means certain that the observable elements ought to follow the standard model, either.  As I write this there is, part of the time, a cat sitting on the sofa next to me.  It's perfectly clear to me that this is the correct way to view the situation, even though on even moderately closer examination the boundaries of the cat may be ambiguous, e.g. at what point an individual strand of fur ceases to be part of the cat.  
By the time we get down to the scale where quantum mechanics comes into play and refactoring of particles becomes feasible, though, is it even certain that those particles are "really" there?  (Hilaire Belloc cast aspersions on the reality of a microbe merely because it couldn't be seen without the technological intervention of a microscope; how much more skepticism is recommended when we need a gigantic particle accelerator?) Re the structural implications of quasiparticles (such as holes), note that such entities are approximations introduced to describe the behavior of vastly complicated systems underneath.  A speculation that naturally springs to mind is, could the underlying "elementary" particles be themselves approximations resulting from complicated systems at a vastly smaller scale; which would seem problematic in conventional physics since quantum mechanics is apparently inclined to stop at Planck scale.  However, the variety of non-locality I've been exploring in this thread may offer a solution:  by maintaining network connections from an individual "elementary" particle to remote, and rather arbitrarily scrambled, elements of the cosmos, one could effectively make the entire cosmos (or at least significant parts of it) serve as the vastly complicated system underlying the particle. It is, btw, also not certain what we should expect as the destination of a spacetime-rewriting relation.  An obvious choice, sufficient for a proof-of-concept theory (previous post), is to require that spacetime reach a stable state, from which there is either no rewriting possible, or further rewriting leaves the system state unchanged.  Is that the only way to derive a final state of spacetime?  No.  Whatever other options might be devised, one that comes to mind is some form of cycle, repeating a closed set of states of spacetime, perhaps giving rise to a set of states that would manifest in more conventional quantum math as a standing wave.  Speculatively, different particles might differ from each other by the sort of cyclic pattern they settle into, determining a finite — or perhaps infinite — set of possible "elementary particles".  (Side speculation:  How do we choose an initial state for spacetime?  Perhaps quantum probability distributions are themselves stable in the sense that, while most initial probability distributions produce a different final distribution, a quantum distribution produces itself.) Granting that the calculus/physics analogy naturally suggests some sort of physical theory based on a discrete rewriting system, I've had recurring doubts over whether the rewriting ought to be in the direction of time — an intuitively natural option — or, as discussed, in a direction orthogonal to spacetime.  At this point, though, we've accumulated several reasons to prefer rewriting orthogonal to spacetime. Church-Rosser-ness.  CR-ness is about ability to reason separately about the implications of different parts of the system, without having to worry about which reasoning to do first.  The formal property is that whatever order one takes these locally-driven inferences in ("locally-driven" being a sort of weak locality), it's always possible to make later inferences that reach a common conclusion by either path.  This makes it implausible to think of these inference steps as if they were chronological evolution. Bell's Theorem.  The theorem says, essentially, the probability distributions of quantum mechanics can't be generated by a conventionally deterministic local theory.  
Could it be done by a non-local rewriting theory evolving deterministically forward in time?  My guess would be, probably it could (at least for classical time); but I suspect it'd be rather artificial, whereas my sense of the orthogonal-dimension rewriting approach (from my aforementioned proof-of-concept) is that it ought to work out neatly.

Relativity.  Uses an intensively continuous mathematical infrastructure to construct a relative notion of time.  It would be rather awkward to set an intensively discrete rewriting relation on top of this relative notion of time; the intensively discrete rewriting really wants to be at a deeper level of reality than any continuous relativistic infrastructure, rather than built on top of it (just as we've placed it at a deeper level than quantum entanglement), with apparent continuity arising from statistical averaging over the discrete foundations.  Once rewriting is below relativity, there is no clear definition of a "chronological" direction for rewriting; so rewriting orthogonal to spacetime is a natural device from which to derive relativistic structure.  Relativity is, however, a quintessentially local theory, which ought to be naturally favored by a predominantly local rewriting relation in the orthogonal dimension.  Deriving relativistic structure from an orthogonal rewriting relation with a simple causal structure also defuses the self-reference problems that have lingered about gravity.

It's rather heartening to see this feature of the theory (rewriting orthogonal to spacetime) — or really any feature of a theory — drawing support from considerations in both quantum mechanics and relativity.

The next phase of exploring this branch of theory — working from these clues to the sort of structure such a theory ought to have — seems likely to study how the shape of a spacetime-orthogonal rewriting system determines the shape of spacetime.  My sense atm is that one would probably want particular attention to how the system might give rise to a relativity-like structure, with an eye toward what role, if any, a non-local network might play in the system.  Keeping in mind that the β-rule's use of network topology, though co-hygienic, is at the core of what function application does and, at the same time, inspired my suggestion to simulate nondeterminism through repeatedly rescrambled network connections; and, likewise, keeping in mind evidence (variously touched on above) on the possible character of different kinds of generalized non-co-hygienic operations.
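As a concrete, if toy, illustration of the distinction drawn above (the β redex pattern living entirely in the local "geometry" of the term, the scattered bound instances of the variable forming the "network", and hygiene as the guarantee that a rewrite never perturbs that network), here is a minimal sketch of a single hygienic β-step for the plain λ-calculus. It is ordinary Python over ordinary λ-terms, not vau-calculus, and the term representation is just one arbitrary choice among many:

# Toy sketch, not the author's vau-calculus: plain λ-terms and one hygienic
# β-step.  The redex pattern ((λx.T1) T2) is checked purely locally
# ("geometry"); the bound instances of x scattered through T1 are the
# non-local part ("network"); the renaming in subst() is what keeps the
# step hygienic (no free variable of the argument gets captured).

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def free_vars(t):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

def fresh(name, avoid):
    while name in avoid:
        name += "'"
    return name

def subst(t, x, s):
    # capture-avoiding substitution  t[x := s]
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fn, x, s), subst(t.arg, x, s))
    if t.param == x:                      # x is shadowed below this binder
        return t
    if t.param in free_vars(s):           # rename first: this is hygiene
        p = fresh(t.param, free_vars(s) | free_vars(t.body))
        return Lam(p, subst(subst(t.body, t.param, Var(p)), x, s))
    return Lam(t.param, subst(t.body, x, s))

def beta_step(t):
    # fire the rule only when the local redex pattern ((λx.T1) T2) is present
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return subst(t.fn.body, t.fn.param, t.arg)
    return t

# ((λx.λy.x) y)  reduces to  λy'.y  — the bound y is renamed, the free y survives
print(beta_step(App(Lam("x", Lam("y", Var("x"))), Var("y"))))

The non-co-hygienic operations discussed in the post would be precisely the ones that, unlike this β-step, are allowed to rearrange the surrounding term structure rather than merely consulting it.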
Friday, January 14, 2011
Philosophy by metabolism again…
From "Darwin's Rape Whistle" by Jesse Bering (13 Jan. 2011): Thornhill and Palmer, Malamuth, and the many other investigators studying rape through an evolutionary lens, take great pains to point out that "adaptive" does not mean "justifiable," but rather only mechanistically viable. Yet dilettante followers may still be inclined to detect a misogyny in these investigations that simply is not there. As University of Michigan psychologist William McKibbin and his colleagues write in a 2008 piece for the Review of General Psychology, "No sensible person would argue that a scientist researching the causes of cancer is thereby justifying or promoting cancer. Yet some people argue that investigating rape from an evolutionary perspective justifies or legitimizes rape." I want to rework this paragraph to see what might fall out: Investigators studying honesty through an evolutionary lens, take great pains to point out that "adaptive" does not mean "vicious," but rather only mechanistically viable. Yet dilettante followers may still be inclined to detect a naivete in these investigations that simply is not there. As University of Burpelson psychologist Manfried Rawhide and his colleagues write in a 2079 piece for the Review of Major Pneumatology, "No sensible person would argue that a scientist researching the causes of cancer is thereby justifying or promoting cancer. Yet some people argue that investigating honesty from an evolutionary perspective condemns or undermines honesty." The second paragraph exemplifies a rebuttal of Bulverism. Bulverism is the tactic of assuming some persons are wrong based on physiological and psychological––or, in this case, evolutionary––factors which dictate their rational biases. We may "believe in" honesty as a fundamental "moral" principle, the Bulverist argues, but this is only because we have been shaped by our evolutionary past to be so biased. Therefore, the preference for honesty, under the aegis of "morality", is just atavistic naivete, which ought to be supplanted by a truly rational ethics that is cognizant of the autonomy we now have over our own natural selection. Bering casts his vote against the Bulverists thus: The unfortunate demonization of this brand of inquiry is rooted in the fallacy of biological determinism (according to which men are programmed by their genes to rape and have no free will to do otherwise) and the naturalistic fallacy (that because rape is natural it must be acceptable). These are resoundingly false assumptions that reveal a profound ignorance of evolutionary biology. Yet the purpose of the remaining article is not to belabor that tired ideological dispute, but to look at things from the female genetic point of view. We've heard the argument that men may have evolved to sexually assault women. Have women evolved to protect themselves from men? Bering's point is that, just because the rape instinct is strong in numerous males, it does not follow that rape is therefore morally acceptable. The implication of his article, however, points in an obverse direction, namely, that because rape is bad, though natural selection has kept it going, the equally naturally selected measures of the female body against rape are a kind of good. It is interesting to note how Darwinian ethics is essentially Kantian in so far as the former rejects behavior which, if applied on a species-wide level, would lead to the degradation and dissolution of prior reproductive success.
I will call this Darwikantian ethics. Kant, under the rubric of the "categorical imperative", argued that we should do only that which we believe could be practiced by everyone at all times, and abstain from that which we realize could not be practiced by all people at all times. As he writes in Grounding for the Metaphysics of Morals (tr. James W. Ellington, 3rd ed., Hackett, [1785] 1993, p. 30): "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." Lying, for instance, is unacceptable because, if everyone did it––i.e. if it became literally universally acceptable––, our entire means of communication and cooperation would collapse. Likewise, Darwinian ethics rejects selfishness on the grounds that widespread selfishness––i.e. as a 'universal' feature of human behavior––would undermine ultimate reproductive stability. And "it would be bad," or at least as "bad" as a Darwinian is allowed to say something is in 'purely' adaptive terms. Aye, there's the rub. If the basis for defending, say, altruism is that altruism has generally promoted reproductive success in the past, then we can take it as a general ethical principle that that which is morally defensible is morally defensible because it promotes reproductive success. On this principle, however, what basis do we have for condemning rape in every case? Presumably, again, the saving principle is the Darwikantian categorical imperative (DCI), but this is a feeble moral guide for at least two reasons. First, how would we define the rapist's principle for action? Does he believe it would be a universal law that every man should rape every woman under any circumstances? Certainly not, since he would certainly defend his mother and sister and other favored females against male aggressors. His principle may, therefore, be so nuanced that it could be a universal basis for action, say, "Rape a woman only when the coast is clear, you have already sired at least another child, she does not appear to be pregnant, etc." If the conditions for the action were so specific that, even if universally accepted, they would come together only rarely, and therefore would not undermine the collective reproductive success of the species, it's hard to see how the DCI could coherently reject it. Further, if the rapist used a prophylactic so that pregnancy and its burdens on the woman were not an issue, he'd seem to be that much less immoral. But surely such moral reasoning is amiss. A second problem with the DCI is that it cuts both ways. For, if an action cannot be "morally" endorsed unless it could be applied universally for the species, then altruism seems to be morally unacceptable. No species could survive if all its members all the time acted altruistically, since, if they literally never acted for their own interests, they would become paralyzed by inaction, like Buridan's ass, and probably starve to death. More realistically, if it were only the case that nearly everyone always acted altruistically (as we are, in fact, expected to make the case!), the altruists would eventually be overtaken by the minority of "deviants" acting selfishly. The point is that if the DCI proscribes actions that would have universally negative results, then altruism is morally proscribed by the DCI.
As soon as the proponent of the DCI admits there must be some 'intermediate' principle between sheer relativism and DCI-absolutism, however, she is back in the folds of traditional moral argumentation and Darwikantian ethics offers little, if any, light in the discussion. The institution of marriage, for instance, is seen as a good in Darwikantianism because it enhances social stability and thereby promotes reproductive success. This does not, however, mean everyone can or must get married, which shows once more that there is some other domain of moral wisdom by which otherwise "natural" behaviors are deemed justifiable and not merely "mechanistically viable." If marriage is wrong in some cases, presumably because a DCI-style universalization of such cases would undermine reproductive success, then it's hard to see why rape would not be right in some cases (say, as a form of cathartic vengeance which restores the social order by taking one male down a peg by the symbolic attack of his daughter or wife). That kind of socially beneficial "ritual rape" could be applied universally, since it would only apply in certain circumstances. But again, surely such moral reasoning is flawed. The notion of a universally applicable specific law is not incoherent; indeed, it is highly common in science. The Bode-Titius Law, for instance, is universally valid if taken in conjunction with limiting conditions (e.g. the absence of Neptune). Indeed, the whole of Newtonian physics is still scientifically, universally "true", even though it is theoretically false, when qualified thus and such. Likewise, quantum mechanics is technically deterministic according to the universal validity of the Schrödinger equations, though it is universally indeterminate in every specific case. Paradoxical, perhaps, but true. So, while rape––and altruism––would be universally unacceptable, specific cases of rape, and specific cases of altruism, would be acceptable in Darwikantianism as long as they are qualified in their particular applications. Yet, we all know that rape is intrinsically wrong, not merely generally undesirable. How do we know this, though? Not by a vague nod to natural selection, but rather by an awareness of the intrinsic principles of right human conduct. There seems to be an important difference between universally and absolutely true (i.e. between always potentially and intrinsically valid). I will not explore that difference now, mainly because I still must ponder it, but I want to close with a syllogism that captures the point of this post. 1. Humans are intrinsically moral agents. 2. Moral action is not intrinsically derived from natural selection. 3. Therefore, the nature of humans is neither intrinsically nor exhaustively based on natural selection. Because we can decide to be better than our instincts, we are better than the basis for our instincts.

One Brow said... A well-considered post, detailing a specific instance of the general notion that science is not a source for morality.

djr said... I wish I could say you were just beating a dead horse here, but unfortunately some people still labor under the delusion that 'selected for' entails 'good.' I wonder, though, whether we can't cut to the chase a little more directly. If 'morality' is important at all, it must be because agents who can act for reasons have good reasons to do what 'morality' requires.
So, for any purported moral requirement, the question we need to ask is whether or not there are good reasons to act in accordance with that requirement. Now, there is ample room for philosophical disagreement about what gives us good reasons to act. Humeans will say that our basic reasons for action flow from our desires; Kantians will say that our basic reasons for action derive from the formal requirements of coherent and integral practical reasoning; Aristotelians will say that our basic reasons for action take their normative force from the goods to be achieved in and through the exercise of our basic capacities as rational animals. Doubtless there are some other alternatives and subtle combinations of these three generic views, and this sketch greatly oversimplifies matters. But even painting with broad strokes, it should be immediately apparent that in no case does the fact, if it is one, that X has been selected for give us a reason to do X. The normative irrelevance of natural selection follows even on a Humean picture of practical reason, which is the view that most people who appeal to evolution when talking about ethics are presupposing. It may, of course, be true that certain desires have been selected for and are, as it were, 'hard-wired.' If we've got the desires, then ceteris paribus we have a prima facie reason to fulfill the desire. But even the simplest Humean views of practical reason recognize that it will usually be unreasonable to seek to fulfill any particular desire considered in isolation, because we all have a very complex set of desires. The reasonable thing to do will not be to, say, sexually assault anyone who strikes your fancy, but to act with a view to fulfilling more of one's desires more fully and over a longer period of time (though there are some disputes among Humeans about time preferences). So even if we adopt a very crude form of subjectivism in which the good just is desire-satisfaction, the fact that some desire is selected-for (if it is a fact) doesn't go very far. Perhaps even more importantly, the fact that it's selected-for doesn't matter at all; what matters is that it is a desire, and perhaps whether or not we can get rid of it. In other words, the Humean desire-satisfactionist would not be compelled to change his theory of practical reason in the slightest if it somehow turned out that human beings are not products of natural selection, but that the whole of life on earth is the product of a very ornery alien genius who crafted things to look as though they were products of natural selection. So, given that even one of the simplest theories of practical reason leaves no room for natural selection to have anything more than an indirect and causal relationship to what we have reasons to do, why bother working up any more sophisticated arguments against the 'selected-for = good' view?
Science, Mathematics, And Sufism
I know a professor of theoretical physics, with whom I've had many interesting discussions over the years. (Disclosure: I came to Sufism via science.) I wanted to do an interview on the topics we covered with someone who, like me, had progressed from science to Sufism. For those who are the least bit interested in science, physics, and mathematics, the article below will, I believe, prove quite rewarding. The language is simple and no higher mathematics is involved, except only briefly. Of the "99 Beautiful Names of God," one is al-Muhsi (The Reckoner, Appraiser, or Accountant): The One who possesses all quantitative knowledge, who comprehends everything, small or great, who knows the number of every single thing in existence. In Arabic, the root HSY connotes "to number, count, reckon, compute," "to collect in an aggregate by numbering," "to register or record something," "to take an account of something." I conclude that a more concise rendition in English would be: God the Mathematician. Of course, another of God's Beautiful Names, the Omniscient (al-Alim), is all-inclusive, so that God's Knowledge (ilm) encompasses mathematics, physics, and biology alike. But "the Mathematician" makes it more explicit. In fact, quantity (miqdar) and destiny (qadar) both derive from the root QDR, and thus are inseparably intertwined. On May 18, 2014, I recorded a lively conversation with my friend, who wishes to remain anonymous. Highlights from that discussion follow. Text in bold, in brackets, and below graphics belongs to me.
The incredibly sophisticated nanotech machine designs within a single cell. Watch it and weep. Then ask yourself: can this be the outcome of any random collocation of atoms? When a cell dies, it has precisely the same components. Why then do they lie motionless in the case of a dead cell?
So… Where shall we start? Well… They say, "When a person comes of age, s/he becomes responsible" [religion-wise]. Why? Because a person can comprehend the existence of God by reason alone. The mind is enough to know that God exists. A flower, a bit of soil, a car. Can these nice things have come about by themselves? We're talking about initial creation, of course. Once the mechanism is in place, after it becomes self-reproducing, things are easier. Order, disorder. What I've seen in life is, unless it's cultivated, nothing tends to improvement. If something has a chance of going wrong, it will. That's Murphy's Law. But there's such an established order that you don't have to be a professor, you could be a mountain peasant. When you look around, you see this exists. Your child is born. If you leave it alone, it won't grow up, the child will die. You have to show it exceptional care. There is no need for intelligence to know that a child has parents. That is, you already know it has parents. And this child that is the universe has a parent too, it has an Owner, a Creator. You go to the moon, you find a color television there. Would anyone in their right mind say, "This TV was formed spontaneously out of the ground"? This is absurd. But they do say that. It's called evolution by random mutation. What do they take refuge in? They take refuge in time. But the law of entropy tells us the exact opposite. Time is more of a negative factor in these matters. Time is something that degenerates, unless there is a driving force supporting the process.
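A small numerical aside of mine, not the speaker's, to make concrete the combinatorial point behind the "shaking" remarks that follow: under random rearrangement, any one designated ordered configuration is vanishingly rare, and the rarity grows explosively with system size. This is only the arithmetic of permutations; actual chemistry is not uniform random shuffling.

import math, random

def chance_of_order(n_tokens, trials=200_000, seed=1):
    # fraction of random shuffles of n distinct tokens that land exactly sorted
    rng = random.Random(seed)
    tokens = list(range(n_tokens))
    hits = 0
    for _ in range(trials):
        rng.shuffle(tokens)
        hits += tokens == sorted(tokens)
    return hits / trials

for n in (3, 5, 8):
    print(f"{n} tokens: observed {chance_of_order(n):.5f}  vs  1/{n}! = {1/math.factorial(n):.5f}")

# Already at 60 tokens, 1/60! is below 10**-81; shaking longer or harder only
# re-samples the same uniform distribution, it does not change how rare the
# ordered arrangements are.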
They say that radiation causes the mutations, but in all the examples I know of, radiation has a deleterious effect on living tissue. Radiation is one of the causes of cancer. "A drowning man will grasp at any straw." That's why a child is responsible upon reaching eighteen years of age. Because the child is no longer a child, s/he can analyze and see certain things. As a result, I think the intellect alone is sufficient to comprehend God. Prophethood and so on are something else. They're more specialized matters. Now science has a dead-end of this sort. They used to define the law of entropy as: "Left by themselves, systems tend to disorder." Now they've changed this, they've removed the word "disorder." They're trying to abstract entropy away from disorder, they're trying not to use the words "entropy" and "disorder" together. This is in the newer textbooks. Because otherwise, you ask: "how did this order come about?" Now they write entropy as an equation, they don't mention disorder. Physicists have tried to circumvent this, to find a solution to the question of entropy, and have wound up nowhere. That's the second law of thermodynamics, isn't it? Yes. And a peasant doesn't call this entropy, but he says, "If you don't tend your garden, you'll get weeds." If you were to bring together all the ingredients of a cell and shake them up, the probability that something will come of that is inconceivably less than 1 divided by 10^130, which is already a vanishingly small number. That is, it's zero. For all practical purposes, this means zero. [See Appendix A. We're talking about the first living, self-replicating cell.] But people usually miss the really important point here. If the probability of something occurring randomly is zero, then the probability that it did not occur by chance is a certainty. 1 − 10^−130 = 1 − 0 = 1. Now they don't emphasize that, of course! Mind-blowing Animations of Molecular Machines inside Your Body [TED]. To claim that all the intricate mechanisms and processes of life could have arisen from inert matter by blind chance, given no matter how many billions of years, is not just an insult to God's intelligence, but also to our own. It is to elevate the "intelligence" that can emerge from chance to the level of God's, to impute the highest IQ to random events. Is that anything other than "chance-olatry"—the worship of chance? And if you say it will form into a cell if shaken for umpteen billion years, that's an untestable hypothesis, and hence not science. Actually, quite to the contrary, entropy militates that not long afterwards, you'll have a homogeneous mixture, and it'll stay that way. Try it with two or three different powders or differently colored liquids, and you'll see. Shaking more vigorously, adding more energy, doesn't change the result. So time is no solution, either. On the contrary, time has an adverse effect. Hence, a mind that can't perceive this shouldn't be considered responsible. Because from the point of view of religion, there's no responsibility when there's a problem with the intellect. A sacred verse says, "God casts defilement on those who don't use their reason" (10:100). So you have to use your intellect. There are so many verses that say "men possessed of minds," "do you not reflect?" But we use our mind for other things. We know very well how to use it for diabolical stuff. What do scientists do when they're desperate? They resort to time. Whereas entropy tells us the exact opposite. So a cause is an unavoidable problem.
What do you do to get rid of it? You say there was a big bang, and before the big bang there was something else, and before that… you look for a way to wiggle out. Even if you didn't know about the big bang, I think one ought to know that this can't be of itself when one beholds this order. One has to see. This is insight. [For more on this see the Appendix B, taken from another discussion.]
Interacting Gears Synchronize Propulsive Leg Movements in a Jumping Insect (Science, 13 September 2013). Gear technology designed into legs (and hence the genes and DNA) of young planthoppers. The mechanical gear was invented around 300 B.C. by humans. For millions of years, a 3-millimeter long hopping insect known as Issus coleoptratus has had intermeshing gears on its legs with 10 to 12 tapered teeth, each about 80 micrometers (or 80 millionths of a meter) wide. The gears enable the creature to jump straight. The teeth even have filleted curves at the base, a design also used in human-made mechanical gears since it reduces wear over time. Right: screw-and-nut system in hip joint of the weevil Trigonopterus oblongus. The screw thread is half a millimeter in size. Weevils, of which there are 50 thousand species, are a kind of beetle, and have been around for 100 million years. These are examples of God's handiwork in His aspect of Engineer.
You pose a problem in mathematics. One person sees the solution in a second, another sees it in an hour, a third doesn't see it at all. I think this is like that, with the difference that psychology plays no role in a mathematical problem. Psychology does have a role when you look at nature and infer God. The way you were raised, what your parents taught you, what you received from your surroundings, can prevent you at that point. Because there's a phenomenon called hypnotism, and this is a form of hypnosis. I hypnotize someone, I plant the suggestion: "when you wake up, you won't see that phone." After they wake up, I ask for the phone. They just can't find the phone. These experiments have been performed. And human beings are hypnotized like that, only they're not aware of it. So that person can't ever find God, because they've been hypnotized since childhood. They've been conditioned. Conditioning takes time. The Prophet said, "Every child is born a Moslem, their parents turn them into something else." How do they do that? Just so, by conditioning. So the intellect is very important. But intellect is not enough by itself. Until about the year 1700, we talked trusting our intellect. Science didn't advance much. We talked for thousands of years. Physics was like history, like geography. Everybody was a physicist. How did this change? With Newton. Prior to the twentieth century, there are three great scientists: Newton, Galileo, and Maxwell. Maxwell isn't emphasized that much, but he did something of paramount importance. He's the one who solidified the mathematization of physics. Newton started the mathematics. He laid down the "method of fluxions" (differential calculus)… He introduced mathematics to mechanics. Galileo emphasized the importance of experiment. But Maxwell is the person who wrote down all electromagnetic phenomena in the form of differential equations. So there's a solid mathematization there. And at that point, a discrepancy in the equations presented itself: a conceptual discrepancy.
Maxwell resolved the discrepancy according to his own lights, he balanced the equations by adding another term. That's when it emerged mathematically that electromagnetic waves exist. And so, we actually owe the foundations of our present technology to Maxwell. The mathematization there is as significant as Newton's. The physicists of his time objected. One of the protesters was Faraday. Maxwell mathematized Faraday's Law, as well. Faraday's objection at that time was: By itself, mathematics does not include any laws of physics. In other words he's saying, you're doing this, but you're doing it in vain. He objects, he says it won't contribute much to physics. But Maxwell mathematizes these laws. Now this is very important in present-day physics. You pose a problem, you build a mathematical model of it. Writing the math is a skill all its own. Maxwell did this, and then the objections ceased. When Newton did it, they said, "You've done this, but physics has become a specialized science." We were all physicists before that. You've done this math, but it's a specialized field. So you've reduced it to a very small scope, they said, and the objections continued. The principle of gravitation, for example: you say it's mathematical, but you don't explain how it occurs. But after Maxwell, because there was that prediction, the objections ceased. Hence, mathematics accomplishes a very great thing. Looking at it from the viewpoint of classical mechanics, Galileo says, "Let's check it with experiment." The mathematical mind may be beautiful, but it's not everything. The superiority of the mathematical mind to other kinds of mind is that it is a very concrete form of mind. For instance, there's water vapor and then there's ice. But the second is concrete. Water vapor exists, too, but it's not as tangible as ice. Now you have your way of thinking, I have mine, she has her own. And the logic of each of us has internal weaknesses which we can't perceive. But mathematics prevents that. Mathematics has become concrete, that is, it has been tested, formulated, thought through by thousands of people. When you apply mathematics, you're automatically freed of the weaknesses, the fallacies of your personal logic. So mathematics is a more concrete form of logic, of the mind. I'm saying this in terms of its application to physics. Otherwise, there are fields where it can't be applied. It can't be applied that much to psychology. I don't know to what extent it will prove applicable to neuroscience, to modeling the brain. But much that is useful has come of this. We knew of seven planets; it was thanks to mathematics that the existence of the eighth planet was proved. Mathematics predicts. You do the calculations, they don't agree. The coordinates don't match, they diverge. Either our model is wrong, or something else is afoot that we don't know about. What is required for this to occur? You say, there has to be a planet of this mass in such-and-such a position. They say, look at this point on this day, at this hour, and you'll see a planet. That's how the eighth planet was first sighted. Two astronomers, one French and the other British, are involved. Lo and behold, on that day at that hour at that point, a planet [Neptune] is observed. Now this invalidates Faraday's claim. He was saying that mathematics could not make physical predictions on its own. What did it do? It predicted. That is, mathematics is usually regarded as a tool. But it's slowly going beyond being a tool.
It’s becoming a means of discovery. It’s becoming something of a trailblazer, a pioneer. A tool is a thing that helps you do something, it’s passed beyond that. And the same with the ninth planet, too. This time, perturbations in the orbit of the eighth planet led to the discovery of the ninth [Pluto]. But the ninth planet was discovered with more difficulty. And then it was demoted from the status of being a planet. They call them “dwarf planets.” Because of the tenth planet, the ninth was demoted. Now, back to Maxwell: he says there’s a discrepancy, a mathematical, a logical discrepancy. As he gets rid of that, he finds a wave equation there. Hence he says, electromagnetic waves exist. He calculates their velocity, it turns out to be the speed of light. Therefore, says he, light is an electromagnetic wave. And these are all things that were subsequently verified experimentally. Hertz, Marconi… The basis of today’s technology and communications lies there. This is one of the major breakthroughs. What did mathematics do? It paved the way for something. It led to a new discovery. After being confirmed by experiment, of course. In physics, one should never forget that principle of Galileo. mathpauldirac1Examples of this abound. We now come to quantum mechanics. For instance, in quantum mechanics, Dirac’s equation. Dirac’s equation renders quantum mechanics and relativity compatible with each other. The solutions of this equation are more accurate than those of the Schrödinger equation.  But here, too, there is a discrepancy, just as there was in the case of electromagnetics. Then Dirac says, there has to be a particle with the same mass as an electron, but with opposite charge. Within a year or two, the positron is discovered. It was so unexpected that the discoverer was awarded the Nobel Prize. Now what has mathematics done? It has again led to a new discovery, it has served as the means to finding a new physical entity. Again, it has passed beyond being a mere tool. And there are many more examples like this. Now, physicists are amazed by this. Eugene Wigner wrote an article on “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Stephen Hawking has a saying like that, too. He asks: “What breathes fire into the equations?” This confusion arises from the assumption that the system excludes the God concept. Otherwise, they wouldn’t be amazed. Because such precision… everywhere there is a logic, a mind, an Infinite Mind at work. Entropy is the cause of our amazement: how can such order exist? The assumptions are wrong. These phenomena clearly tell us that these things can’t happen by themselves, there is an Infinite Mind here. And as far as physicists are concerned, this is a real conversation-stopper. From here, scientists and philosophers go on to other things. They say [with mathematician David Hilbert]: “Mathematics is a game.” Well, if it is a game, how come it’s so effective in physics? Mathematics is real. But there is no mathematics in nature. The numbers 2, 3, … don’t exist as objects in nature. You infer these yourself. For instance, half-integers. Irrational numbers. Rational numbers. Complex numbers. These are entirely constructs of the mind. For example, complex numbers were invented completely independently of physics, so that certain mathematical equations could have a solution. And what do we find, centuries after they were invented? Without complex numbers, quantum mechanics cannot be formulated. 
There are four or five formulations of quantum mechanics, all of them require complex numbers. There’s just no way to avoid them. Isn’t it the same with electricity? No. Complex numbers provide simplicity there. But you can do the calculations without resorting to complex numbers at all. Here, on the other hand, you can’t do anything without complex numbers. You don’t have that luxury. And here again, the question arises: Weren’t numbers a construct of the mind? Why are mind and nature such an inseparable whole? These are presumably surprising questions for physicists. Also, there is intellect there, but not every intellect. That’s why Galileo is so important. You have to test it against nature, to check whether that intellect is there or not. For instance, there are four kinds of what are called “division algebras”: real numbers, complex numbers, quaternions and octonions. If a number has an inverse, it’s part of a division algebra. As you move from the first to the last, you lose a property at each stage. Real numbers have the property of ordering: for instance, 5 is greater than 3. With complex numbers, you can no longer say which is greater, 3 + 5i or 5 – 6i. With quaternions, you lose the property of commutativity, and with octonions, you also lose the property of associativity. Now real numbers and complex numbers are used in nature, but quaternions and octonions are not. A group of physicists tried to formulate quantum mechanics in terms of quaternions, and nothing came of it. And the same holds for octonions. So that’s why experimentation is so important: you have to check the applicability of your mathematics to reality. In conclusion, the effectiveness of mathematics is unreasonable only if you exclude God. If you include that concept, then it becomes eminently reasonable. Now Plato says that mathematics has a reality independent of us. He says we access it by extensions of the mind, and project it on the physical world. That’s why it’s called a Platonic reality. And the same with love: you love another, that person doesn’t know anything about it, it’s all in the lover’s mind. That’s why that love is Platonic love. But this Platonic reality is a peculiar kind of reality. Where would physics be without mathematics? We would still be talking. We would be in the situation that existed prior to 1600-1700. There would still be a physics, crude, experimental, somewhat like meteorology. In meteorology you make forecasts. But is it like that now? I launch a rocket, thanks to my calculations I know where it’s going to fall, down to the centimeter. With our calculations, we can predict the exact time and duration of a solar or lunar eclipse that will happen 100 years from now down to the second. Now these are not trivial things. Mathematics equates with the mind, an intelligence that pervades the entire universe. Now we have trouble admitting this. So we don’t want to see or hear certain things. The question of entropy remains unresolved. The formation of the first living cell remains unresolved. It cannot be resolved, because there’s the law of entropy. Those experiments have been performed, that organic soup has been made. Stanley Miller did one experiment, Sidney Fox did another. You place the gases you imagine composed the atmosphere at that time, you give the electric current, that corresponds to lightning strokes. You get amino acids. Amino acids are the building blocks of proteins, so you conclude that life emerged from there. 
But it’s not merely a giant step, it’s an impossible step, from amino acids to proteins, if you’re going by chance. OK, how are these organized? Sidney Fox did that experiment. Nothing came of it. By that time, ten years had passed. And nothing would come of it if they were to remain there for ten million years more, because there’s the law of entropy. We say that given time, we’ll solve this. And that’s just kicking the can down the road. Now, why is mathematics so effective? Because nature is the product of a mind. There’s an Infinite Mind in the universe, a Mind that beggars our minds, that makes us look like mongoloids. Moreover, that Mind also has to possess infinite power, in order to enforce those laws all across the universe, from the macrocosmos down to the microcosmos at every level. Take a single cell, a single human, a single life form. There’s a phenomenal mechanism there, there’s a monumental set of laws. We’ve understood little bits and pieces of these, that is, what we understand doesn’t amount to much. And that, we understand by isolating. For example, we understand an atom, we try to understand a hydrogen atom. We act from the principle of linear superposition. We dismantle things like a clock and assume that like a clock, they’ll work in the same way when they’re reassembled. Of course, because our approach is atomistic. We haven’t seen any other kind, we don’t know. And we can’t wrap our minds around it, because it’s nothing comprehensible. Now a holistic approach, that’s something else. It’s the outcome of a different state of consciousness. Since we’re in atomistic states of consciousness, our minds too are atomistic. If we had holistic states of consciousness, perhaps we would have holistic minds. There are people with holistic consciousness. We don’t always understand what they say, because they’re talking from a different state of consciousness. A butterfly has a consciousness of its own, a mind of its own. A human has a consciousness of his own, a mind of his own. It’s like that, that is. There’s a relationship between consciousness and mind. You always say that “Quantum physics is holistic”… Not many people realize this. Before Newton, mathematics is at the level of arithmetic. Until quantum mechanics, in classical physics, we understand events atomistically, that is, we understand them one at a time. We draw diagrams, those diagrams have correlates. The resultant of two forces, and so on. In quantum mechanics, the dose of mathematics is stepped up even more. But our understanding diminishes. We have difficulty in comprehending the phenomena. In classical physics, we thought we understood the phenomena. We could take events on a piecemeal basis. In quantum mechanics, there’s a helium atom, it has 2 electrons and a nucleus, the nucleus has 2 protons and 2 neutrons. But we deal with it as a system. When we speak of the energy level of the helium atom, we don’t mean the energy level of the electron, the nucleus, or the proton, we consider the energy level of the system. The phenomenon is approached as a whole. What happens then? We can’t draw a diagram. The diagrams we draw are abstract. Hence, they have no pictorial representation. Pictures are out. So, three stages: first, arithmetic. Next, a physics at the level of calculus. Third, again physics at the level of calculus, but depiction is lost. Because our assumptions changed. We approached the phenomenon holistically. Why did we do that? Not because we wanted to. We were forced to do so. 
In order to make sense of the experiments. We can’t comprehend the results of experiments. The experiment is there, but its results don’t make any sense. We had to derive this formulation in spite of ourselves. The experiments forced this on us. And what is essential in physics is the experiment. Then we sat down and thought about what it was we had discovered. We had found something holistic. How about a definition of “holistic,” while we’re at it? First, let’s clarify what we mean by “atomistic.” Let’s say there’s an event in the solar system. We take the sun separately, the moon separately, this planet separately. Then we do our calculations. Each component has an identity of its own. The values of every component are important. Now, for example, the helium atom, the hydrogen atom, the individual states of protons, of electrons, are no longer of importance. We’re looking at it as a system, that is, as a whole. That’s what “holistic” is. In other words, not to go from the parts to the whole, but to deal only and directly with the whole. To make a jump, could we deal with the universe in the same way? The wave function of the universe. There have been studies like that. [Everett-Wheeler-Graham (EWG), “The Many-Worlds Interpretation of Quantum Mechanics.”] Here’s what this means: let there be a wave function, let all that can be known in the universe be in that wave function. And in the representation of the hydrogen atom, there’s all the information related to the system. Now this is a significant jump. First, it places us in a more helpless situation. It’s like Gödel’s theorems in mathematics. What do Gödel’s theorems do? They undermine the foundations of mathematics, they make it more insecure. We used to be determinists, we used to know everything. Now, we don’t know everything. We don’t know what we’re going to find when we conduct an experiment. We can only say, you’ll find this with this probability and that with that probability. And I don’t know how correct that is, because in order to say that with certainty, you’d have to conduct an infinite number of experiments. Only the menu I’m offering you is definite. But I can’t tell you which item you’ll discover. Because this is a holistic matter, there’s an indeterminacy there. There’s always this in holistic things: a lack of certainty. We can’t understand it, but in the end, we can know the energy levels. And we can do this with great accuracy. We can observe them in experiments. And this has been a very great success. [Quantum electrodynamics, or QED, has been tested to an accuracy of one part in 100 billion (more recently, in 2006, eight parts in a trillion). The famous American physicist Richard Feynman compared this degree of accuracy to mathematically calculating the distance between New York and Los Angeles to within a hair’s breadth. In other words, this is equivalent to predicting the width of North America with the precision of plus or minus one human hair.] There’s no such thing in classical physics. But actually, there’s a parallel between classical physics and quantum mechanics. Classical mechanics has four or five different formalisms, quantum mechanics has four or five different formalisms. This is not valid for every formalism. For example, the Poisson bracket formalism of classical mechanics is almost the same as the formalism of quantum mechanics, with one difference. The general appearance of the equations is the same. 
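For the record, the correspondence being invoked here is Dirac's rule; this gloss is mine, not the speaker's. A classical Poisson bracket maps to a quantum commutator, \{A,B\} \rightarrow \tfrac{1}{i\hbar}[\hat A,\hat B], so that \{q,p\}=1 becomes [\hat q,\hat p]=i\hbar, and the classical equation of motion \dot A=\{A,H\} reappears as Heisenberg's \dot{\hat A}=\tfrac{1}{i\hbar}[\hat A,\hat H]. That is why the equations "look the same" while being read at a different level.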
To me, this looks like the following: in the Koran, they say Ibn Abbas gave a verse's hidden meaning by interpreting it differently. That's not what you understand when you read the verse. And I say, that's what the equation states, but you have to take it as a commutator. That is, there can be different approaches like that in reading the book of nature. There's actually a one-to-one correspondence, so you penetrate to a deeper level of meaning. But you can't logically prove one from the other. That is, you can't prove the equations of quantum mechanics starting from the equations of classical mechanics. You see the similarity, but there's no direct proof. That sounds like pattern recognition, doesn't it? That is, there's a form-al similarity. It's not just a morphological similarity. For instance, the values of the commutators are identical. So it's not only a matter of form. Give me the Poisson bracket of anything, I'll write down its quantum mechanical equivalent for you. This goes beyond form-al. I know the Poisson bracket of a hydrogen atom, of a harmonic oscillator, I can write down the corresponding equation in quantum mechanics, because of this similarity. And the results are phenomenal. This is a different meaning of "a book with twin verses" [the Koran], that is, they have dual meanings. [The book of the universe is here being compared to the Koran.] Taking the meaning of "verse" (ayah) as "sign" here… Of course, not as words, but as God's universe, God's signs. That is, there's a signifier in everything. In fact, there are even deeper meanings, and that happens in quantum field theory. Then you give a slightly different meaning. Now there are operators, and the things they operate on. If you assume commutation relations in the operated (operand), it becomes quantum field theory, that yields even more accurate results. In other words, there are nested meanings. Maybe that's the case with everything, I don't know. I'm saying this in terms of physics. But mathematics has an extraordinary role in our discovery of these. From the viewpoint of physics, however, not every mathematics is always useful. If the assumptions are valid, if you base your mathematics on those, the result is sensational. If the assumptions are wrong, nothing will come of it even if the math is correct. That is, mathematics is actually a kind of gardening. Seed, cultivation, result. If the seed is the seed of a thorn, no matter how well you cultivate, you won't get apples from it. Your seed simply has to be the right seed. And that seed is your assumptions. Why, for example, can't we reach a result in the case of entropy? We can't sow the right seed there, due to psychological reasons. That's our problem. So we continue to be surprised. "The unreasonable effectiveness of mathematics" is not unreasonable at all. Why should you be surprised about the mind of God? [Nothing lies beyond its ken.] You mean it's not so hard to pass from science to religion? You can pass to religion from anything, even from art. Perhaps you've heard of the joke: "I used to believe in no God, until I saw her. That's when my opinion changed." That is, such beauty can't be accidental. This art can't happen of itself. This rose doesn't grow of itself. This scent doesn't emerge by itself. This beauty, this intricate design, can't exist of itself. You don't have to be a physicist to understand this. Take any phenomenon. After you see the balance, the beauty there, you'll say, this can't happen on its own.
Of course, there’s the matter of faith here. Anything can be a cause of faith. But there’s also the verse: “Nobody can have faith unless God desires it” (10:100). Some come to faith easily, others just can’t. But if there has to be an occasion for it, it doesn’t have to be mathematics or physics. But mathematics and physics make it crystal clear. So does medicine. A doctor. If the diagnosis is wrong, you can’t heal no matter what the therapy is, right? But for the diagnosis to be correct, you have to have a firm grasp of the processes. And you have to know that nothing is accidental, you have to know the mechanisms, to be able to reach the right diagnosis. Feynman explains all this elegantly. There were two objections against Newton: 1. You mathematized physics, you made it specialized. 2. You didn’t explain how gravitation occurs, you called it “action at a distance.” This is magic, and it has to remain so. The sun attracts the earth. How does it do this? The mechanism isn’t described. This was also Einstein, Podolsky and Rosen’s (EPR) objection to quantum theory. Einstein opposed quantum entanglement on the grounds that it was “spooky action at a distance” (spukhafte Fernwirkungen). It was everyone’s objection. Einstein turns gravitation into the curvature of spacetime, that has problems of its own. For two hundred years, people tried to devise a mechanism for it. There isn’t any. According to Feynman, there’s no difference between saying that gravitation attracts and that “the angel of gravitation” performs the attraction, because we don’t know what it is. For example, how does a proton attract an electron? Via an “electric field.” These are just words. Are these empty concepts, or can they be filled with meaning? That’s what we have to look at. Except that in quantum field theory, there’s an exchange of photons. We call them “virtual photons.” This tosses a photon to that, and vice versa. That’s how attraction occurs. Mathematically, many nice things have emerged from this. In the weak interaction (weak nuclear force), there is an exchange of W and Z bosons instead of photons. And in the strong interaction (strong nuclear force), gluons are exchanged. All these are by analogy. But there’s nothing there. There is no impressive prediction. Those in the know don’t say it out loud, but they know and feel it in their hearts. Because the assumptions are wrong, nothing comes of it. It’s the same in every science. Science is an activity performed by humans, and human beings have egos. How did you pass from science to religion? From the intellect rather than from science. But science refines this further. You see the accuracy more clearly. Let’s say that a human with a mind, anyone intelligent enough, can comprehend that all this can’t happen by itself when s/he looks at these relationships, this order, this art. But when you go deeper into the relationships, you discover how finely tuned, how delicate, how highly ordered the relationships are, with such great precision, and that cements it. That’s the real contribution of science. For instance, a doctor. When a doctor goes into that, s/he begins to see things on more of a micro level. They see much deeper than you or I do. So what happens? That cements it. And the same in other places, as well. For instance, if the distance between the sun and the earth were not what it is, there would be no life on earth. There are a thousand things like that. 
These things wouldn’t be if the ratio between gravitation and the electromagnetic force were not what it is. You perceive that so many coincidences just can’t coincide by themselves. I actually found God before science, but science riveted it. For example, human beings, couples. There’s a man, a woman. God created them compatible with each other. From that union, a child is born. He gave affection so that that child could live, He created that environment. The male seed, the female seed, there’s an extraordinary design. At that stage, there’s no need to be a physicist to see this. What’s really important here is the patent. Once the factory is in place and working, things are a bit easier. I built up the shop, I left it to my child and went off. The child’s task is a bit easier. Forming it is more difficult. But if the children can’t take it forward, it’ll degenerate and get closed. “It happened by itself.” If so, why can’t I take it forth? Why can’t the child, who took it over in a ready state, take it forward? Therefore, it didn’t happen of itself. Now, this logic is all very clear, very simple. But you won’t see it if you don’t want to. That’s the real issue. One has to be blind to not see it. Or you have to have grown up blinded. For me, it’s impossible not to see. Now, it’s possible to pass to the concept of God from science or art or something else. But how do we go from the God concept to religion? There, a vehicle is needed. The mind sees: OK, there’s something here. Why is experiment important in physics? The mind can’t solve everything. Reason has to be tested against reality. Experience is more important. Sometimes we know something from experience, we construct its reasoning later. To understand religion completely, experience is very important. The phenomenon of prophethood. You can’t understand that with physics, with mathematics. The phenomenon of sainthood, you can’t understand it with the intellect. Our Prophet inspired such a sense of trust in everyone, but in spite of that, not everyone believed in him. Either that, or you have to be able to reach great conclusions from small experiences you live. You saw something in your dream, the next day it took place, it came true. This happened once, twice, three times, … There’s no place for this in science. Well then, hold on, friend, there’s something here that eludes your intellect. Now of course, this gives way to listening, to heeding. Why don’t literate people take religion seriously? Because they trust their own mind and do not listen. They don’t listen, they don’t feel the need to listen. First, they were raised that way. Second, they haven’t had experiences like that to astound them. Even if they have, they feel the immediate need to rationalize it. They bypass it. Otherwise, if only they were to start researching, the place to be reached is clear. There’s a world you don’t know,  a whole range of experiences you don’t know about. It’s all here. We call it the world of light. There’s the Realm of Power (Jabarut),  the Stage of Nondetermination (La ta’ayyun), right? If you ask when that was, they say it’s all simultaneous. That is, they’re all here, and they’re here according to the level of consciousness you’re in. You mean they’re not in any temporal sequence. They’re not. Not anywhere else either, they’re actually here [and now]. That doesn’t mean nobody sees them. And that’s our main error. For example, I study mathematics, but I don’t understand it. That doesn’t mean nobody understands it. 
Or, there’s going to be an earthquake, a dog hears it, I can’t hear it. In other words, there are things I can’t perceive. For example, elephants can hear a sound from a distance of ten kilometers. Its ears are designed that way. Its trunk is designed to emit that sound. That is, both its transmitter and its receiver are suited to the task. My ears and mouth haven’t been designed for that. So the sizes and frequencies tally. Because its wavelength is greater, its frequency is lower. I would be wrong to claim it doesn’t exist. This has also been said of vision: of the electromagnetic spectrum, we see only a tiny sliver. Of course, of course. Now they can photograph the same place in every spectrum. crabnebulaThis is used in science, it’s even used in daily life. A thing that can’t be seen at one frequency can be seen at another. Why didn’t this exist before? It wasn’t done until now because we said, this can’t be. In the infrared spectrum, you see something there that you don’t normally see. So we shouldn’t trust our own perceptions too much, just as we shouldn’t trust our intellect too much. This is also an ego problem. The stronger your sense of self, the more heedless you are, the more you trust yourself. And the greatest catastrophes occur because of that. It’s also true in daily life: you trust yourself too much, your company folds. And such like. Either you have to have nonordinary experiences, or you have to have experienced people by your side. They explain certain things to us. But of course, in order to understand these events, holistic concepts are needed. This makes comprehension even more difficult. Do we need to think holistically in order to understand religion? Religion [Islam] has its own kind of classical mechanics, that’s the Divine Law. It has its quantum mechanics, that’s Paths and Schools. For example, religion tells us, “Do this and this,” “Don’t do that and that.” These are things at the atomistic level. You have to do them yourself, you’re not exonerated if someone else does them. To understand other concepts, holistic things enter: “He who kills one person, kills entire humankind. He who saves one person, saves entire humankind” (5:32). Or, “Don’t gossip, you’ll put that person’s spirit in pain.” You find that you are no longer yourself, everything is interlocked, everything is connected with everything else. Holistic concepts are less well-understood, more delicate things. One reads them in one way, another in another. Like in classical physics versus quantum physics. The second taxes you from a holistic viewpoint, you understand with difficulty unless you’re used to it in terms of experience. If not, you shouldn’t deny, you shouldn’t take risks. That’s what the great Sufi saint Ibn Arabi says: “Even if you don’t believe, don’t deny.” Don’t say, How can this be? “What is in the universe, that is in man.” Don’t say this is impossible. You don’t have that, but don’t say nobody can have it, don’t take that risk. Now this is entirely holistic. Everything is in the human being. “In man there’s a mountain,” as the Master said. Well, I see no such thing? I can’t reconcile a mountain with a human being. Neither my intellect nor my spiritual condition are up to the task. I can’t understand quantum mechanics, either. Nothing in a high school student is ready for quantum mechanics. And those who understand aren’t entirely there either, but at least we agree that there’s truth in it. There’s a similar situation here. 
You can’t explain everything to everyone, because they won’t understand. Plus, maybe there’s nothing to be understood, only something to be experienced. Mind alone is not sufficient to discover religion. The mind that comprehends the existence of God is responsible religiously. In order to go beyond that, you need an extra grace from God. Belief in God is a must. For that, the mind is enough. But believing in religion, believing in the Prophet, is a grace from God. There’s a verse to the effect: Noah says, “I’m telling you these things, but they’re no use if God doesn’t wish it.”  As Joseph’s brothers are going on their second visit to him, their father Jacob says, “Enter through separate gates. But if God doesn’t desire it, it won’t work.” No matter what you do, it’ll make no difference. Now we don’t understand this. We don’t understand the will of God. The Master once said, “God scattered a light. It struck some and didn’t strike others.” We don’t know the reason why. In particular, faith in the Prophet rests with God. That is, it’s a very special grace, believing in him is very difficult. Because when you say “God,” you bow to a superior authority. But the Prophet? “Well, he’s human and so am I.” There, the ego enters at once. “He could only have been an ordinary man. The conditions then were such-and-such, he said this, he administered, he was wild,” in the end there’s nothing there. “There was a clever man,” you say. And with that, you miss a lot. You need a special favor to believe that our Prophet was very special, that he was very different, that he was “a mercy to the worlds.” There’s no other way. Or else, God has to have given you the aptitude to derive great conclusions from small experiences. Then it’s possible. The Master riveted this. I reached that faith only with difficulty: the Prophet is a prophet. But the Master riveted down that faith in place. Our Prophet is very special. Now, this is very hard to believe. He is the best locus of manifestation the world has ever known. To believe like that is very difficult. Why is that true? Because all the Names of God were manifested in him. There’s no need for someone else. Why is there no need for another Book? Everything is in it [the Koran], even if we can’t understand this. So it’s not necessary. Whereas with the others, it wasn’t like that. Now it’s hard to accept it like this for our mind-dominated human beings. The ego is strong. Even at birth, children are princes or queens. Those egos won’t bend when they grow up. Here, you need to bend. You need to believe that God gave a mind-boggling boon to someone other than you. But I’m the king… In the language of his state, he says, “If He were to give it to someone, He’d give it to me, I’m king.” But God favors some human beings. Now, we look at the Koran. What’s there that’s bad about it? It says: “Do good, don’t do evil, don’t harm your neighbor, don’t charge interest on money, don’t be a burden to others.” It counsels all that is good. It says, “Don’t hurt anyone.” It also defines what is good. It says “This is good, do it, that is bad, don’t do it.” Otherwise, goodness is a relative thing. Thieves think what they’re doing is good. And that is like abandoning your mind to mathematics. Before Newton, everyone had intelligence. They still do, but everyone does things according to their own lights. In science, you receive guidance from mathematics, in religion you receive guidance from the Koran. You have to have a reference. Otherwise, everyone has their own reference point. 
Take morality. Everyone’s ethics is good from their own standpoint. Why are saints necessary? They hold a mirror to you. They show you yourself, they make you know yourself. Otherwise, nobody is aware of themselves. The Master shows you your error with extraordinary finesse. These things are entirely beyond the ken of contemporary human beings, even conceptually. They can’t even conceive of them, they can’t even conceive what they’re missing. These university professors, these people who think they’re clever, they don’t even know what they’re missing. Meeting the Master, I regard as God’s grace. There’s no other explanation. That is, the mind is at sea here. Everybody’s smart. Many university professors are more intelligent than I am. So this can’t be solely a matter of intelligence, there’s something else. I’m not smarter than they are just because I was graced with the presence of the Master. I realized that the world is not as I thought it was. This left me shaken. From that I passed on to other things. I already had faith in God, I believed in the Prophet, too. Scientists need experiences that will stagger them, experiences that will shake their belief that they know everything. That’s the only way. Because these are matters of consciousness. In its essence, religion has to do with consciousness. You have to observe changes in your consciousness. You’ll realize then that things are different. There are different states of consciousness: your present state of consciousness, there’s hypnosis, there are different levels in hypnosis, there’s the consciousness of sleep, there’s dream consciousness, there’s lucid dream consciousness. Each is different than the other. And there are who-knows-what-other states of consciousness that I don’t know about. Would you define religion as consciousness alteration? Here’s how I view religion: religion is the process of becoming worthy of God by changing one’s morality. But as you alter your ethics, that has an impact on your consciousness. That’s of secondary importance. Being moral is more important than being in a different state of consciousness. The person whose ethics, whose character traits, are closer to the Prophet’s, that person is the winner. This is the primary criterion that I’ve come to understand in the long run. Morality is very important. For example, we read in the Koran: “I chose him for Myself.” This is about Moses: “I chose you for Myself.” And the same for Abraham: “God chose Abraham as His friend.” Many of Abraham’s morals, character traits, are recounted in the Koran: “Abraham was of mild-mannered mien.” It also tells what God looks at: “He looks at your heart.” “God loves these, God does not love those,” right? “God does not love misers,” “God loves the generous,” God has given all the codes. Those things all pertain to morality. It doesn’t say,  “God loves those who go to Mars in one leap.” It doesn’t say, “God loves those who do Spacefolding.” Nor does it mean that God doesn’t love those who do Spacefolding, but it’s important only in the second-third-fourth degree. It’s not important if it’s not there. The Koran states very clearly: “God loves these, God does not love those.” If we were to list these, that’s where religion is. Because this is a matter of love. The heart of religion is love. Justice, that’s the Divine Law. Conscience, that’s the Paths. Love is the Reality. [The reference here is to the Master’s pamphlet: “The Secret That is Love.”] The main task is love. In other words, He created human beings out of love. 
That’s how I understand it. He loves human beings very much. The Master stated that clearly: “God loved human beings very much.” (Teachings of A Perfect Master, p. 56.) The “Secret of Islam” is Love, nothing else. But if I remain at the level of a dog or some other animal, how is God going to love me? That is, religion is more a matter of changing one’s state of morality than of changing one’s state of consciousness. The focus is always on ethics. After the New Age philosophies, this all became: “Let’s change our state of consciousness.” But without a change in one’s state of morality, a permanent change in one’s state of consciousness can’t be obtained. You go up in a helicopter, five minutes later it comes down when it runs out of gas. For example, let’s get top grades in the exam. How? Let’s cheat. But the means are more important than the ends. To obtain those credentials legitimately. This is actually stated very clearly in the books of great Sufis. For instance, in the “Holy Bestowal” [by Abdulqader Geylani]. Worshipers: worship is very important. Scientists/scholars: knowledge is very important. The wise: the secret and maybe the state of consciousness are very important. But most important of all is the love of God. Then, the question becomes: “How can we attain that love?” And that’s not possible except by ethics, and that’s a very hard thing to do. If only our ethics were beautified by our saying so, my ethics would have improved long ago. No, that happens by suffering. By suffering hardships. It’s not easy for a rock to become earth. It happens in time, by suffering hardships. It happens by paying careful attention to principles. It happens by paying careful attention to the Prohibited and the Permitted. Religion is a matter of ethics, a matter of becoming worthy of God by this means. First things first. That’s what God wants. He says, “First fix your ethics, then come to Me.” Intelligence is also important in these matters. “Who has no mind has no religion.” There’s a Tradition of the Prophet. Someone said: “My friend is highly moral.” The Prophet asked: “How is his intelligence?” “Not that much.” “Then he can’t progress very far.” On the other hand, if you’re not straight inwardly, the more intelligent you are, the more harmful you are. But the Master posits courtesy. Why? Because courtesy is actually morality. Courtesy is the refined form of morality. If you want the Owner, you have to fix your ethics. At first, I didn’t understand that. I’m reading the Koran, it says “those who want Paradise,” but it also says “those who want God.” So there is such a concept as desiring God. What is this? It’s in the Koran. So some people desire God more than Paradise. [The Turkish Sufi poet] Yunus Emre said that, and the expression is in the verses of the Koran. But it’s hard to discern it there. He sang, “I need You and You alone.” It’s been said, “When God is present, neither heaven nor hell exist,” right? That is something amazing. Because we want to re-establish our severed link with God [re-ligio]. That’s our real quest. Heaven and hell pale in comparison. When you’re dealing with God, everything pales in comparison. Of course. Compared to infinity, every finite thing is zero.dyson1 It’s like this in our lives, too. How so? When our friends come visiting, we prepare a treat. But our friends don’t come for that bounty, they come for a reunion. The reunion is the important thing, not bounties or Paradise. Now suppose that some come for the food. Well, let them! Let no one remain hungry. 
But the main point is not the bounty. Paradise is a boon, a wonderful boon. But in the end, it's a boon. The phenomenon of Union is very different. What's important for us is Union, just as it is for God. I see this in Sufi writings. What God desires is Union. God created human beings for Himself. And He said: "Fix your ethics, and come." There's something that will put "blessings such as no eye has seen and no ear has heard" to shame. That must be what they mean by "the Truth of Certainty." You reach the highest level of proximity. Beyond "the Knowledge of Certainty" and "the Eye of Certainty." That's how we see the Master, he's at the level of the Truth of Certainty. We're going to perform the Prayer, we're going to Fast. But what does the Master say? "Even if your head doesn't rise from prostration, it won't happen without these." So it's a matter of ethics. Actually, this is religion: religion is the task of making yourself worthy of God. Can we achieve that? That's another matter entirely. But that's the purpose. We don't know if we can go to Mars, but that's our calling: to go to Mars. It's not a matter of knowledge, of consciousness. You can have those too, but there's a ranking in terms of importance. The important thing is to display praiseworthy conduct. A man rescues a kitten from the rain, that night he dreams that the Prophet is stroking his beard. So it pleased him. And what's pleasing to the Prophet is pleasing to God as well. He couldn't have dreamt that if he had spent that whole night in worship. Let him worship, by all means, but the thing is beauteous conduct. That is, God's pleasure, something that pleases Him. Mathematics is important because it represents the mind. Physics plus mathematics proves God's existence. For it is by mathematics that we best analyze nature. The root of the matter is there. Nothing is accidental. Everything is calculated, programmed, precise. And this is a very clear indicator of God's existence. If the seed is right, it will yield results. God attaches great importance to the intellect. If you have no mind, you're not responsible. Because you can deduce the existence of God based purely on reason. If you accept the Prophet too, that's awesome. And mathematics is important because it has become a means of discovery. But if your assumptions are wrong, mathematics won't help you. If they're correct, unexpected things can emerge from that. The mind, mathematics, and experiment have brought us to a place in three hundred years that we hadn't been able to reach in the previous three thousand. It's magnificent. Great scientists, and Dirac is one of them, have arrived at the point that from now on, we need to study consciousness. We don't know how to study it yet. The Sufi masters have been studying it for centuries. So where Dirac ends, the Sufi masters begin. Dirac arrived at that point. So did [Roger] Penrose. And that's where everyone will arrive, sooner or later. That's the point where the masters enter the loop. And then, you have to understand the importance of religion better. You have to perceive that religion is important, that morality is important, that things are not as you imagine them, that the intellect alone is not sufficient, in order to come to that door. A small protein may typically contain 100 amino acids, each of which can be any of 20 varieties. For example, the protein histone-4 has a chain of 102 amino acids.
The probability of even one small enzyme/protein molecule of 100 amino acids being arranged randomly in a useful (and hence, necessarily specific) sequence would be 1 part in 20^100 ≈ 10^130. For comparison, there are ~10^80 protons in the entire universe. Even the smallest catalytically active protein molecules of the living cell consist of at least a hundred amino acid residues, and they thus already possess more than 10^130 sequence alternatives. Getting a useful configuration of amino acids from the zillions of useless combinations is an exercise in futility. A primitive organism has about the same chance of arising by pure chance as a general textbook of biochemistry has of arising by the random mixing of a sufficient number of letters. And the moment you say that non-chance events are involved, such as the folding and fitting of molecules, you fall outside the field of randomness. You implicitly admit the presence of order. It appears that some people lack an adequate understanding of either the mathematical law of large numbers, or the physical law of entropy, or both. The law of large numbers (LLN) solidifies the expected probability or improbability of an event. If an event is improbable to begin with, an extremely large number of trials will only certify that improbability. Actually, the two are linked: "The law of the increase of entropy is guaranteed by the law of large numbers… order is an exception in a world of chance" (Hans Reichenbach, pp. 54-55), and the LLN is at the core of the second law of thermodynamics. It would be unfair to one of the great names in quantum physics, Erwin Schrödinger, if we were to neglect mention here of his monograph, What Is Life? (1944). There, he explicitly associated life with negative entropy, or "negentropy" for short. This also ties in with Information Theory: information is a measure of order, entropy is a measure of disorder, so information is the negative of entropy. The "randomists"—that's what I call people who try to explain the origin and development of life by random events occurring over eons—claim that there are highly improbable events which nevertheless occur every once in a while. For instance, winning the lottery is a highly improbable event, yet somebody does win the lottery. And getting a royal flush in a card game is an extremely improbable event, yet it does happen every now and then. Starting from such examples, they argue that highly improbable events can become possible, probable, and even actual, given billions of years. First, I should perhaps clarify that I'm not opposed to evolution as such. There's the fossil record and all that. Natural selection exists. Mutations are a fact of life. What I'm against is supposing that extremely highly ordered phenomena, such as we witness everywhere in life, can be the outcome of chance events. Order does not arise spontaneously out of disorder. [To be more explicit: directed evolution is a possibility, random evolution is not. Nature cannot produce blueprints that have not been encoded into it.] Now like I said, the reason for this can't be found in logic. Rather, it's psychological. Those who make this claim, the "randomists" as you've called them, are in a hypnotic state that makes you Godproof. They don't want to see. These people who impute the most important things to chance: observe them and you'll see, in their own lives they leave nothing to chance. Because deep in their hearts, they know that chance alone won't get you there.
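[A quick check of the arithmetic quoted above: 20^100 = 10^(100 × log10 20) = 10^(100 × 1.301) ≈ 10^130, while the number of protons in the observable universe is commonly estimated at roughly 10^80; both are standard order-of-magnitude figures.]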
The lottery is designed so that at least one person will win. And you need not one, but a run of at least a thousand consecutive royal flushes to even begin to approximate the complexity of life processes. You know Murphy's law. It says: "If anything can go wrong, it will." This is actually the law of entropy. And you need, not only intelligence, but also will, to counteract this. Consider a TV set. One component in the wrong place, and the device won't work. Now put all the components of a TV set in a sack and start shaking. Do you actually expect that after a sufficient number of shakes, they will all fall into the right place and the TV will assemble itself? First you need a plan, a blueprint. For that, intelligence is needed. And then, you need an iron will and constant, diligent supervision at every step of the way, to ensure that the thing actually gets done. Otherwise, it's hopeless. Without that, everything tends to disorder, as anyone who's ever accomplished anything knows firsthand. Let's say you're a Martian, and you see the Mars Rover moving about doing things. There's no human being around, there's nothing around, and yet it's doing those things. It seems to be doing everything by itself, but it's not. Someone has built it and is guiding it from millions of miles away. A chick lives and dies, but someone has to have programmed it, to have arranged it that way. We now have pilotless planes, but they were planned and developed over time. It didn't happen all of a sudden. That reminds me of what a friend once said about the "infinite monkey theorem," as it's called. There's even a jingle about it, which I can't resist quoting here:

There once was a brassy baboon
Who used to breathe down a bassoon
He said: "It appears,
in millions of years,
I'm certain to hit on a tune."

In its simplest form, the infinite monkey theorem states that a monkey randomly punching at the keys of a typewriter (or keyboard) will, given infinite time, type out the complete works of William Shakespeare, without a single error, punctuation marks included. This is one of the arguments set forth to support the idea of evolution by random mutation. Now this friend was a doctor, and he said this when he was a medical student, when they were studying the intricate workings of the human body. He said: "OK, I'll accept that a monkey can actually do that, given infinite time. What I cannot accept is that this human body, with its millions of processes going on simultaneously, can ever be the work of chance." How do those who deny that anything but randomness is at work manage to do so? They defer to infinite time. Because you can't test it. Or they invoke higher dimensions. You can't test that, either. Or they call it a "quantum jump" [punctuated equilibrium]. That is, they throw the issue into untestable territory. Feynman's principles here are great. I like his approach. He says no theory can ever be proven right. For it to be correct, it would have to pass an infinite series of experiments. A theory passes an experiment, that means it has passed that experiment, it has not yet been falsified. [The concept of falsifiability was developed by philosopher of science Karl Popper.] Today, there's the situation that when a theory doesn't conform with experimental facts, you go back and mathematically tweak the theory until it does, and hence you remove the possibility of falsifying it. And that's an illusion.
There’s a couplet by the famous Turkish Sufi poet, Niyazi Misri, that expresses all this in a nutshell: Nothing is more apparent than God He is hidden only to the eyeless. 3 comments on “Science, Mathematics, And Sufism 1. Dear Imran Khan, You have asked: >how do you find the two related, I mean physics to Sufism. The two are related through quantum mechanics. Not through its mathematics, but through the interpretation of that mathematics. Of course there have been various interpretations of QM, but one thing that is not in doubt is that QM is “holistic.” In the words of physicist David Bohm, it treats the world as an “undivided whole.” In the interview, it is said that it treats its scope of investigation as a “system.” A collection of fifty atoms or particles is not treated as some kind of sum of fifty separate atoms or particles, but as a single, indivisible system. For this reason, it is difficult to understand, because pictorial representation is not possible. In fact, the observer/subject and observed/object themselves constitute a single whole. Now Sufism, too, is holistic. In the Koran it says: “Who kills one innocent person (is like one who) has killed all humankind” (5:32). It treats all humanity as a single entity. This is a holistic worldview. And it has been articulated by the famous Sufi Ibn Arabi in particular. Sometimes he sounds as if he is talking about quantum physics. Though not widely known, Sufism’s and Ibn Arabi’s affinity with quantum physics has been noted by various researchers. Google “Ibn Arabi quantum physics” and you will find various examples of this. NOTE: Modern quantum field theory conceives of physical phenomena as fluctuations of the underlying quantum vacuum. A 2015 Physics Today article described the quantum vacuum as “a turbulent sea, roiling with waves…” This has its exact counterpart in Sufism, which hundreds of years ago conceived of phenomena as waves on the surface of a sea. “The best credo of all times is that of modern physics — that everything is an unbroken, undivided wholeness.” —Pir Vilayat Inayat Khan, echoing Ibn Arabi’s famous doctrine of the Unity of Being (wahdat al-wujûd). —Erwin Schrödinger 2. Rukhsan ul Haq on said: Dear Henry Bayman I read your articles with a lot of interest and they always give joyful insights into the wisdom of Islam and what I like about them is the modern langauge based mostly on physics. I am a theoretical physicist myself so they appeal to me in that vein as well… With lots of love Bangalore India 3. Rukhsan ul Haq on said: I have the privilege to have known you through articles and books available from your website. I feel blessed to have the opportunity to cherish the wisdom you share with us and which you have inherited directly from a Sufi master in Turkey. I am a theoretical physicist by profession and a Sufi at heart. So there is no wonder that your articles and writings resonate with me because I see that you present Sufi wisdom in a scientific idiom… I will always behold you with love in my heart. With best wishes and regards Rukhsan ul Haq Bangalore India
Strong-field dynamics

Our research focuses on the investigation and strong-field control of ultrafast electron dynamics in atoms and molecules, restructuring and dissociation dynamics in molecules, as well as on the invention of new methods for measurements on the attosecond time-scale and the generation of XUV and X-ray pulses. We apply intense few-cycle laser pulses with a fully characterized electric field and intense multi-color synthesized waveforms. Typical observables are energies, angles and momenta of photo-ions and photo-electrons, or XUV/X-ray photon spectra. Here we present some exemplary results.

1. Ionization of polyatomic molecules

We investigated the ejection of protons from a series of polyatomic hydrocarbon molecules (methane, ethylene, 1,3-butadiene, hexane) exposed to 27 fs laser pulses from our Titanium-Sapphire laser system. We found that the energies of the protons are surprisingly high – too high to be consistent with Coulomb explosions from typical molecular ionic charge states, see Figure 1. Using multi-particle coincidence imaging we were able to decompose the observed proton energy spectra into the contributions of individual fragmentation channels, see Figure 2. We could show that the molecules can completely fragment already at relatively low peak intensities of a few 10^14 W/cm^2, and that the protons are ejected in a concerted Coulomb explosion from unexpectedly high charge states. Our observations can be explained by enhanced ionization taking place at many C-H bonds in parallel – a thus far unreported, highly efficient type of ionization [Roither2011].

Figure 1: Measured proton energy spectra (left) and cutoff energies (right) for 1,3-butadiene and hexane recorded with linearly polarized laser pulses of different peak intensities from below 10^14 W/cm^2 [spectrum (1)] to slightly above 10^15 W/cm^2 [spectrum (6)]. The red squares and blue circles correspond to linearly and circularly polarized light, respectively.

We showed the mechanism for hydrocarbon molecules using coincidence momentum imaging, but we believe that such a molecular decomposition process should occur during the interaction of strong laser pulses with any polyatomic molecule, when the time scale of the intramolecular nuclear motion matches the laser pulse duration. In the particular case of hydrocarbon molecules, the very fast motion of hydrogen atoms in C-H bonds on the order of 10 fs can make this process very efficient even for the interaction with quite short pulses.

Figure 2: Decomposition of the total proton energy spectra (gray lines) for ethylene (a,b) and 1,3-butadiene (c,d) into the proton spectra of separate fragmentation channels (colored lines). The black line shows the sum of all individual spectra.
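To put the "surprisingly high" proton energies in perspective, the textbook two-body Coulomb-explosion estimate is useful; the bond lengths and charge states used below are purely illustrative assumptions, not values taken from the measurement (a minimal sketch in Python):

# Back-of-the-envelope Coulomb-explosion energy of two point charges.
# e^2/(4*pi*eps0) = 14.40 eV*Angstrom, so E = 14.40 * q1 * q2 / R  [eV].
def coulomb_energy_eV(q1, q2, r_angstrom):
    return 14.40 * q1 * q2 / r_angstrom

# Illustrative numbers: a proton separating from a singly charged partner at a
# C-H bond length of ~1.1 Angstrom releases only about 13 eV ...
print(coulomb_energy_eV(1, 1, 1.1))
# ... while separating from a residue of charge 4 at 2 Angstrom gives ~29 eV.
print(coulomb_energy_eV(1, 4, 2.0))

Since the proton is far lighter than its partner fragment, it carries essentially the full energy release in a two-body break-up, so cutoff energies well above the single-charge estimate are what points toward the unexpectedly high charge states invoked above.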
2. Attosecond electron wavepacket interferometry

Interferometry is a powerful technique providing access to the relative phase of interfering optical or matter waves. We experimentally and theoretically demonstrated [Xie2011] a self-referenced wavefunction retrieval of a valence electron wavepacket during its creation by strong-field ionization, based on a distinct separation of interferences arising on different time scales, see Figure 3. Our work showed that the measurement of sub-cycle electron wavepacket interference patterns can serve as a tool to assess structure and dynamics of the valence electron cloud in atoms and molecules on a sub-10-attosecond time scale.

Figure 3: Interferences of electron wavepackets created and driven by sculpted ω-2ω laser pulses with a relative phase φ between the two colors. A free electron born at time t_b reaches a final momentum along the light polarization direction given by the negative vector potential at the birth time, p = -A(t_b). Electron wavepackets that reach the same final momentum interfere. This is possible either for wavepackets created during different cycles [gray dots and lines] or for wavepackets created within the same cycle [blue dots and lines]. The former are separated in time by multiples of the fundamental cycle period T, giving rise to interference fringes separated by the photon energy ℏω = 2πℏ/T, referred to as ATI peaks or intercycle interferences. The latter result from sub-cycle time delays and lead to a modulation of the ATI peaks.

Interference patterns extracted from measured electron momentum spectra, shown in Figure 4, depend sensitively on the shape of the laser field cycle, which we control by varying the relative phase φ between the two colors that the sculpted laser field is composed of. The experimental spectra have been recorded by coincidence momentum imaging. The simulated spectra, also shown in Figure 4, result from a numerical solution of the time-dependent Schrödinger equation (TDSE) in three spatial dimensions. All spectra feature a strong ionization signal in the central stripe (|py| < 0.2 au) and weaker finger-like structures for |py| > 0.2 au.

Figure 4: Two-dimensional electron wavepacket interference patterns. (a)-(c) Measured interferograms extracted from electron momentum spectra for single ionization of helium atoms for various relative phases φ of a two-color laser field with its laser polarization direction along pz. The gray bars blank out regions where our detector has no resolution for electrons. (d)-(f) Solutions of the time-dependent Schrödinger equation for a single-cycle pulse for the same values of φ as in (a)-(c). In the lower half of (d) and (f) we also show the TDSE results for a multi-cycle pulse, for which ATI fringes appear.

The position and shape of the sub-cycle interference peaks depend sensitively on the shape of the laser field cycle, i.e. on φ. By contrast, the ATI peaks are created by interference of wavepackets released during different laser cycles; their positions thus reflect the periodicity T of the field, giving peaks equispaced in energy, independent of φ. Sub-cycle and ATI fringes can thus be clearly separated from each other by studying the variation of the longitudinal electron spectrum with φ, see Figure 5. The sub-cycle fringes appear as bow-like structures whose positions vary strongly with φ. The strong asymmetry of the spectra about pz = 0, which is a further consequence of the two-color field, allows them to be detected well apart from low-energy resonances and leads to a much broader spectral detection range and therewith to a strongly enhanced useful temporal probe window. Maximum fringe spacing and highest momenta are reached for φ = (0.5+n)π. In contrast, the position of the ATI peaks is independent of φ and determined only by T (or, equivalently, ω).
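The mapping from ionization ("birth") time to final drift momentum quoted in the caption of Figure 3, p = -A(t_b), is straightforward to tabulate for an ω-2ω field; the wavelength, amplitudes and relative phase below are illustrative assumptions only (a sketch in Python, atomic units):

import numpy as np

omega = 0.057          # fundamental frequency, ~800 nm (assumed)
E1, E2 = 0.05, 0.025   # two-color field amplitudes (assumed)
phi = 0.5 * np.pi      # relative phase between the colors

# Vector potential of the sculpted field E(t) = E1*cos(w t) + E2*cos(2w t + phi)
t = np.linspace(0.0, 2.0 * np.pi / omega, 2000)        # one fundamental cycle
A = -(E1 / omega) * np.sin(omega * t) \
    - (E2 / (2.0 * omega)) * np.sin(2.0 * omega * t + phi)
p_final = -A    # drift momentum as a function of the birth time t

# Two birth times within the same cycle that give the same p_final produce the
# sub-cycle interference discussed above; birth times a full period T apart
# give the ATI (intercycle) fringes spaced by the photon energy.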
Figure 5: Control of sub-cycle interference patterns with a sculpted laser field. Interference fringes extracted from the measured ion momentum spectra as a function of the longitudinal momentum pz and the relative phase φ. ATI peaks are independent of φ and form straight lines at fixed pz. By contrast, sub-cycle interferences are strongly dependent on φ and form bow-like structures.

3. Molecular restructuring and proton migration dynamics

Polyatomic molecules subject to strong laser pulses are exposed to electric field strengths that are comparable to, or may even exceed, the intra-molecular Coulomb binding fields. As a consequence, the molecules can become singly or multiply ionized during their interaction with the laser field. The details of the accompanying field-driven internal electronic dynamics are still far from being understood and strongly depend on the parameters of the laser field as well as on the electronic structure of the molecules. After the removal of electrons, not only will the charge density redistribute very quickly within the molecule, but the molecule itself may also undergo severe structural deformation. A very interesting restructuring process is the migration of hydrogen atoms or protons. Protons are known to play a special role in polyatomic molecules, since their dynamics take place on a timescale that lies between that of the sub-femtosecond motion of the electrons and that of the other moieties, which, due to their much larger mass, is slower by at least an order of magnitude. Eventually, after or during the geometric restructuring and migration processes, the multiply charged complex may break into two or several charged fragments that are driven apart by Coulomb repulsion, and the excess molecular potential energy is released into kinetic energy of the resulting set of final fragment ions.

In this project we investigate the dynamics of laser-induced intra-molecular proton migration and large-scale molecular restructuring prior to fragmentation [Xu2010a, Zhang2011]. The goal is to move from observation to control by steadily improving our understanding of the molecular response to the laser field. Using coincidence momentum imaging it is possible to selectively investigate the momentum correlation between certain moieties in a given fragmentation reaction and therewith to reveal the break-up dynamics, see Figure 6.

Figure 6: Experimentally obtained momentum correlation map of two ionic fragments in a three-body break-up reaction of 1,3-butadiene induced by a 25 fs laser pulse with an intensity of 1.5×10^14 W/cm^2. For the fragmentation channel shown here the molecule breaks along the center bond and additionally a proton is ejected from one of the two fragments. By using the momentum correlation it is possible to show that events in region A are created by fragmentation dynamics in which the proton is ejected first and the center bond is broken afterwards, and events in region B are created when the center bond breaks first and a proton is ejected afterwards [Zhang2011].

From the measured momentum correlation maps we can numerically reconstruct the position of the proton prior to Coulomb explosion, see Figure 7. These proton maps revealed that not only one but also two protons can migrate to the same molecular site prior to molecular fragmentation [Xu2010b], see the explanation in the figure caption.
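Coincidence momentum imaging rests on the fact that the momenta of all fragments of a single molecule must add up to (nearly) zero; a minimal sketch of that bookkeeping is given below with invented numbers. The actual reconstruction of the proton positions in Figure 7 involves more than this and additionally assumes concerted Coulomb-explosion dynamics.

import numpy as np

# Invented fragment momenta (atomic units) for a three-body break-up.
p_frag1 = np.array([ 60.0,  10.0, -5.0])   # heavy moiety 1 (assumed)
p_frag2 = np.array([-35.0, -18.0,  2.0])   # heavy moiety 2 (assumed)

# Momentum conservation fixes the momentum of the third fragment (the proton);
# in the experiment all three are measured and this condition is used to
# reject false coincidences.
p_proton = -(p_frag1 + p_frag2)

m_proton = 1836.0                                      # proton mass in a.u.
ker_eV = np.sum(p_proton**2) / (2.0 * m_proton) * 27.211
print(p_proton, ker_eV)   # about 5 eV of proton kinetic energy for these numbers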
Figure 7: Intra-molecular spatial distributions of the protons as numerically reconstructed from the measured momentum values of the three recorded fragment moieties, using an algorithm that assumes concerted fragmentation dynamics. (a)-(c) show the results of the numerical reconstruction for 3 different fragmentation channels as indicated in the figure. The position of the two heavy moieties is depicted below each panel. The two dominant regions, where the protons are situated left and right of the molecule's center of mass, are labeled Ai and Bi, where i = 1..3 indicates the 3 identified fragmentation channels. The region denoted A2 is special, since it indicates the ejection of a proton from CH3+. This is remarkable, as the appearance of this moiety with its 3 hydrogen atoms already indicates the "capture" of an additional hydrogen atom or proton. In turn, this means that 2 protons (or hydrogen atoms) must have migrated to this molecular site prior to Coulomb explosion [Xu2010b, Zhang2011] – a process that our experiments revealed for the first time.

[Roither2011] S. Roither, X. Xie, D. Kartashov, L. Zhang, M. Schöffler, H. Xu, A. Iwasaki, T. Okino, K. Yamanouchi, A. Baltuska, and M. Kitzler, High Energy Proton Ejection from Hydrocarbon Molecules Driven by Highly Efficient Field Ionization, Physical Review Letters 106, 163001 (2011).
[Xie2011] X. Xie, S. Roither, D. Kartashov, E. Persson, D. G. Arbó, Li Zhang, S. Gräfe, M. Schöffler, J. Burgdörfer, A. Baltuska, and M. Kitzler, Attosecond probe of valence electron wavepackets by sub-cycle sculpted laser fields, submitted (2011).
[Xu2010a] H. Xu, T. Okino, K. Nakai, K. Yamanouchi, S. Roither, X. Xie, D. Kartashov, M. Schöffler, A. Baltuska, and M. Kitzler, Hydrogen migration and C–C bond breaking in 1,3-butadiene in intense laser fields studied by coincidence momentum imaging, Chemical Physics Letters 484, 119-123 (2010).
[Xu2010b] H. Xu, T. Okino, K. Nakai, K. Yamanouchi, S. Roither, X. Xie, D. Kartashov, L. Zhang, A. Baltuska, and M. Kitzler, Two-proton migration in 1,3-butadiene in intense laser fields, Physical Chemistry Chemical Physics 12, 12939-12942 (2010).
[Zhang2011] Li Zhang, S. Roither, X. Xie, D. Kartashov, M. Schöffler, H. Xu, A. Iwasaki, S. Gräfe, T. Okino, K. Yamanouchi, A. Baltuska, and M. Kitzler, Path-selective investigation of intense laser pulse-induced fragmentation dynamics in triply charged 1,3-butadiene, submitted (2011).

Last update Dec. 5, 2011
Density-functional based tight-binding: an approximate DFT method

Augusto F. Oliveira (I, II), Gotthard Seifert (II), Thomas Heine (III), Hélio A. Duarte (I)

I Departamento de Química, Instituto de Ciências Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, 31270-901 Belo Horizonte-MG, Brazil
II Physikalische Chemie, Technische Universität Dresden, Mommsenstr. 13, D-01062 Dresden, Germany
III School of Engineering and Sciences, Jacobs University, P.O. Box 750 561, 28725 Bremen, Germany

The DFTB method, as well as its self-consistent charge corrected variant SCC-DFTB, has widened the range of applications of fundamentally well established theoretical tools. As an approximate density-functional method, DFTB holds nearly the same accuracy, but at much lower computational costs, allowing investigation of the electronic structure of large systems which cannot be explored with conventional ab initio methods. In the present paper the fundaments of DFTB and SCC-DFTB and the inclusion of London dispersion forces are reviewed. In order to show an example of the DFTB applicability, the zwitterionic equilibrium of glycine in aqueous solution is investigated by molecular-dynamics simulation using a dispersion-corrected SCC-DFTB Hamiltonian and a periodic box containing 129 water molecules, in a purely quantum-mechanical approach.

Keywords: DFT, DFTB, SCC, glycine, zwitterion
1. Introduction

Density functional theory (DFT) methods are the standard and the most used theoretical techniques for electronic structure calculations.1-5 The advent of the generalized gradient approximation (GGA) for the exchange-correlation functional enhanced the DFT accuracy,6 and the predicted molecular structures, relative energies and frequencies are nearly comparable to the second-order Møller-Plesset perturbation theory (MP2) method, with remarkable success in treating transition metal complexes.7 Efficient algorithms to solve the Kohn-Sham equations have been implemented, scaling as N^3 with respect to the size of the basis set and, hence, being much more efficient than the N^5 scaling of the MP2 methods. DFT is the method chosen for a huge range of applications. The formalism of DFT and its extension to reactivity indices are the subject of intensive research, and many empirical concepts such as electronegativity, chemical potential and hardness are now formally defined within the DFT framework.3,5,8-10 With respect to the methodology, developments concerning improved exchange-correlation functionals and hybrid quantum-mechanics/molecular-mechanics (QM/MM) methods are still the main subjects of research of many theoreticians.11 Chemical property estimates based on DFT are now well established, and even optical properties are accessible through the generalization to time-dependent DFT,7,12,13 a method which is nowadays implemented in many different computer codes. Notwithstanding the marvelous ability of DFT to treat systems of increasing complexity, many systems are still intractable at the current stage of computer technology development. Biosystems, adsorption processes, nanostructures, molecular dynamics, clusters and aggregates with thousands of atoms, self-assembling systems, nanoreactors and supramolecular chemistry are some of the fields in which ab initio methods cannot be used with adequate chemical models. For this range of systems, semi-empirical methods seem to have their applicability. Semi-empirical methods such as AM1,14 PM3,15-17 and, more recently, RM1,18 have many empirical parameters that are fitted to a set of molecular properties, estimated either theoretically or experimentally. Therefore, the applicability of such methods is restricted. Density-functional tight-binding (DFTB) is an approximate method based on the density-functional framework which does not require a large number of empirical parameters. The virtues and weaknesses of DFTB are a heritage from DFT. In fact, the parameters are consistently obtained from DFT calculations of a few molecules per pair of atom types. On the other hand, DFTB is closely connected to the tight-binding methods. In fact, it can be seen as a non-orthogonal tight-binding method parameterized from DFT. The self-consistent charge extension of DFTB (SCC-DFTB) greatly improves the accuracy of the method. For improvement of the physical approximations, all DFT extensions, such as the treatment of relativistic effects and London dispersion, can be easily used in the DFTB method. A large number of applications have been reported showing its usefulness in the calculation of hyperfine coupling constants, magnetic properties, vibrational spectra of solids and molecules, nuclear magnetic shielding tensors, geometries, dynamic properties and many others.19-24 Calculation of optical properties is also possible due to the time-dependent DFTB,25-27 which is not covered in the present paper.
The goal of the present review is to call the attention of the chemistry community to the DFTB method, which can be a good complement of the set of semi-empirical methods available. Its advantages and weaknesses are highlighted. As an example of application, the zwitterionic and neutral forms of glycine in aqueous solution are discussed in terms of fully quantum mechanical molecular dynamics of this molecule in water. 2. Background Fundaments Density functional theory has been extensively reviewed.7,28 In this section a very brief review of DFT is done in order to highlight its crucial aspects to the formulation of the DFTB method. The Hohenberg-Kohn (HK) theorems29 have rigorously made the electronic density acceptable as basic variable to electronic-structure calculations. However, development of practical DFT methods only became relevant after W. Kohn and L. J. Sham published their famous set of equations: the so-called Kohn-Sham (KS) equations.30 The use of the electronic density within the KS scheme allows a significant reduction of the computational demand involved in quantum calculations. Furthermore, the KS method paved the way for studying systems that could not be investigated by conventional ab initio methods (which use the wave function as basic variable). Even though DFT methods have been successfully applied for systems of increasing complexity, methods which can include approximations to reduce even more the computational demand, without compromising the reliability of results, are still required. The application of tight-binding (TB) to the calculation of electronic structures starts with the paper by J. Slater and G. Koster.31 The main idea behind this method is to describe the Hamiltonian eigenstates with an atomic-like basis set and replace the Hamiltonian with a parameterized Hamiltonian matrix whose elements depend only on the internuclear distances (this requires the integrals of more than two centers to be neglected) and orbital symmetries. Although the Slater–Koster method was conceived for the calculation of band structures in periodic systems, it was later generalized to an atomistic model, capable of treating finite systems as well. The transition to atomistic has three main requirements, as discussed by Goringe et al.32 First, the elements of the Hamiltonian matrix must have a functional dependence on the interatomic distance. In the case of band structures one just has to know the matrix elements for discrete values of distance. This requirement was solved by Froyen and Harrison,33 who proposed that the interatomic distance was related to the Hamiltonian elements by 1/r2. The second requirement is to obtain an expression for the total energy and not only for the band energy. In 1979 Chadi34 proposed that the total energy could be described as a sum of two contributions, where Ebnd is the sum over the energies of all occupied orbitals obtained by diagonalization of the parameterized Hamiltonian matrix, and Erep is the repulsive contribution, obtained by the sum of the atomic-pair terms, in which N is the number of atoms in the system. The third and last requirement is the possibility to derive the atomic forces from the total energy. This is especially important for geometry optimization and molecular dynamics. By assuming differentiability of Uαβ in equation 2, the only problem is to derive Ebnd, which depends on the parameterization method chosen for the Hamiltonian matrix. 
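The two-term energy and the pair-wise repulsion described in the preceding paragraph follow the standard Chadi form; as a reconstruction from the surrounding text (the paper's own equations 1 and 2 may use slightly different notation), they read

E_{tot} = E_{bnd} + E_{rep}, \qquad E_{bnd} = \sum_i^{occ} n_i\,\varepsilon_i, \qquad E_{rep} = \frac{1}{2}\sum_{\alpha \neq \beta}^{N} U_{\alpha\beta}\!\left( |\vec R_\alpha - \vec R_\beta| \right),

with \varepsilon_i the eigenvalues of the parameterized Hamiltonian matrix, n_i their occupations, and U_{\alpha\beta} the short-ranged atomic-pair potentials.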
The DFTB method fulfils these three requirements with the additional advantage of completely avoiding empirical parameterization, since the Hamiltonian and overlap matrices are calculated using atom-like valence orbitals derived from DFT. The DFTB method can therefore be considered a simplification of the Kohn-Sham method. 3. The Kohn-Sham Method Although the Hohenberg and Kohn theorems29 proved that the electronic energy of a system can be determined entirely from its electronic density through the variational principle, they did not propose any procedure to perform this calculation. This was done about one year later by Kohn and Sham,30 with the publication of the equations now known as the Kohn-Sham equations. The solution of Kohn and Sham starts from the idea of using monoelectronic orbitals to calculate the kinetic energy in a simple, yet reasonably precise, way, leaving a residual correction to be calculated separately. One thus starts with a reference system of M non-interacting electrons subjected to an external potential νS, with a Hamiltonian that contains no electron-electron repulsion terms and whose electronic density is exactly the same as that of the corresponding system of interacting electrons. By introducing the single-particle orbitals ψi, all electronic densities physically acceptable for the system of non-interacting electrons can be written in the form of equation 5. The HK functional can then be written as in equation 6, where TS represents the kinetic-energy functional of the reference system of M non-interacting electrons (equation 7), J represents the classical Coulomb interaction functional, and the remaining interactions are grouped in Exc, the exchange-correlation functional, which contains the difference between the exact kinetic energy T and TS as well as the non-classical part of the electron-electron interactions Vee (equation 8). After combining equations 6, 7 and 8 within the second HK theorem, the chemical potential can be written as equation 10, with the KS effective potential νKS of equation 11, where νext is the external potential, normally due to the atomic nuclei, and the exchange-correlation potential νxc is defined in equation 12. Equation 10, restricted by ∫ρ(r)dr = M, is exactly the equation that would be obtained for a system of M non-interacting electrons submitted to the external potential νS = νKS. Thus, for a given νKS, a ρ satisfying equation 10 can be obtained by solving the M monoelectronic equations (equation 13) and using the calculated ψi in equation 5. Equations 5 and 11-13 are the so-called Kohn-Sham equations. Since νKS depends on ρ through νxc, the KS equations must be solved iteratively using a self-consistent procedure similar to the one depicted in Figure 1. An electronic density model ρ0 is normally chosen to start the iterative procedure. In principle, any positive function normalized to the number of electrons would do, but a good initial estimate of ρ can significantly accelerate convergence. At the end of the iterative procedure the total energy, given in the KS method by equation 14, can be calculated. The most difficult part of the KS scheme is to calculate νxc in equation 12. The existence of an exact density functional is assured by the first HK theorem, but the exact form of the Exc functional remains unknown. However, many approximations to this functional have been described in the scientific literature over the last 30 years.
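To make the last step of this procedure concrete, the following Python sketch solves the monoelectronic equations for a toy one-dimensional system of non-interacting electrons on a finite-difference grid and builds the density of equation 5 from the doubly occupied orbitals. The harmonic external potential and all numerical values are illustrative assumptions only, and no exchange-correlation term is included, so no self-consistency cycle is needed in this toy case.

import numpy as np

# Toy illustration: M non-interacting electrons in 1D under an external potential.
# Solve the monoelectronic equations on a finite-difference grid and build the
# density rho(x) = sum_i f_i |psi_i(x)|^2 from the occupied orbitals (cf. equation 5).
N, L, M = 400, 10.0, 4                      # grid points, box length, electrons (illustrative)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                           # illustrative external potential (atomic units)

# Kinetic energy by central finite differences: -(1/2) d^2/dx^2
T = (np.diag(np.full(N, 1.0 / dx**2))
     - 0.5 * np.diag(np.full(N - 1, 1.0 / dx**2), 1)
     - 0.5 * np.diag(np.full(N - 1, 1.0 / dx**2), -1))
H = T + np.diag(v_ext)

eps, psi = np.linalg.eigh(H)                 # one-electron orbitals and energies
psi /= np.sqrt(dx)                           # normalize so that sum |psi|^2 dx = 1

rho = np.zeros(N)
for i in range(M // 2):                      # doubly occupy the lowest orbitals
    rho += 2.0 * psi[:, i] ** 2

print(np.trapz(rho, x))                      # integrates to M, as required

The printed integral of the density recovers the number of electrons M, which is precisely the constraint ∫ρ(r)dr = M mentioned above.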
In practice, the approximation chosen for Exc and the way in which the KS orbitals are represented define the different DFT methods. 4. DFT as Basis for a Tight-Binding Method Following Foulkes and Haydock,35 the electronic density is written as a reference density ρ0 plus a small fluctuation δρ (equation 15). This electronic density is then inserted into equation 14, giving equation 16, where ρ0' = ρ0(r') and δρ' = δρ(r') are used as short-hand notations. The second term in equation 16 corrects the double counting in the Coulomb term; the third term corrects the new exchange-correlation contribution; and the fourth term results from splitting the Coulomb energy into one part related to ρ0 and another related to δρ. Enn is the nuclear repulsion. Next, Exc[ρ0 + δρ] is expanded in a Taylor series up to the second-order term (equation 17). Substitution of equation 17 into 16 and use of the definition (δExc/δρ)ρ0 = νxc[ρ0] results in equation 18. From equation 18 it is possible to define four important terms. The first is a reference Hamiltonian Ĥ0 that depends only on ρ0. The sum in the first line of equation 18 is analogous to Ebnd in equation 1. The terms in the second line of equation 18 define the repulsive contribution Erep. Finally, the last term in equation 18 contains the corrections related to the fluctuations of the electronic density; this term is denoted E2nd. Therefore, equation 18 can be rewritten as equation 22. In order to obtain a good estimate of the reference electronic density, ρ0 is written as a superposition of atom-like densities ρ0α centered on the nuclei α (equation 23). With this approximation it is assured that Erep does not depend on the electronic-density fluctuations. Furthermore, owing to the neutrality of the atomic densities ρ0α, the Coulomb contributions become negligible at long distances, and Erep can be expanded in few-center contributions (equation 24). The contributions of three and more centers are rather small and can be neglected. These approximations can also be justified by Coulomb screening, i.e., since ρ0α is the electronic density of a neutral atom, the electron-electron interaction terms with more than two centers are canceled by the nucleus-nucleus interactions. Due to this screening of terms with more than two centers, the two-center contributions can be assumed to be short ranged. However, the repulsion energy does not decay to zero for long interatomic distances; instead, it decays to a constant value given by the atomic contributions (equation 25). The assumption of equation 26 is therefore made in order to make Erep depend only on two-center contributions. Although it would be possible to calculate Erep directly for known atomic densities, it is more convenient to adjust Erep to ab initio results. Thus, Erep is fitted to the difference between the DFT energy and Ebnd as a function of the interatomic distance Rαβ using a suitable reference structure (equation 27). The value of Ebnd can be obtained by diagonalization of the Hamiltonian matrix (equation 28). The value of Erep is usually fitted to a polynomial function or to a series of splines. Typical plots of EDFT, Ebnd and Erep for a reference structure are shown in Figure 2. Based on the considerations discussed so far, the DFTB model can be derived. 5. The Standard DFTB Model without Self-Consistency In the standard DFTB scheme, the second-order correction term E2nd of equation 22 is neglected. Therefore, the total energy does not depend on the electronic-density fluctuations δρ and, accordingly, does not have to be obtained iteratively. In DFTB the KS orbitals are represented by a linear combination of atomic orbitals (LCAO) centered on the nuclei.
Denoting the basis functions by φν and the expansion coefficients by ciν, one can write the KS orbitals in the LCAO form of equation 29. From this LCAO model one obtains the secular problem of equation 30, where the elements Hµν of the Hamiltonian matrix and Sµν of the overlap matrix are defined in equation 31. The second term of equation 22 can be transformed, with equations 29 and 11, into equation 32, in which the elements of the density matrix P are defined from the occupied orbitals. In order to restrict the LCAO to valence orbitals only, it is necessary to ensure the orthogonality of the basis functions with respect to the core basis functions of the remaining atoms (by using atomic orbitals as basis functions, the orthogonality between core and valence functions within the same atom is already assured). Denoting by |φ⟩ a non-orthogonalized basis function and by |φcβ⟩ the core basis functions of atom β, the corresponding orthogonalized basis function is obtained by projecting out the core components. With this orthogonalization procedure, equation 32 is transformed into equation 35, where εcβ denotes the eigenvalue of the core state c of atom β. The effective potential νKS and the core correction in equation 35 can be interpreted as a pseudo-potential (Vpp). Writing νKS as a sum of potentials Vα centered on the atoms and using this definition in equation 35, the effective potential is transformed into a pseudo-potential for all atoms of the system except those to which φµ and φν belong. The pseudo-potential therefore appears in the three-center terms and in the two-center terms whose valence orbitals belong to the same atom (the so-called crystal-field terms). The pseudo-potential contributions are considerably smaller than the contributions of the full potentials and are neglected. Thus, the Hamiltonian matrix elements are defined as in equation 37, where δαβ is the Kronecker delta. This approach, the potential superposition, has been used since the 1980s for the calculation of DFTB parameters. In 1998, Elstner et al.36 presented an alternative approach to derive the DFTB equations through a second-order expansion of the DFT total energy with respect to the electron density. As a result, the Hamiltonian matrix elements are calculated with a density superposition, which is identical to equation 37 except for the contribution of the exchange-correlation potential. Indeed, due to the non-linear nature of νxc, the effective potential cannot be described as a simple sum of reference potentials within this approach; instead one obtains equation 38. Both approaches are physically motivated and their results are similar, which is not surprising if the potential difference between equations 37 and 38 is explicitly calculated. Both have been used extensively in the past, the potential superposition being more popular for standard DFTB calculations and the density superposition more widely used for SCC-DFTB. The φν basis functions and the reference atom-like densities are obtained by solving the Schrödinger equation for the free atom within a self-consistent DFT method, as shown in Figure 1. The contraction potential (r/r0)2 in equation 39 constrains the wave functions, resulting in better basis sets for the study of condensed-phase systems as well as free molecules. The value of the parameter r0 is normally chosen between 1.85rcov and 2rcov, with rcov being the atomic covalent radius.37 In practice, the Hamiltonian matrix elements are calculated as follows. For the diagonal elements the energy level of the free atom is chosen, which ensures correct dissociation limits.
Due to the orthogonality of the basis functions, the off-diagonal elements of the intra-atomic blocks are exactly zero. The interatomic blocks are computed as given in equation 37 or 38, depending on the choice of potential generation. Within the density-superposition approach the Hamiltonian matrix elements unfold as in equation 40. It should be noted that the Hamiltonian elements depend only on atoms α and β; therefore, only the two-center matrix elements are explicitly calculated, as well as the two-center elements of the overlap matrix. According to equation 40 the free-atom eigenvalues form the diagonal of the Hamiltonian matrix, which assures the correct limit for free atoms. Using the basis functions φν and the reference atomic densities (or potentials), the Hamiltonian and overlap matrix elements can be calculated and tabulated as functions of the distance between atomic pairs. Thus, it is not necessary to recalculate any integrals during, e.g., a geometry optimization or a molecular dynamics simulation. Finally, an analytic expression for the atomic forces can be derived by differentiating the total energy with respect to the atomic coordinates. With this, the DFTB method covers all three requirements for an atomistic tight-binding approach. 6. The Self-Consistent Charge Correction: SCC-DFTB The non-self-consistent DFTB scheme described so far is well suited to systems in which the polyatomic electronic density can be represented as a sum of atom-like densities, i.e. homonuclear covalent systems or highly ionic systems. However, the uncertainties of standard DFTB increase when the chemical bonds of the system are controlled by a more delicate charge balance between atoms, especially in heteronuclear molecules and polar semiconductors. In order to obtain a better description of such electronic systems and a better transferability of DFTB in cases where long-range Coulomb interactions are significant, the method has been improved, giving rise to the self-consistent charge DFTB (SCC-DFTB).36 In this scheme, the electronic density is corrected through inclusion of the second-order contributions E2nd of equation 22, which are neglected in standard DFTB. In order to include the density fluctuations in a simple yet efficient way compatible with a tight-binding approach, δρ is written as a superposition of atom-like contributions δρα, which decay rapidly with the distance from the corresponding atomic center, and the atom-like contributions are simplified with the monopole approximation. Here Δqα is the Mulliken charge fluctuation, the difference between the atomic Mulliken population qα38 and the number of valence electrons of the neutral free atom; the remaining factor denotes the normalized radial dependence of the density fluctuation on atom α, whose angular dependence is approximated as spherical through the angular function Y00. In other words, the effects of charge transfer are included, but changes in the shape of the electronic density are neglected. Equation 21 then becomes equation 44, in which the notation γαβ was introduced merely for convenience. In order to solve equation 44, γαβ must be analyzed.
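Before analysing γαβ, it is useful to note how little numerical machinery a single non-self-consistent DFTB step actually requires once H and S have been assembled from the tabulated two-center integrals: one generalized eigenvalue problem (equation 30) plus, for the SCC extension, a Mulliken population analysis. The Python sketch below illustrates this with an arbitrary two-orbital model; the matrices, function names and numbers are illustrative assumptions, not values from any actual parameter set.

import numpy as np
from scipy.linalg import eigh

def dftb_step(H, S, basis_atom, n_valence, n_electrons):
    # H, S       : Hamiltonian/overlap matrices from tabulated two-center integrals
    # basis_atom : atom index of each basis function
    # n_valence  : valence electrons of each neutral free atom
    # Returns the band energy E_bnd and the Mulliken charge fluctuations Delta q.
    eps, C = eigh(H, S)                       # secular problem, equation 30
    occ = np.zeros(len(eps))
    occ[: n_electrons // 2] = 2.0             # closed-shell filling of the lowest orbitals
    e_bnd = np.sum(occ * eps)

    P = (C * occ) @ C.T                       # density matrix from the occupied orbitals
    PS = P @ S
    q = np.zeros(len(n_valence))
    for mu, a in enumerate(basis_atom):
        q[a] += PS[mu, mu]                    # gross Mulliken population of atom a
    return e_bnd, q - np.asarray(n_valence, dtype=float)

# Illustrative homonuclear dimer: one s orbital per atom, 2 valence electrons.
H = np.array([[-0.5, -0.35], [-0.35, -0.5]])
S = np.array([[1.0, 0.25], [0.25, 1.0]])
print(dftb_step(H, S, basis_atom=[0, 1], n_valence=[1, 1], n_electrons=2))

The γαβ kernel that couples these charge fluctuations in equation 44 is analysed next.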
In the limiting case where the interatomic separation is very large (|Rα − Rβ| = |r − r'| → ∞), one finds within GGA-DFT that the exchange-correlation term goes to zero and γαβ describes the interaction of two normalized spherical electronic densities, basically reducing to 1/|Rα − Rβ| (equation 45). In the opposite case, in which the interatomic distance tends to zero (|Rα − Rβ| = |r − r'| → 0), γαβ describes the electron-electron interaction within atom α and can be related to the chemical hardness ηα,39 or Hubbard parameter: γαα = 2ηα = Uα. Typically, the atomic hardness can be calculated from the difference between the ionization potential Iα and the electron affinity Aα of atom α: 2ηα = Iα − Aα. Due to practical problems, in particular the non-existence of various anions and the correspondingly missing experimental values of the electron affinity of those elements, it is more convenient to obtain these parameters from DFT. Application of Janak's theorem40 relates the atomic hardness to the derivative of the HOMO energy with respect to the occupation number of the HOMO, and hence to the energy change upon changing the number of electrons in the HOMO. This approach offers the possibility of treating the charge contributions shell-wise or even orbital-wise, which is important for elements with sp and d bonding contributions, in particular transition metals. Orbital hardness values have been reported in the literature for the elements from H to Xe.41 In the following we concentrate on the atomic SCC procedure, which implies that all sums over charges run over the atomic index α; for orbital-dependent SCC the summation index for the charge would run over the shell index ξ. Within the monopole approximation, Uα can be calculated, using a DFT procedure, as the second derivative of the total atomic energy of atom α with respect to its atomic charge (equation 46). In order to obtain a well-defined and useful expression for systems at all scales, and still keep consistency with the approximations above, an analytical expression was developed36 to approximate the density fluctuations by spherical electronic densities. In accordance with the Slater-type orbitals (Gaussian-type orbitals can also be employed) used to solve the KS equations,42,43 an exponential decay of the normalized spherical electronic density is assumed. Omitting the second-order contributions of Exc in equation 44, one obtains an expression whose integration over r' can be carried out analytically. Setting R = |Rα − Rβ|, after some coordinate transformations one arrives at equation 50, where s is a short-range function with exponential decay (equation 51). Since the second-order contribution was assumed to reduce to the Hubbard parameter at R = 0, according to equation 46, the exponents of equation 51 are obtained. This result can be interpreted by noting that harder elements tend to have more localized wave functions. The chemical hardness of a spin-depolarized atom is calculated as the derivative of the energy of the highest occupied atomic orbital with respect to its occupation number (equation 46), using a fully self-consistent ab initio method. Therefore, the influence of the second-order contributions of the exchange-correlation energy is included in γαβ at short distances, where it is important, while the fact that, within GGA, the exchange-correlation energy vanishes for large interatomic distances is also taken into account.
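The two limits just discussed can be illustrated numerically. The snippet below uses a simple Klopman-Ohno-type interpolation between the on-site Hubbard value and the 1/R Coulomb tail; note that this is only a compact stand-in chosen for illustration, not the exponentially decaying expression of equations 50-52 that is actually used in SCC-DFTB.

import numpy as np

def gamma_pair(R, U_a, U_b):
    # Klopman-Ohno-style stand-in for the SCC kernel gamma_ab (atomic units).
    # Reproduces the limits discussed in the text:
    #   R -> 0     : gamma -> 2 U_a U_b / (U_a + U_b)   (equals U for U_a = U_b)
    #   R -> infty : gamma -> 1 / R
    # The published SCC-DFTB kernel instead uses an exponentially decaying
    # short-range term; this simpler form is for illustration only.
    a = 0.5 * (1.0 / U_a + 1.0 / U_b)        # length scale set by the atomic hardnesses
    return 1.0 / np.sqrt(R**2 + a**2)

U = 0.42                                      # roughly the Hubbard parameter of H (a.u.), illustrative
print(gamma_pair(1e-9, U, U))                 # ~0.42, the on-site (Hubbard) limit
print(gamma_pair(20.0, U, U) * 20.0)          # ~1.0, i.e. gamma ~ 1/R at long range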
In the case of periodic systems, the long-range part can be calculated using the standard Ewald summation, whereas the short-range part s decays exponentially and can be summed over a small number of unit cells. Thus, equation 50 is a well-defined expression for extended and periodic systems. Finally, the total energy within SCC-DFTB is written as equation 53, with γαβ = γαβ(Uα, Uβ, |Rα − Rβ|). Here the contribution due to the Hamiltonian Ĥ0 is exactly the same as in the standard DFTB scheme. Note that the first term in equation 53 only simplifies to the sum of MO energies, the convenient notation for DFTB, if all charge fluctuations are zero. As in the non-self-consistent method, the wave functions ψi are expanded in an LCAO model (equation 29), and equation 53 then takes the form of equation 54. The charge fluctuations are calculated by Mulliken population analysis,38 and secular equations similar to those of equation 30 are obtained, now with modified elements in the Hamiltonian matrix. The matrix elements H0µν and Sµν are identical to those defined for the standard DFTB method in equation 31. Since the atomic charges depend on the one-electron wave functions ψi, a self-consistent procedure is necessary. Because the elements Sµν extend over neighboring atoms, many-atom interactions are introduced. The second-order correction is achieved by introducing additional Hamiltonian contributions that depend on the Mulliken charges. As in standard DFTB, the repulsive potential is fitted according to equation 27 using a suitable reference system. Since the self-consistent charge correction allows for the explicit treatment of charge-transfer effects, the transferability of Erep is considerably better than in the non-self-consistent scheme. Also as in standard DFTB, a simple analytic expression for the atomic forces can be derived. DFTB schemes have been successfully used in a wide range of applications, from molecular compounds22,44 to solid-state systems.19,45-47 Indeed, a symposium dedicated to DFTB methods was held during the 232nd National Meeting of the American Chemical Society, from 10 to 14 September 2006. A special section with contributions presented at this symposium was published in the Journal of Physical Chemistry A, issue 26 of 2007,48 presenting the current state of development of DFTB with respect to its formalism, implementation and applications. 7. Weak Forces: Dispersion-Corrected (SCC-)DFTB London interactions, also called dispersion forces, are attractive forces between nonpolar molecules due to their mutual polarizability.49 London dispersion forces are several orders of magnitude weaker than typical covalent or ionic interactions and also about ten times weaker than hydrogen bonds. Dispersion forces therefore have a negligible effect at short range and can be understood as the long-range component of the van der Waals forces. Despite their weak nature, London interactions affect many fundamental processes in chemistry, physics and biology. They influence the formation of molecular crystals, the structure of biological molecules such as proteins and DNA, adsorption processes and π–π stacking interactions, among others. However, as explained above, both the standard and the self-consistent DFTB methods treat only short-range atomic potentials, and terms with more than two centers are neglected. Therefore, the Hamiltonian matrix elements fall off quickly and become negligible at interatomic distances typically found in the region of the van der Waals minimum.
Hence, DFTB completely disregards van der Waals interactions, especially dispersion forces. Two treatments that include dispersion interactions a posteriori have been proposed.50,51 In both cases the dispersion energy Edisp is calculated separately using empirical potentials and then added to the DFTB total energy expression. Since van der Waals forces are totally absent in DFTB, the addition of Edisp does not introduce any double-counting error into the energy. Since both treatments are similar, we describe the one used in the present work.51 This correction was implemented in an experimental version of the deMon code52 and makes use of the UFF force field,53 already available in deMon. The dispersion interaction Uαβ between atoms α and β at a distance R is given in a Lennard-Jones-type form (equation 58), which includes two parameters: the van der Waals distance Rαβ and the well depth dαβ. The Rαβ and dαβ parameters are reported in the original paper53 and are available for the elements from H to Lw of the periodic table. In UFF the van der Waals term is set to zero according to an adjacency criterion; however, this imposes an inflexible topology on the system, which is not desirable in a quantum-mechanical method. To overcome this problem, equation 58 is used only in the region where Uαβ is attractive (London interactions are never repulsive), i.e. for R larger than 2^(-1/6)Rαβ. For shorter distances a short-range polynomial is used instead, whose coefficients U0, U1 and U2 are chosen so that the interaction energy and its first and second derivatives match equation 58 at R = 2^(-1/6)Rαβ. The best value suggested for the polynomial exponent n is 5, which gives the U0, U1 and U2 parameters reported in reference 51. The dispersion potential for the DFTB method can therefore be written as a piecewise function of the interatomic distance, and the total dispersion energy is obtained by summing over all atom pairs. This term is then added to the total DFTB energy calculated either with standard DFTB (section 5) or with the SCC scheme (section 6). 8. Glycine in Aqueous Solution Glycine (aminoethanoic acid) is the simplest of the amino acids. In solution, an intramolecular proton transfer from the carboxylic group to the amino group takes place, establishing the zwitterionic equilibrium shown in Figure 3. The charge separation in the zwitterionic form is stabilized by the solvent, which must have a large dielectric constant, as is the case for water; the neutral species is thus favored in nonpolar solvents. In this work, Born-Oppenheimer molecular dynamics was carried out using the DC–SCC–DFTB method, as implemented in the deMon package.52 The glycine molecule was placed within a 16 Å periodic box containing 129 water molecules. The data were collected during a 100 ps simulation with a 0.5 fs time step, after a thermalization time of 50 ps. It is important to emphasize that both the glycine and the water molecules were treated within a fully quantum-mechanical approach. The radial distribution function (RDF) of water with respect to the glycine center of mass is shown in Figure 4. The first solvation shell integrates to 22 water molecules. Table 1 shows the calculated geometrical properties of glycine in solution. The optimized geometric parameters are given at the PBE/TZVP and DC–SCC–DFTB levels of theory. The estimated angles are in good agreement with previously published results.54 The O–C=O angle presents the largest discrepancy for the zwitterionic form: the PBE/TZVP estimate of this angle is 13 degrees larger than the value obtained with DC–SCC–DFTB.
Furthermore, the O–C=O angle is expected to increase from the neutral to the zwitterionic form due to the deprotonation of the carboxyl group. However, DC–SCC–DFTB seems to be insensitive to the large charge on the deprotonated carboxyl group, and the angle remains similar to that of the neutral form. The mean values of the angles and dihedrals from the MD simulation (last column of Table 1) are close to the optimized values, with a standard deviation of about 4 degrees, except for the O–C–C–N dihedral. This dihedral involves rotation around a single C–C bond, so a large standard deviation is indeed expected, explaining the apparent disagreement with the gas-phase PBE/TZVP results. Wada et al.55 estimated the Gibbs free energy variation between the two glycine forms in aqueous solution to be about –7.0 kcal mol-1 (ΔH = –10.3 kcal mol-1). The change in the average energy between the two forms from the NVE molecular dynamics (ΔENVE) was estimated to be about –25.5 kcal mol-1. We have also used a continuum model to estimate the ΔG of this reaction at the PBE/TZVP/PCM level of theory, and a value of –23.4 kcal mol-1 was obtained. 9. Final Remarks DFTB is an approximate density-functional method which, in principle, does not employ any empirical parameter, in the sense that all quantities are either calculated within DFT (Slater-Koster integrals) or obtained from reference structures by DFT calculations (Erep). It has been implemented in many different codes.56 Density functional methods have over time become a standard tool for electronic structure calculations and have substantially helped to unify organic chemistry, inorganic chemistry, surface chemistry, materials science and, more recently, biochemistry.4 With the advent of DFTB, an approximate DFT method, a plethora of challenging systems are now accessible to electronic structure calculations, enlarging the frontiers of applicability of fundamentally well-established theoretical tools. Nanostructured, self-assembled and nanoreactor systems are some of those for which DFTB can provide substantial help in the investigative work. The financial support of the Brazilian agencies CNPq and FAPEMIG is gratefully acknowledged. We also thank the joint PROBRAL action of CAPES (Brazil) and DAAD (Germany) for financial support. Hélio A. Duarte graduated in Chemical Engineering (1990) and received his MSc in Inorganic Chemistry (1993) from the Federal University of Minas Gerais-UFMG. He finished his PhD at the University of Montreal in 1997 under the supervision of Prof. Dennis R. Salahub, working on adsorption on metal surfaces using density functional (DFT) methods. Currently, he is associate professor at the Department of Chemistry-UFMG. His research activities are centered on the development and application of DFT (and approximate DFT) methods to investigate chemical speciation, inclusion compounds, sulphide minerals, nanostructured clay minerals and solid/liquid interface phenomena. Thomas Heine (PhD TU Dresden 1999). After pre- and postdoctoral stages at the Universities of Montreal, Exeter, Bologna and Geneva, he became Assistant Professor at TU Dresden in 2002, where he received his venia legendi in Physical Chemistry in 2006. He was appointed Associate Professor for Theoretical Physics/Computational Materials Science at Jacobs University Bremen in 2008. His main research interests are the development of new quantum-mechanical methods and their implementation and application.
He currently works on actual topics as storage of molecular hydrogen, design of new materials and nanoelectromechanics. Prof. Heine has more than 100 publications in peer-reviewed international journals, among them one Nature and two PNAS, and more than 1000 citations, leading to an h index of 25. ( Gotthard Seifert studied Chemistry at the TU Dresden where he received his diploma in 1975 and also graduated as Dr. rer. nat. (PhD) in 1979. He worked as a research assistant at the Institute of Theoretical Physics at TU Dresden from 1979 to 1992. He received his habilitation in theoretical physics in 1988. In 1989 and 1990, he worked as a visiting scientist and visiting professor at the International School for Advanced Studies (SISSA) in Trieste and at the EPFL. In 1991, he was a visiting scientist at the Forschungszentrum in Jülich. From 1992 to 1998, he was again at the Institute of Theoretical Physics at the TU Dresden as a lecturer/professor. He moved to the Universität Paderborn in 1998 and became a professor of Physical Chemistry at TU Dresden in 2001. His research interests are in the areas of quantum chemistry, cluster physics and chemistry and computational materials research. Augusto Faria Oliveira graduated in Chemistry at Federal University of Minas Gerais-UFMG (2001) and received his MSc in 2004. He completed his PhD in Quantum Chemistry (2008) under the supervision of Prof. Hélio A. Duarte at UFMG. During his PhD he spent one year in the group of Prof. Seifert at TU-Dresden. Currently he is a post-doctoral fellow at TU-Dresden in the same group, working with inorganic nanotubes. His current interests are the development and application of DFT and DFTB methods to investigate inorganic nanotubes and ion adsorption on minerals. Received: August 6, 2008 Web Release Date: May 29, 2009 • * • 1. Argaman, N.; Makov, G.; Am. J. Phys. 2000, 68, 69. • 2. Chermette, H.; Coord. Chem. Rev. 1998, 180, 699. • 3. Chermette, H.; J. Comput. Chem. 1999, 20, 129. • 4. Kohn, W.; Becke, A. D.; Parr, R. G.; J. Phys. Chem. 1996, 100, 12974. • 5. Ladeira, A. C. Q.; Ciminelli, V. S. T.; Duarte, H. A.; Alves, M. C. M.; Ramos, A. Y.; Geochim. Cosmochim. Acta 2001, 65, 1211. • 6. Sousa, S. F.; Fernandes, P. A.; Ramos, M. J.; J. Phys. Chem. A 2007, 111, 10439. • 7. Koch, W.; Holthausen, M. C.; A Chemist's Guide to Density Functional Theory; Wiley-VCH: New York, 2001. • 8. De Proft, F.; Geerlings, P.; Chem. Rev. 2001, 101, 1451. • 9. Geerlings, P.; De Proft, F.; Langenaeker, W.; Chem. Rev. 2003, 103, 1793. • 10. Duarte, H. A.; Quim. Nova 2001, 24, 501. • 11. Shao, Y.; Molnar, L. F.; Jung, Y.; Kussmann, J.; Ochsenfeld, C.; Brown, S. T.; Gilbert, A. T. B.; Slipchenko, L. V.; Levchenko, S. V.; O'Neill, D. P.; DiStasio, R. A.; Lochan, R. C.; Wang, T.; Beran, G. J. O.; Besley, N. A.; Herbert, J. M.; Lin, C. Y.; Van Voorhis, T.; Chien, S. H.; Sodt, A.; Steele, R. P.; Rassolov, V. A.; Maslen, P. E.; Korambath, P. P.; Adamson, R. D.; Austin, B.; Baker, J.; Byrd, E. F. C.; Dachsel, H.; Doerksen, R. J.; Dreuw, A.; Dunietz, B. D.; Dutoi, A. D.; Furlani, T. R.; Gwaltney, S. R.; Heyden, A.; Hirata, S.; Hsu, C. P.; Kedziora, G.; Khalliulin, R. Z.; Klunzinger, P.; Lee, A. M.; Lee, M. S.; Liang, W.; Lotan, I.; Nair, N.; Peters, B.; Proynov, E. I.; Pieniazek, P. A.; Rhee, Y. M.; Ritchie, J.; Rosta, E.; Sherrill, C. D.; Simmonett, A. C.; Subotnik, J. E.; Woodcock, H. L.; Zhang, W.; Bell, A. T.; Chakraborty, A. K.; Chipman, D. M.; Keil, F. J.; Warshel, A.; Hehre, W. J.; Schaefer, H. F.; Kong, J.; Krylov, A. I.; Gill, P. M. 
W.; Head-Gordon, M.; Phys. Chem. Chem. Phys. 2006, 8, 3172. • 12. Burke, K.; Werschnik, J.; Gross, E. K. U.; J. Chem. Phys. 2005, 123, 062206. • 13. Kaupp, M.; Bühl, M.; Malkin, V. G.; Calculation of NMR and EPR Parameters, Wiley-VCH Verlag GmbH & Co. KGaA, 2004. • 14. Dewar, M. J. S.; Zoebisch, E. G.; Healy, E. F.; Stewart, J. J. P.; J. Am. Chem. Soc. 1993, 115, 5348. • 15. Stewart, J. J. P.; J. Comput. Chem. 1989, 10, 209. • 16. Stewart, J. J. P.; J. Comput.-Aided Mol. Des. 1990, 4, 1. • 17. Stewart, J. J. P.; J. Comput. Chem. 1990, 11, 543. • 19. Frenzel, J.; Oliveira, A. F.; Duarte, H. A.; Heine, T.; Seifert, G.; Z. Anorg. Allg. Chem. 2005, 631, 1267. • 20. Frisch, M. J.; Trucks, G. W.; Schlegel, H. B.; Scuseria, G. E.; Robb, M. A.; Cheeseman, J. R.; Zakrzewski, V. G.; Montgomery, J. A.; Stratmann, R. E.; Burant, J. C.; Dapprich, S.; Millan, J. M.; Daniels, A. D.; Kudin, K. N.; Strain, M. C.; Farkas, O.; Tomasi, J.; Barone, V.; Cossi, M.; Cammi, R.; Mennucci, B.; Pomelli, C.; Adamo, C.; Clifford, S.; Ochterski, J.; Petersson, G. A.; Ayala, P. Y.; Cui, Q.; Morokuma, K.; Malick, D. K.; Rabuck, A. D.; Raghavachari, K.; Foresman, J. B.; Cioslowski, J.; Ortiz, J. V.; Baboul, A. G.; Stefanov, B. B.; Liu, G.; Liashenko, A.; Pikorz, P.; Komaromi, I.; Gomperts, R.; Martin, R. L.; Fox, D. J.; Keith, T.; Al-Laham, M. A.; Peng, C. Y.; Nanayakkara, A.; Gonzales, C.; Challacombe, M.; Gill, P. M. W.; Johnson, B.; Chen, W.; Wong, M. W.; Andreas, J. L.; Head-Gordon, M.; Reploge, E. S.; Pople, J. A.; Gaussian, Inc.: Pittsburg, PA, 1998. • 21. Hazebroucq, S.; Picard, G. S.; Adamo, C.; Heine, T.; Gemming, S.; Seifert, G.; J. Chem. Phys. 2005, 123, 134510. • 22. Heine, T.; dos Santos, H. F.; Patchkovskii, S.; Duarte, H. A.; J. Phys. Chem. A 2007, 111, 5648. • 23. Heine, T.; Seifert, G.; Fowler, P. W.; Zerbetto, F.; J. Phys. Chem. A 1999, 103, 8738. • 24. Ivanovskaya, V. V.; Heine, T.; Gemming, S.; Seifert, G.; Phys. Status Solidi B 2006, 243, 1757. • 25. Frauenheim, T.; Seifert, G.; Elstner, M.; Niehaus, T.; Kohler, C.; Amkreutz, M.; Sternberg, M.; Hajnal, Z.; Di Carlo, A.; Suhai, S.; J. Phys.: Condens. Matter 2002, 14, 3015. • 26. Heringer, D.; Niehaus, T. A.; Wanko, M.; Frauenheim, T.; J. Comput. Chem. 2007, 28, 2589. • 27. Niehaus, T. A.; Suhai, S.; Della Sala, F.; Lugli, P.; Elstner, M.; Seifert, G.; Frauenheim, T.; Phys. Rev. B: Condens. Matter Mater. Phys. 2001, 63, 085108. • 28. Parr, R. G.; Yang, W.; Density-Functional Theory of Atoms and Molecules; Oxford University Press, 1989. • 29. Hohenberg, P.; Kohn, W.; Phys. Rev. B: Condens. Matter Mater. Phys. 1964, 136, B864. • 30. Kohn, W.; Sham, L. J.; Phys. Rev. 1965, 140, 1133. • 31. Slater, J. C.; Koster, G. F.; Phys. Rev. 1954, 94, 1498. • 32. Goringe, C. M.; Bowler, D. R.; Hernandez, E.; Rep. Prog. Phys. 1997, 60, 1447. • 33. Froyen, S.; Harrison, W. A.; Phys. Rev. B: Condens. Matter Mater. Phys. 1979, 20, 2420. • 34. Chadi, D. J.; Phys. Rev. Lett. 1979, 43, 43. • 35. Foulkes, W. M. C.; Haydock, R.; Phys. Rev. B: Condens. Matter Mater. Phys. 1989, 39, 12520. • 36. Elstner, M.; Porezag, D.; Jungnickel, G.; Elsner, J.; Haugk, M.; Frauenheim, T.; Suhai, S.; Seifert, G.; Phys. Rev. B: Condens. Matter Mater. Phys. 1998, 58, 7260. • 37. Frauenheim, T.; Seifert, G.; Elstner, M.; Hajnal, Z.; Jungnickel, G.; Porezag, D.; Suhai, S.; Scholz, R.; Phys. Status Solidi B 2000, 217, 41. • 38. Mulliken, R. S.; J. Chem. Phys. 1955, 23, 1833. • 39. Parr, R. G.; Pearson, R. G.; J. Am. Chem. Soc. 1983, 105, 7512. • 40. Janak, J. F.; Phys. Rev. B: Condens. Matter Mater. 
Phys. 1978, 18, 7165. • 41. Mineva, T.; Heine, T.; Int. J. Quantum Chem. 2006, 106, 1396. • 42. Porezag, D.; Frauenheim, T.; Kohler, T.; Seifert, G.; Kaschner, R.; Phys. Rev. B: Condens. Matter Mater. Phys. 1995, 51, 12947. • 43. Seifert, G.; Porezag, D.; Frauenheim, T.; Int. J. Quantum Chem. 1996, 58, 185. • 44. Hu, H.; Lu, Z.; Elstner, M.; Hermans, J.; Yang, W.; J. Phys. Chem. A 2007, 111, 5685. • 45. Frenzel, J.; Joswig, J. O.; Seifert, G.; J. Phys. Chem. C 2007, 111, 10761. • 46. Kuc, A.; Enyashin, A.; Seifert, G.; J. Phys. Chem. B 2007, 111, 8179. • 47. Luschtinetz, R.; Oliveira, A. F.; Frenzel, J.; Joswig, J. O.; Seifert, G.; Duarte, H. A.; Surf. Sci. 2008, 602, 1347. • 48. Elstner, M.; Frauenheim, T.; McKelvey, J.; Seifert, G.; J. Phys. Chem. A 2007, 111, 5607. • 49. Muller, P.; Pure Appl. Chem. 1994, 66, 1077. • 50. Elstner, M.; Hobza, P.; Frauenheim, T.; Suhai, S.; Kaxiras, E.; J. Chem. Phys. 2001, 114, 5149. • 51. Zhechkov, L.; Heine, T.; Patchkovskii, S.; Seifert, G.; Duarte, H. A.; J. Chem. Theory Comput. 2005, 1, 841. • 52. Koester, A. M.; Flores, R.; Geudtner, G.; Goursot, A.; Heine, T.; Patchkovskii, S.; Reveles, J. U.; Vela, A.; Salahub, D. R.; deMon VS. 1.1; NRC: Ottawa, Canada, 2004. • 53. Rappe, A. K.; Casewit, C. J.; Colwell, K. S.; Goddard, W. A.; Skiff, W. M.; J. Am. Chem. Soc. 1992, 114, 10024. • 54. Tortonda, F. R.; Pascual-Ahuir, J. L.; Silla, E.; Tunon, I.; Ramirez, F. J.; J. Chem. Phys. 1998, 109, 592. • 55. Wada, G.; Tamura, E.; Okina, M.; Nakamura, M.; Bull. Chem. Soc. Jpn. 1982, 55, 3064. • 56. Accessed in May 22, 2009.
Physics:Energy level From HandWiki Short description: Different states of quantum systems Energy levels for an electron in an atom: ground state and excited states. After absorbing energy, an electron may "jump" from the ground state to a higher energy excited state. A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons.[1] Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.[2] If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. Wavefunctions of a hydrogen atom, showing the probability of finding the electron in the space around the nucleus. Each stationary state defines a specific energy level of the atom. Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave.[3] States having well-defined energies are called stationary states because they are the states that do not change in time. 
Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. Intrinsic energy levels In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom, i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative. Orbital state energy level: atom/ion with nucleus + one electron Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by : [math]\displaystyle{ E_n = - h c R_{\infty} \frac{Z^2}{n^2} }[/math] (typically between 1 eV and 103 eV), where R is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is Planck's constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = h ν = h c / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. [math]\displaystyle{ \frac{1}{\lambda} = RZ^2 \left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right) }[/math] An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. Electron-electron interactions in atoms If there is more than one electron around the atom, electron-electron-interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. 
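As a quick numerical check of the hydrogen-like formula above, the short Python snippet below evaluates E_n and the wavelength of the photon emitted in a transition between two levels; the constants are standard values and the chosen quantum numbers are merely examples.

# Hydrogen-like energy levels E_n = -h c R_inf Z^2 / n^2 and transition wavelengths.
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
RYD = 1.0973731568e7      # Rydberg constant R_inf, 1/m
EV = 1.602176634e-19      # J per eV

def level_eV(n, Z=1):
    return -H * C * RYD * Z**2 / n**2 / EV

def transition_wavelength_nm(n_lo, n_hi, Z=1):
    dE = (level_eV(n_hi, Z) - level_eV(n_lo, Z)) * EV   # photon energy in J
    return H * C / dE * 1e9

print(level_eV(1))                      # about -13.6 eV, hydrogen ground state
print(transition_wavelength_nm(2, 3))   # about 656 nm, the red Balmer H-alpha line

The first print gives about −13.6 eV for the hydrogen ground state, and the second about 656 nm, the Balmer H-alpha line.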
For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number. [math]\displaystyle{ E_{n,\ell} = - h c R_{\infty} \frac{{Z_{\rm eff}}^2}{n^2} }[/math] In such cases, the orbital types (determined by the azimuthal quantum number ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first and consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. Fine structure splitting Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV. Hyperfine structure Main page: Physics:Hyperfine structure This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a typical change in the energy levels by a typical order of magnitude of 10−4 eV. Energy levels due to external fields Zeeman effect Main page: Physics:Zeeman effect There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by [math]\displaystyle{ U = -\boldsymbol{\mu}_L\cdot\mathbf{B} }[/math] [math]\displaystyle{ -\boldsymbol{\mu}_L = \dfrac{e\hbar}{2m}\mathbf{L} = \mu_B\mathbf{L} }[/math]. Additionally taking into account the magnetic momentum arising from the electron spin. Due to relativistic effects (Dirac equation), there is a magnetic momentum, μS, arising from the electron spin [math]\displaystyle{ -\boldsymbol{\mu}_S = -\mu_B g_S \mathbf{S} }[/math], with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ, [math]\displaystyle{ \boldsymbol{\mu} = \boldsymbol{\mu}_L + \boldsymbol{\mu}_S }[/math]. The interaction energy therefore becomes [math]\displaystyle{ U_B = -\boldsymbol{\mu}\cdot\mathbf{B} = \mu_B B (M_L + g_S M_S) }[/math]. Stark effect Main page: Physics:Stark effect Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. 
Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. [4] In polyatomic molecules, different vibrational and rotational energy levels are also involved. Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: [math]\displaystyle{ E = E_{\text{electronic}} + E_{\text{vibrational}} + E_{\text{rotational}} + E_{\text{nuclear}} + E_{\text{translational}} }[/math] where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. Energy level diagrams There are various types of energy level diagrams for bonds between atoms in a molecule. Molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. Energy level transitions An increase in energy level from E1 to E2 resulting from absorption of a photon represented by the red squiggly arrow, and whose energy is h ν A decrease in energy level from E2 to E1 resulting in emission of a photon represented by the red squiggly arrow, and whose energy is h ν Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away so as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. 
A photon's energy is equal to Planck's constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ).[4] ΔE = h f = h c / λ, since c, the speed of light, equals to f λ[4] Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital.[4][5] Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly colored glow. An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus, thus it becomes less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.[6] Crystalline materials Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. 
Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
Metamaterial-based Analog Computers Can Provide Extremely High Computational Power with Low Energy Consumption High-speed supercomputers enable advanced computational modeling and data analytics applicable to all areas of science and engineering. They are widely used in applications such as astrophysics, to understand stellar structure, planetary formation, galactic evolution and other interactions; materials science, to understand the structure and properties of materials and to create new high-performance materials; and sophisticated climate models, which capture the effects of greenhouse gases, deforestation and other planetary changes and have been key to understanding the effects of human behavior on weather and climate change. However, such enormous processing power comes at a cost. Sunway TaihuLight at the National Supercomputing Center in Wuxi, China, delivers a whopping 93 petaflops (one petaflop equals a quadrillion floating-point operations per second). But it requires 10,649,600 processing units, so-called cores, that consume 15.371 megawatts – an amount of electricity that could power a small city of about 16,000 inhabitants, based on an average energy consumption equal to that of San Francisco. In contrast, the human brain's processing power is estimated at about 38 petaflops, about two-fifths of that of TaihuLight. But all it needs to operate is about 20 watts of energy. Watts, not megawatts! And yet it performs tasks that no machine has ever been able to execute. It is simply "programmed" by the interconnections between its active components, mostly so-called neurons. Electronic computers are extremely powerful at performing a high number of operations sequentially at very high speeds. However, they struggle with combinatorial tasks that can be solved faster if many operations are performed in parallel, for example in cryptography and mathematical optimisation, which require the computer to test a large number of different solutions. Tomorrow's applications demand stronger computing power at much lower energy consumption levels. But digital computers simply can't provide this out of the box. Therefore, many alternative approaches are being pursued by researchers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example quantum computation and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. An analog computer can be described as a model for a certain problem that can then be used to solve that very problem by simulating it. Typically such analogs are based on analog electronic circuits such as summers, integrators and multipliers, but they can also be implemented using digital components, in which case they are called digital differential analyzers. There is no stored program that controls the operation of such a computer. Instead, you program it by changing the interconnections between its many computing elements – kind of like a brain. All of the machine's computing elements work in complete parallelism, with no central or distributed memory to access and to wait for. Such analog computers reach extremely high computational power for certain problem classes. Among others, they are unsurpassed for tackling problems based on differential equations and systems thereof – which applies to many if not most of today's most relevant problems in science and technology.
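To see why such problems map so naturally onto analog hardware, consider a damped oscillator m x'' + c x' + k x = 0: on an analog computer it is "programmed" by wiring a summer and two integrators into a feedback loop. The short Python sketch below emulates that wiring with a simple time-stepping loop; all parameter values are illustrative.

# Digital emulation of an analog-computer patch for  m x'' + c x' + k x = 0:
# one summer forms the acceleration, two integrators produce velocity and position.
m, c, k = 1.0, 0.2, 4.0          # illustrative parameters
x, v = 1.0, 0.0                  # initial conditions set on the integrators
dt = 1e-3

trace = []
for step in range(int(10.0 / dt)):
    a = -(c * v + k * x) / m     # "summer": combines the feedback signals
    v += a * dt                  # first integrator: velocity
    x += v * dt                  # second integrator: position
    trace.append(x)

print(trace[-1])                 # a decaying oscillation after 10 s of model time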
For instance, in a 2005 paper, Glenn E. R. Cowan described a Very-Large-Scale-Integrated (VLSI) analog computer, i.e. an analog computer on a chip, so to speak. This chip delivered a whopping 21 gigaflops per watt for a certain class of differential equations, which is better than today's most power-efficient system in the Green500 list.

Another proposed approach is a hybrid one: instead of building a full analog computer, one develops modern analog co-processors that take the load of solving complex differential equations off traditional computers. The result would be so-called hybrid computers.

Metamaterials based Analog Computers

One of the technologies being used to create analog computers is metamaterials. Metamaterials are synthetic, compound materials that are structured in ways that give them specific properties – such as a negative refractive index – that are rare or absent in natural materials. To design optical metamaterials, researchers often rely on a branch of mathematics called transformation optics, which transforms the coordinates of space to control the path of light through a material. A famous example is the invisibility cloak, whereby transformation optics is used to control the refraction of light in the cloak so that incident light travels smoothly around the cloaked object rather than scattering off it. The result is that an observer will conclude that the cloaked object is not present.

In 2014, researchers led by Nader Engheta of the University of Pennsylvania proposed another possible use for transformation optics. They pointed out that electromagnetic waves encode mathematical functions in their amplitudes and phases – both of which can be transformed by metamaterials. This led the team to suggest that metamaterials could perform mathematical operations on these functions. Metamaterials have now been used by researchers in the US to solve mathematical problems by transforming data that are encoded into electromagnetic waves. The researchers believe their new analogue computing paradigm offers several advantages over conventional digital computers and are now working to make it compatible with traditional silicon photonics devices.

A compact analogue computer based on an acoustic metamaterial has been proposed by Farzad Zangeneh-Nejad and Romain Fleury at the Federal Institute of Technology (EPFL) in Lausanne, Switzerland. They have shown that the system should be capable of rapid differentiation, integration, and instantaneous image processing, and the duo believe it could achieve yet more impressive feats in the future.

Metamaterial computer solves integral equations encoded in electromagnetic waves

Now, Engheta and colleagues have designed a metamaterial that not only performs mathematical operations but can also find solutions to an important class of equations called integral equations. "In almost any field of science and engineering you can describe the numerical values of the phenomena that you are after using integral equations," explains Engheta. Solving these equations is therefore vital to modelling a wide range of phenomena. Algebraic solutions are often impossible, however, so researchers must often rely on computational analysis. This involves rearranging the equation so that the unknown solution appears on both sides. Starting from an arbitrary point, the calculation is then run repeatedly in a feedback loop until the correct solution is reached. At this point, performing the mathematical operation described by the equation does not change the value, so the solution remains stable.
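The feedback-loop idea described above is easy to sketch numerically. Below is a minimal Python illustration of solving a Fredholm integral equation of the second kind by iterating it until the answer stops changing; the kernel, source term and coupling constant are made up for the example, and this is of course not the researchers' code, only the digital analogue of the loop they implement in hardware.

import numpy as np

# Discretize the Fredholm equation u(x) = f(x) + lam * \int_0^1 K(x, y) u(y) dy
# on a uniform grid and iterate it as a feedback loop until it stops changing.
n = 200
x = np.linspace(0.0, 1.0, n)
w = 1.0 / n                                   # quadrature weight (rectangle rule)
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # hypothetical smooth kernel
f = np.sin(np.pi * x)                         # hypothetical source term
lam = 0.3                                     # small enough for the loop to converge

u = np.zeros_like(f)                          # arbitrary starting point
for _ in range(200):
    u_new = f + lam * (K @ u) * w             # one pass through the "feedback loop"
    if np.max(np.abs(u_new - u)) < 1e-10:     # solution no longer changes -> done
        u = u_new
        break
    u = u_new

# Cross-check against a direct linear solve of (I - lam*w*K) u = f
u_direct = np.linalg.solve(np.eye(n) - lam * w * K, f)
print(np.max(np.abs(u - u_direct)))

In the metamaterial version, each pass of this loop is performed by a wave traversing the structure at the speed of light, which is where the claimed speed advantage comes from.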
"That takes time," explains Engheta, which is why numerical solutions can often require significant computational resources.

Speed of light

The researchers believed metamaterials could offer several important advantages over this conventional digital process. One benefit is that the computational process could be extremely fast because electromagnetic waves pass through metamaterials at the speed of light. Also, the same metamaterial can process multiple waves simultaneously: "Waves can pass through each other, giving you a parallel system," explains Engheta.

To test their ideas, the researchers designed metamaterials from carefully patterned dielectrics to perform mathematical transformations related to three different integral equations. Computational modelling of how electromagnetic waves interact with the metamaterials suggests that the solutions provided by the hypothetical systems should agree very well with the solutions obtained from traditional numerical methods. Furthermore, the computational modelling suggests that the metamaterial systems can reach the correct solutions very quickly. The team also created a metamaterial in the lab for one of the integral equations (see figure). It was made from patterned low-loss polystyrene and is designed for use with microwaves. The team found that its performance was in very good agreement with computational predictions.

In the future, the researchers aim to build their metamaterials from a silica dielectric, which would make integration with standard silicon photonics devices easier. A silica dielectric metamaterial would also allow infrared light at telecom wavelengths to be used to perform calculations. This means that future devices could be much smaller than the microwave prototype. The team also hopes that reconfigurable metamaterials could be developed in the future, effectively creating a kind of reprogrammable analogue computer. Nevertheless, stresses Engheta, the present platform does not offer the prospect of an alternative to the conditional logic of a true computer, in which one computation depends on the outcome of another: "We don't have any optical logic here," he says.

Andrea Alù at the City University of New York was involved in the 2014 research and continues to work independently on computing based on electromagnetic waves. He praises Engheta and colleagues for turning the original idea into reality. "I find it interesting because it's not at all trivial that this can be worked out, especially given all the tolerances present."

Analogue computer could use sound to make rapid calculations

Analogue computers use interactions involving physical entities such as light, electrical current or a mechanical system to perform specific calculations. Some of the most sophisticated analogue computers were developed in the early to mid-20th century to help guide artillery and aerial bombing strikes. While the advent of digital computers made these computers obsolete, they are now enjoying a resurgence thanks to ongoing research into artificial materials called metamaterials. These materials can be engineered to manipulate the light or sound waves passing through them in new ways – opening the door to new types of analogue computer.
Subtle engineering

"Metamaterials are artificial structures composed of periodic subwavelength inclusions, which can be subtly engineered to provide the desired macroscopic characteristics of the overall material," explains Zangeneh-Nejad. Metamaterials have already been used to create analogue computers that manipulate electromagnetic waves to perform mathematical operations. Zangeneh-Nejad and Fleury set out to design a device comparable to these optical computers, but using sound waves. However, the distinctive properties of sound waves meant that the researchers first needed to carefully consider how to design their metamaterial. "Usually, when sound is incident on a hard wall, it reflects without being subject to any particular transformation, and the only thing that happens is the direction of propagation changes," says Fleury. "Our metamaterial is capable of performing complex signal processing tasks on sound waves when they are reflected, directly on the fly and without delay. It can achieve this instantaneously without converting [sound] into electrical signals." Through their calculations, the physicists uncovered the physical properties required of their metamaterial. "It requires a very special acoustic property that does not exist in nature: an acoustic refractive index larger than that of air," explains Fleury.

No transform required

An important feature of the proposed device is that it performs operations directly in the spatial domain. Previous metamaterial-based computers have worked in the frequency, or Fourier, domain, requiring bulky Fourier transform sub-blocks to convert signals into the spatial domain. The new metamaterial has no need for these additional elements. "In our computing system, the mathematical operator of choice is directly performed in the spatial domain using a metamaterial known as a high-index acoustic slab waveguide," Zangeneh-Nejad explains. The duo have shown how their device could perform differentiation and integration, as well as instantaneous image detection. Writing in a preprint on arXiv, they explain how future generations of their design could be used to solve more complex differential equations, such as the Schrödinger equation. "We showed how more complex operators such as a second-order differentiator can be constructed simply by cascading more and more slab waveguides," says Zangeneh-Nejad. Importantly, the researchers have worked out that computing devices made from acoustic metamaterials could be entirely compatible with current computing infrastructure. "Our system is free of any Fourier bulk lens, highly miniaturized and potentially integratable in compact architectures, and can be implemented easily in practice."

The physicists will now further explore the capability of their waveguide to perform calculations at faster rates than conventional computers. "We are investigating applications of our metamaterial in compressive sensing, ultrafast equation solving, neural networks, and a large variety of other applications necessitating real-time and continuous signal processing," Fleury explains. Their device also has the potential for exploring the dynamics of complex biological systems, allowing for new advances in medicine. As Zangeneh-Nejad adds, "our system could explore the computation processes in human brains, and many other natural systems like DNA, membranes and protein-protein interactions".
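The cascading idea quoted above (building a second-order differentiator by chaining two first-order stages) is easy to verify numerically. The sketch below is purely illustrative and has nothing to do with the actual acoustic device; it simply applies a discrete first-derivative stage twice to a test signal and checks that the cascade behaves as a second derivative.

import numpy as np

# A discrete first-derivative "stage"; cascading it twice gives a second derivative,
# mirroring the cascaded slab-waveguide construction described in the article.
def derivative_stage(signal, dx):
    return np.gradient(signal, dx)

x = np.linspace(0, 2 * np.pi, 1000)
dx = x[1] - x[0]
s = np.sin(x)                              # test input signal

first = derivative_stage(s, dx)            # ~ cos(x)
second = derivative_stage(first, dx)       # ~ -sin(x)

print(np.max(np.abs(first - np.cos(x))))   # small discretization error
print(np.max(np.abs(second + np.sin(x))))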
Thursday, May 20, 2010

Quantum Mechanics and the Brain

Quantum Mechanics and the Brain -- A NeuroQuantologic Perspective
by Sultan Tarlaci, M.D.

[Dr. S. Tarlaci is a practicing Neurologist of Turkiye spearheading the new branch of science he calls NeuroQuantology. It is a marriage between (Cognitive) Neuroscience and Quantum Mechanics. Dr. Tarlaci has been kind enough to let me highlight a few selected paragraphs from one of his latest articles (over 10,000 words), titled "A Historical View of the Relation Between Quantum Mechanics and the Brain: A NeuroQuantologic Perspective", published in the June 2010 issue of the peer-reviewed journal NeuroQuantology and recently made available online. -- ramesam]

ABSTRACT: Over the past decade, discussions of the roles that quantum mechanics might or might not play in the theory of consciousness/mind have become increasingly sharp. On one side of this debate stand conventional neuroscientists who assert that brain science must look to the neuron for understanding, and on the other side are certain physicists, suggesting that the rules of quantum theory might influence the dynamics of consciousness/mind. However, consciousness and mind are not separate from matter. The submicroscopic world of the human brain gives rise to consciousness and mind. We are never able to make a sharp separation between mind and matter. Thus, ultimately there is no "mind" that can be separated from "matter" and no "matter" that can be separated from "mind". The brain is a mixed physical system composed of the macroscopic neuron system and an additional microscopic system. The former consists of pathway conduction of neural impulses. The latter is assumed to be a quantum mechanical many-body system interacting with the macroscopic neuron system.

[Selected paragraphs from the article are reproduced below]

We do not know what the glue is that binds neural activity to sub-cellular molecular mechanisms, and the mind as a whole to the brain, but at the same time, in physics we more or less know the nature of gluons, which hold matter together. Neurobiologists treat the brain and its parts as classical objects, and when they progress to smaller scales, give no importance to quantum mechanical effects. In this way, classical physics remains without mind or consciousness. With the rise of quantum mechanics in the 1900s, the search in physics for a place for "something else" alongside matter began, and unfortunately, the searchers were physicists and not neuroscientists. Consciousness, which at first entered into the philosophical interpretations of quantum mechanics, was eventually incorporated into the equations. Classical physics contradicts the idea of free will, and connections were sought with quantum mechanics, which made random choices.

In 1963 computer scientist James Culbertson, in line with a long tradition of "panpsychism", proposed that consciousness is an aspect of space-time, and all objects are to some extent conscious. According to relativity, our lives are in a region of space-time. Our brains show us a film of matter changing in time. All space-time events are consciousness and are in the consciousness of other space-time events. Evan Harris Walker presented a model of synaptic tunneling between nerve cells in 1970. Brain surgeon and researcher Karl Pribram (Pribram, 1971) and physicist David Bohm proposed that the brain worked like a hologram. In 1977, the neuroscientist John C.
Eccles suggested that the regions between the nerve cells of the cortex might operate in a quantum mechanical fashion. According to Eccles, the interaction between mind and brain "is not by energy, but as if in a flow of information."

[Penrose, 1989] claims that consciousness is created by quantum mechanical operations carried out in the brain cells by means of objective reduction. According to Penrose, the place in the brain where quantum mechanical operations take place is the microtubules found in concentration in the brain cells. Hameroff devoted a large part of the next ten years to understanding how the microtubules could act like a computer network inside the brain cells (2001). The Penrose-Hameroff theory became one of the main foundations of the quantum mechanical theory of consciousness. It was postulated that conventional synaptic activity influences and is influenced by quantum state activity in the microtubules. This part of the process is referred to as 'orchestration', hence the theory is called Orchestrated Objective Reduction.

Stapp's [1995] quantum model of consciousness has three bases.
1. The Schrödinger process, which is mechanical and deterministic, and predicts the state of the system.
2. Heisenberg's process, which is a choice made consciously. According to the theory of quantum mechanics, we know a thing when we ask a question of nature. We affect the universe with the question.
3. The Dirac process, which is that an answer must be given to the question we asked. The answer is totally random.

Yasue tried to prove that quantum mechanical effects had a function in recording memory, and that consciousness arose from an electromagnetic field interacting with the electric dipole field of water and protein molecules.

The Physical Brain Does Operate Quantum Mechanically

Quantum mechanics, however, turns man from an automaton into a personality with a mind that has an active role to play in wave function collapse. Quantum mechanics must be brought into the working of the brain and human behaviour because they are related to ionic nerve transmitters and atomic operations. For example, when neural electrical stimuli reach a junction between nerve cells, calcium ions enter the cell and cause the release of neurotransmitters. Ions and ion channels have very small dimensions. The opening of the channels and the movement of ions, as with other movements of ionic atoms, is a quantum mechanical event. Thus, the ions that enter may or may not cause the release of neurotransmitters from the vesicles in the nerve cells. The released neurotransmitters may or may not affect the receptors. This behaviour can only be described in terms of quantum probabilities. Such a quantum effect at a single nerve ending may not be important, but when this happens in a brain with 10^15 synapses between nerve cells, classical physics is incapable of explaining it.

For today, rather than hoping to find new molecules and brain structures to explain the working of the brain and consciousness, new ideas on the interaction of molecules will help us more. In this sense, the quantum mechanical approach may open up new avenues. If we can define the oscillation of neurotransmitters in the synapses as being quantum mechanical, the sum total of synaptic activity in the brain may give an integrated brain wave function. At any moment in time, the potential state of observed events may be subject to superposition.
That is, in the brain all alternative choices exist together at any one time, which Gordon Globus called a "plenum of possibilia."

The Physical Brain Does Not Operate Quantum Mechanically

Quantum mechanics and the word "quantum" have been added to many money-making enterprises. Quantum mechanics is necessary to understand the atoms of the brain, just as it is needed to understand the atoms of a stone, but there is no need to make inferences using quantum mechanics about a stone's consciousness. The idea underlying these statements is that 'inexplicable' events are somehow connected to 'inexplicable' quantum mechanics. Today, materialism has been replaced by psychology, and reductionism by a holistic view. Quantum mechanics works without involving consciousness; it fits in with all observations and all the principles of physics (Song, 2008). However, this is unfortunately ignored in the popular press, because it does not support their preference for mystical nonsense. Quantum events according to the Schrödinger equation are linear. The nervous system, however, shows non-linear events at all levels. The brain is not a closed system containing energy and information; it is an open system relating to meaning and thought.

The bottom line is that consciousness has been inserted into quantum mechanics, and this is an unnecessary complication. But it doesn't end there. Afterwards, the place of consciousness becomes assured by creating answers to the wrong questions, whereas in fact quantum mechanics has nothing to say about the relationship between consciousness and matter. The new quantum holism feeds our obsessions and tells us we are a part of the non-living cosmic mind. In this way, traditional religions are being modernized. A mystical physics is basically a wrong understanding of Hindu and Buddhist philosophy.

The main argument against the quantum mind proposition is that quantum states would decohere too quickly to be relevant to neural processing. Possibly the scientist most often quoted in relation to this criticism is Max Tegmark. Based on his calculations, Tegmark concluded that quantum systems in the brain decohere quickly and cannot control brain function (Tegmark, 2000).

Since 2003, neuroscience and quantum physics have been growing together by examining two main topics. One of these is the problem of measurement in quantum mechanics. The measurement problem has brought many other still unanswered questions in its train. The other main topic of NeuroQuantology is quantum neurobiology: NeuroQuantology provides the motivation to break down this resistance and open a new door to quantum neurobiology (Tarlaci, 2010). Any new information that we have gained about consciousness and the brain will open up even bigger questions. If there is one thing we have learned from the course of science up until today, it is that in understanding completely our brain and consciousness, we cannot jump over our own shadows.

Steven Baughman said... I agree with your comments. Quantum mechanics is useful for understanding matter at very small sizes, but not for consciousness. Yet physics, not just quantum mechanics, but any physics which seeks to understand the essence of matter, is important for Hinduism and especially Advaita. "The importance of Advaita Vedanta is that it makes the claim that at the ultimate level, the universe will be seen to have as its origin, not discrete, multiple particles, but a single homogenous structure beyond time and space." Advaita Yoga and Quantum physics.
So science does have relevance for Hinduism, and can prove or disprove its theories.

Dr Anudeep Reddy K said... Physics, whether empirical or quantum, is limited by time and space. But Advaita is beyond that, not limited by time and space. The sum total of energy is constant; it is present in you, beyond your senses, mind and intellect, and it is the cause of the survival of all of those. When it leaves your body, the body dies, but not the energy. One should always first understand Advaita before speaking about it.
Density Matrix (redirected from Density state)

Density matrix

A matrix which is constructed as the most general statistical description of the states of a many-particle quantum-mechanical system. The state of a quantum system is described by a normalized wave function ψ(x, t) [where x stands for all coordinates of the system, and t for the time], which satisfies the Schrödinger equation (1),

iℏ ∂ψ(x, t)/∂t = Hψ(x, t),

where H is the hamiltonian of the system, and ℏ is Planck's constant divided by 2π. Furthermore, ψ(x, t) may be expanded in terms of a complete orthonormal set {φ_n(x)}, as in Eq. (2),

ψ(x, t) = Σ_n a_n(t) φ_n(x).

Then, the density matrix is defined by Eq. (3),

ρ_mn(t) = a_m(t) a_n*(t),

and this density matrix describes a pure state. Examples of pure states are a beam of polarized electrons and the photons in a coherent beam emitted from a laser. See Laser, Quantum mechanics.

In quantum statistics, one deals with an ensemble of N systems which have the same hamiltonian. If the αth member of the ensemble is in the state ψ^α in Eq. (4),

ψ^α(x, t) = Σ_n a_n^α(t) φ_n(x),

the density matrix is defined as the ensemble average, Eq. (5),

ρ_mn(t) = (1/N) Σ_α a_m^α(t) a_n^α*(t).

In general, this density matrix describes a mixed state, for example, a beam of unpolarized electrons or the photons emitted from an incoherent source such as an incandescent lamp. The pure state is a special case of the mixed state when all members of the ensemble are in the same state. See Statistical mechanics.

Density Matrix

An operator by means of which it is possible to calculate the average value of any physical quantity in quantum statistical mechanics and, in particular, in quantum mechanics. A density matrix describes a system's state based on an incomplete set (incomplete in terms of quantum mechanics) of data on the system (see MIXED STATE).

density matrix [′den·səd·ē ′mā·triks] (quantum mechanics) A matrix ρ_mn describing an ensemble of quantum-mechanical systems in a representation based on an orthonormal set of functions φ_n; for any operator G with representation G_mn, the ensemble average of the expectation value of G is the trace of ρG.
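A small numerical illustration of the pure/mixed distinction and of the trace rule ⟨G⟩ = Tr(ρG) may be helpful; the states and observable below are chosen arbitrarily for the example and are not part of the original entry.

import numpy as np

# Pure state |psi> = (|0> + |1>)/sqrt(2): rho = |psi><psi|
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: an unpolarized 50/50 ensemble of |0> and |1>
rho_mixed = 0.5 * np.outer([1.0, 0.0], [1.0, 0.0]) + 0.5 * np.outer([0.0, 1.0], [0.0, 1.0])

# Ensemble average of an observable G is Tr(rho G)
G = np.array([[1.0, 0.0], [0.0, -1.0]])       # a Pauli-z-like observable
print(np.trace(rho_pure @ G))                 # 0.0
print(np.trace(rho_mixed @ G))                # 0.0

# The purity Tr(rho^2) distinguishes the two: 1 for a pure state, < 1 for a mixed one
print(np.trace(rho_pure @ rho_pure))          # 1.0
print(np.trace(rho_mixed @ rho_mixed))        # 0.5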
The celebrated phenomenon of Bloch oscillations (BOs) [1, 2] was originally proposed for electrons in crystals in the presence of homogeneous electric fields, which give rise to a potential that varies linearly in the field direction. After a long-lasting debate about the actual existence of BOs, see, e.g., refs [3, 4], rigorous upper bounds for the interband tunnelling rates could be established, and the effective Hamiltonians that lead to BOs and their frequency-domain counterpart, the Wannier-Stark ladder, could be justified, see, e.g., ref. [5] and references therein. In the early 1990s BOs were first observed experimentally in electrically-biased semiconductor superlattices using optical interband excitation with femtosecond laser pulses [6, 7]. A few years later, BOs were also realized for atoms in optical lattices [8] and for coupled waveguides [9]. This proves that BOs can be considered in a broader context as a fundamental effect that may occur in systems which support wave propagation in media with periodically-varying parameters and with a linear potential.

The physical understanding of BOs comes from the band-gap structure of the underlying periodic linear potential and can be viewed as a Bloch mode "motion" along the dispersion curve [1, 10]. In addition to the existence of the band-gap structure, such an interpretation requires the linear gradient to be small (otherwise it cannot be accounted for in terms of the adiabatic theorem and must be considered at leading order). Thus, by its nature, BO is a linear phenomenon, and it is a common belief that nonlinearity plays a destructive role which makes it impossible to observe BOs at long times (or propagation distances, depending on the particular physical context) even without dephasing processes. This was first reported in ref. [11] and later confirmed experimentally in optics using arrays of Kerr-type waveguides [12], and furthermore in Bose-Einstein condensates (BECs) loaded in optical lattices [13, 14, 15], where only a few oscillations were detected. The main reason which suppresses long-living nonlinear BOs was discussed in ref. [16] and originates from the modulation instability of Bloch waves at different edges of the band gap, where the effective mass (effective dispersion) changes its sign: Bloch waves are stable and unstable at the opposite edges provided the nonlinearity remains constant [17]. This understanding has led to several suggestions of rather complicated spatial [18, 19] and temporal [20, 21] nonlinear management techniques which could support long-lived BOs. All such proposals are based on the idea of changing the sign of the effective nonlinearity synchronized with the change of the sign of the effective mass, in a way that their product remains of the same sign during the evolution. This requires controlled modification of the system's properties.

Considering BOs as a broader concept, namely as the periodic evolution of systems obeying a discrete translational invariance and being subject to a linear gradient, they exist even in strongly nonlinear systems and in the presence of an arbitrarily large gradient, if the system is exactly integrable. This has been obtained analytically [22, 23] and numerically [24] for integrable discrete nonlinear Schrödinger equations (known also as the Ablowitz-Ladik model [25]), as well as for its integrable generalizations [26].
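For orientation, the standard semiclassical relations behind this "motion along the dispersion curve" picture (textbook material, not specific to the present work) can be written as

$$\hbar\dot q = F,\qquad \dot x=\frac{1}{\hbar}\frac{\partial\varepsilon(q)}{\partial q},\qquad \omega_B=\frac{Fa}{\hbar},$$

where F is the constant force, a the lattice period and ε(q) the band. For a band of width Δ the real-space trajectory therefore oscillates with amplitude Δ/(2F) instead of accelerating indefinitely. In the dimensionless lattice model considered below this corresponds to the period T = 2π/γ and an excursion of order 2κ/γ quoted later in the text.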
While the mathematical reason for the exact periodic motion of such systems consists in their exact integrability, the physical explanation relies on the property of a specific nonlocal nonlinearity in such models which leads to stable Bloch modes at both band edges [16]. In the integrable models the phenomenon of BOs is not restricted to small amplitudes of the linear potential. When the potential strength becomes large enough, the pulses become practically localized in space, since the amplitude of BOs can become less than the width of the pulse. A similar non-spreading behavior of wave packets can also be observed in non-integrable models at large nonlinearities [27]. On the other hand, when the strength of the nonlinearity increases, the non-integrable models show other types of behavior like the transient phenomenon of single-site trapping followed by explosive spreading and subdiffusion of the wave packet [28].

So far, no non-integrable systems with a constant nonlinearity coefficient, which support long-living BOs, have been proposed. Here, we fill this gap and introduce and analyze a physically relevant non-integrable model which does show BO dynamics persisting for long times at considerable nonlinearities and linear gradients. As is demonstrated below, balance between the effects of the nonlinearity and the dispersion can be achieved in systems that contain an additional dimension besides the dimension corresponding to the direction of the linear gradient. This balance may result in the existence of very stable oscillatory motion of discrete-continuous soliton-like wave packets.

Model and Linear Dynamics

We consider an array of coupled one-dimensional nonlinear waveguides which are subject to a linear potential. Thus our system is effectively two-dimensional, with one discrete and one continuous spatial variable. It is described by the coupled nonlinear Schrödinger equations, which in dimensionless variables read
$$i\frac{\partial {u}_{n}}{\partial t}+\alpha \frac{{\partial }^{2}{u}_{n}}{\partial {x}^{2}}+\kappa ({u}_{n-1}+{u}_{n+1}-2{u}_{n})+\gamma n{u}_{n}+g{|{u}_{n}|}^{2}{u}_{n}=0.$$
Here u_n(x, t) is the nonlinear field, κ is the coupling between neighbouring waveguides, α is the continuous diffraction coefficient, γ is the strength of the linear gradient, and g is the nonlinearity, which is considered to be attractive (or focusing, depending on the physical context), i.e., g ≥ 0. Equation (1) describes the light propagation in an array of coupled optical fibers [29] in the presence of a linear gradient of the waveguide effective index. In this case, u_n is the dimensionless electric field, the evolution coordinate t needs to be interpreted as the spatial coordinate along the fiber, and x will be the normalized retarded time. Thus Eq. (1) properly describes the evolution of an optical pulse experiencing continuous dispersion together with discrete diffraction in the presence of Kerr nonlinearity and a linear gradient of the waveguide effective index.

The model defined by Eq. (1) is even more generic. In addition, it also describes an array of coupled quasi-one-dimensional BECs, where u_n stands for the dimensionless order parameter in the n-th trap minimum. In the experiment the respective traps can be created by deep periodic optical lattices, see, e.g., refs [30, 31]. In such a setting the discrete index n numbers the successive minima of the optical lattice and κ characterizes coupling due to the tunneling of atoms between neighboring minima.
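A minimal numerical sketch of Eq. (1) may help the reader reproduce the qualitative behavior discussed below; it is not the authors' code, and it uses a coarser grid and a narrower initial envelope than the figures (parameters loosely follow α = 0.5, κ = 2, γ = 0.1, g = 0.9). The x-derivative is evaluated spectrally on a periodic window and the time stepping is a plain RK4.

import numpy as np

# Illustrative RK4 integration of Eq. (1); u has shape (N_sites, N_x), x is periodic.
alpha, kappa, gamma, g = 0.5, 2.0, 0.1, 0.9
N, Nx, Lx = 121, 256, 80.0
n = np.arange(N) - N // 2
x = np.linspace(-Lx / 2, Lx / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=Lx / Nx)

A = 0.15 * np.exp(-(n / 10.0) ** 2)                 # envelope along the array (narrower than w = 100)
u = (A[:, None] / np.cosh(A[:, None] * x[None, :])).astype(complex)   # sech profile, cf. Eq. (3)

def rhs(u):
    d2x = np.fft.ifft(-(k ** 2) * np.fft.fft(u, axis=1), axis=1)       # u_xx
    coup = np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) - 2 * u      # discrete diffraction
    coup[0] -= u[-1]; coup[-1] -= u[0]                                  # open (non-periodic) array ends
    return 1j * (alpha * d2x + kappa * coup + gamma * n[:, None] * u + g * np.abs(u) ** 2 * u)

dt = 0.005
for _ in range(int(2 * np.pi / gamma / dt)):        # integrate one Bloch period, T = 2*pi/gamma
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.sum(np.abs(u) ** 2) * (Lx / Nx))           # norm P, conserved up to integration error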
Such a model can be viewed as an extension of a previous study [32] of two BECs created by a double-well potential to the case of a trap created by an optical lattice.

Since BOs were discovered as, and are usually considered to be, a purely linear phenomenon, where in one-dimensional settings the nonlinearity plays a destructive role, we start with the linear case and set α = 0.5 and g = 0. In this limit the Cauchy problem defined by Eq. (1), supplemented by the initial condition \({u}_{m}^{\mathrm{(0)}}(x)={u}_{m}(x,t=\mathrm{0)}\), can readily be solved explicitly (the tilde denotes the linear limit):
$${\tilde{u}}_{n}=\frac{1-i}{2\sqrt{\pi t}}\sum _{m}\,{(-\mathrm{1)}}^{n-m}{e}^{i(\gamma m-2\kappa )t}{J}_{n-m}(\frac{2\kappa }{\gamma }){\int }_{-\infty }^{\infty }\,\exp [i\frac{{(x-\xi )}^{2}}{2t}]{u}_{m}^{\mathrm{(0)}}(\xi )d\xi ,$$
where J_n(·) is the n-th order Bessel function. For the sake of definiteness, in all numerical simulations presented below (except Fig. 6, as explicitly indicated there) we consider initial conditions having a Gaussian envelope with respect to n and sech-like profiles with respect to x:
$${u}_{n}^{\mathrm{(0)}}(x)=\frac{A(n)}{\cosh [A(n)x]},\,{\rm{where}}\,A(n)={a}_{0}{e}^{-{n}^{2}/{w}^{2}},$$
where w is the characteristic width of the initial wave packet along the n-direction, and a_0 characterizes the wave-packet amplitude.

In Fig. 1 we illustrate the dynamical evolution of the linear solution according to Eqs. (2) and (3). Panel (a) illustrates the oscillations of the wave envelope along the discrete coordinate with the amplitude and the frequency given by the analytic formulas. The significant decrease of the intensity of the field is clearly seen and is explained by the spreading of the envelope along the x coordinate [see Fig. 1(b)].

Figure 1: Propagation of a wave packet in a linear system, i.e., for g = 0. Panels (a) and (b) show the evolution of the wave packet in the n−t and in the x−t planes, respectively. The parameters are chosen as α = 0.5, κ = 2 and γ = 0.1. The initial condition is given by Eq. (3) with a_0 = 0.15 and w = 100. The initial condition is chosen to be wide along n to make this case close to typical BOs. Hereafter we display the modulus of the field, |u_n|.

In order to characterize the dynamics of the wave packet in both the linear and (below) nonlinear cases, we define the average of an arbitrary function f_n(x, t) by the formula \(\langle f\rangle =\frac{1}{P}{\int }_{-\infty }^{\infty }\,{\sum }_{n}\,{f}_{n}(x,t){|{u}_{n}(x,t)|}^{2}dx\), where \(P={\sum }_{n}\,{\int }_{-\infty }^{\infty }\,{|{u}_{n}(x,t)|}^{2}dx\). This allows us to explore the average positions of the wave along the x and n directions, i.e., 〈x〉 and 〈n〉, respectively. Furthermore, we define the deformation parameter characterizing the "combined" changes of the wave-packet width during the evolution:
$${\rm{\Delta }}(t)=\sqrt{{[N(t)-N\mathrm{(0)]}}^{2}+{[X(t)-X(0)]}^{2}}.$$
Here \(N(t)=\sqrt{\langle {(n-\langle n\rangle )}^{2}\rangle }\) and \(X(t)=\sqrt{\langle {(x-\langle x\rangle )}^{2}\rangle }\) are the average widths of the wave packet in the n and x directions. If deformations with respect to n and x are strongly asymmetric, the parameter Δ is an estimate of the largest deformation of the wave envelope. For the ideal case of totally robust BOs, Δ(t) would be time independent. A growing or decreasing deformation parameter Δ(t) corresponds to increasing deformations of the initial wave packet.
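The diagnostics just defined are straightforward to evaluate numerically. The helper below is a sketch consistent with the definitions above (again, not the authors' code); it can be applied to the field array produced by the integration sketch given earlier.

import numpy as np

# Given a field u of shape (N_sites, N_x) on grids n and x (spacing dx), compute the
# averages <n>, <x>, the widths N(t), X(t), and the deformation parameter of Eq. (4).
def moments(u, n, x, dx):
    w = np.abs(u) ** 2
    P = w.sum() * dx
    n_mean = (n[:, None] * w).sum() * dx / P
    x_mean = (x[None, :] * w).sum() * dx / P
    N_w = np.sqrt(((n[:, None] - n_mean) ** 2 * w).sum() * dx / P)
    X_w = np.sqrt(((x[None, :] - x_mean) ** 2 * w).sum() * dx / P)
    return n_mean, x_mean, N_w, X_w

def deformation(N_t, X_t, N_0, X_0):
    return np.hypot(N_t - N_0, X_t - X_0)   # Eq. (4)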
The deformation parameter is shown in Fig. 2(a,b). In particular, in full agreement with the evolution shown in Fig. 1, the red dashed line in Fig. 2(a) illustrates the very rapid increase of Δ(t) in the linear case, corresponding to the absence of long-lived BOs in this regime.

Figure 2: (a) Temporal evolution of the overall spread of the wave packet Δ(t) for the linear case pertaining to Fig. 1 and for the nonlinear case shown in Fig. 3(a,b). (b) Δ(t = 3000), i.e., the spread after a long evolution time, as a function of the nonlinearity g for different gradient coefficients: γ = 0.01, 0.1, and 0.5. For (a,b) we set α = 0.5. (c) Optimal values of the nonlinear coefficient g for various values of the continuous diffraction coefficient α. The gradient strength is set to γ = 0.1.

When a focusing nonlinearity is present, an obvious expectation is that it may compensate the diffraction, leading to a slower spreading of the wave packet along the x-direction or eventually even to stationary propagation. Thus the nonlinearity would prevent the decay of the beam amplitude. On the other hand, one also expects the destruction of the BOs in the n-direction in the presence of a nonlinearity. However, since the reasons for the decay of BOs in (weakly) nonlinear one-dimensional systems arise from the change of the effective diffraction (the effective mass, in solid-state terminology) when a beam moves between the two opposite edges of a band, one may expect that adding an additional direction may weaken this effect and thus stabilize BOs.

Indeed, the dispersion relation associated with the linear case of Eq. (1) at g = 0 is obtained by the ansatz \({u}_{n}(x,t)\sim {e}^{i(\omega t+qn+kx)}\) and reads \(\omega =-{k}^{2}/2+4\kappa \,{\sin }^{2}\,(q/2)\). Thus near the center and the boundary of the Brillouin zone, i.e., at \(|q|\ll 1\) and \(q=\pi +\tilde{q}\) with \(|\tilde{q}|\ll 1\), respectively, the dispersion relation is given by \(\omega \approx -{k}^{2}/2+\kappa {q}^{2}\) and by \(\omega \approx -{k}^{2}/2+4\kappa -\kappa {\tilde{q}}^{2}\). So, at the boundary of the Brillouin zone, for a focusing nonlinearity the wave packet will be compressed along both directions, since both curvatures are negative. In a continuous homogeneous medium with the parabolic dispersion relation −k²/2 − κq² and Kerr nonlinearity there exists only the unstable Townes soliton, and hence the discreteness preventing the collapse plays a stabilizing role. On the other hand, at the center of the Brillouin zone the curvatures along the k and q directions (\({\partial }_{k}^{2}\omega \) and \({\partial }_{q}^{2}\omega \), correspondingly) are of opposite signs. The one associated with the discrete variable is positive and results in an effective dispersion tending to destroy the localized wave packet. The amplitude of this dispersive wave packet, however, does not decay as fast as it would in the x-independent case, since now the compression of the wave packet along the x-direction may compensate the decay of the wave-packet amplitude due to the dispersion.

These simple qualitative arguments allow us to suggest that the interplay of the nonlinearity with the discreteness of the model Eq. (1) may enhance the stability of nonlinear BOs, allowing them to become long-lived. Such stabilization is indeed shown in Fig. 3(a,b), which displays the evolution of the wave packet for the same input parameters as in Fig. 1, except that now a nonlinear coefficient of g = 0.9 is taken into account.
Comparing Fig. 3(a,b) to Fig. 1(a,b) clearly demonstrates that the nonlinearity on the one hand prevents the spreading of the wave packet in the x-direction and on the other hand leads to the existence of long-lived BOs in the n-direction. Such evolution can qualitatively be understood to arise from the compensation effect explained above. For our parameters, corresponding to the strongly nonlinear case, the period of the BOs is still very well approximated by the formula T = 2π/γ derived for the linear case. In particular, for γ = 0.1 the obtained period of the oscillations of the nonlinear wave packet is ≈ 62.8, which is very close to the oscillation period of the linear case. The dynamics displayed in Fig. 3 corresponds to almost 50 BO periods, over which the wave packet is not significantly distorted.

Figure 3: (a,b) Long-time evolution of the wave packet in a nonlinear system in the n−t and x−t planes, respectively. The system parameters are the same as in Fig. 1 except for the optimal nonlinearity of g = 0.9 considered here. (c,d) Destruction of the robustness of the BOs in the presence of a nonlinearity g = 1.6, much stronger than the optimal one. The parameters are chosen as α = 0.5, κ = 2 and γ = 0.1. The initial condition is given by Eq. (3) with a_0 = 0.15 and w = 100. All other parameters are identical to those in Fig. 1 for both cases.

The robustness of the BOs is also confirmed by Fig. 2(a), which shows that in the presence of the nonlinearity the deformation parameter Δ(t) grows with time very slowly. The long-lived BOs require a certain balance of the system parameters to achieve the underlying compensation between diffraction and focusing. The competing effects of the nonlinearity, the strength of the linear potential and the dispersion are analyzed in Fig. 2(b), where we study the spread of the wave packet Δ(t) after a sufficiently long evolution time, more specifically at t = 3000, for fixed linear gradients γ as a function of the strength of the nonlinearity g. For each of the studied γ we observe clear minima at respective values of the nonlinearity. These minima correspond to the optimal relation between the nonlinearity and the linear potential resulting in robust BOs.

In order to obtain direct numerical proof of the main result of our paper – the stabilizing effect of the additional dimension – we performed a study of the BOs at different values of the diffraction coefficient α. Indeed, the limit α → 0, meaning negligible diffraction, returns us to an effectively 1D discrete lattice. Since in this limit the nonlinearity has a destructive effect on BOs, it is natural to expect that the optimal parameter Δ(t) for smaller α is achieved at smaller nonlinearity g, with g → 0 at α → 0. This is exactly what we observe in Fig. 2(c). We observe that increasing α results in an almost linear increase of the optimal g, clearly demonstrating that the most robust oscillating regime is achieved when the nonlinearity is balanced by the dispersion. This phenomenon is known to be at the basis of soliton formation in nonlinear systems, which allows us to conjecture that our oscillating object can be viewed as a soliton-like wave packet [see also Eq. (6) and the related discussion].

Let us now take a closer look at the compromise between the X-component and the N-component of the deformation parameter Δ(t). To this end we fix the system parameters optimized for α = 0.5 and compare the dynamics of Δ(t), N(t) and X(t) for this optimal case with the cases α = 0.4 and α = 0.6, i.e.
for the evolution at non-optimal diffraction. The results are presented in Fig. 4. Blue curves show the indicators pertaining to the optimal value of the nonlinear coefficient, g = 0.9, for the particular value α = 0.5. Two different mechanisms affect each of the components. By increasing the diffraction coefficient to α = 0.6 while maintaining the nonlinear coefficient g = 0.9, we observe the expected reduction of the wave-packet width N in the "discrete" direction with a simultaneous strong increase of the wave-packet width X along the continuous direction. Correspondingly, decreasing the diffraction coefficient to α = 0.4 leads to an improvement of the mean-square width X with a simultaneous increase of N. We also note that although the curves for non-optimal α do not represent optimal cases for the nonlinear regime, they still pertain to very robust propagation over a long distance.

Figure 4: (a) Temporal evolution of the overall spread of the wave packet Δ(t); (b) N-component of the overall spread Δ(t); (c) X-component of the overall spread Δ(t). System parameters are set to γ = 0.1 and g = 0.9.

Figure 5 shows the evolution of the deformation parameter as a function of the evolution time for five different values of the nonlinearity parameter g. As can clearly be seen, all the oscillations are synchronous, so the results are mutually consistent over the evolution to a large degree. However, a closer look at the red (g = 1) and blue (g = 0.9, the optimum) curves shows that at some temporal snapshots the red curve outperforms the blue one. This means that, strictly speaking, we do not have a single point as the optimum but rather a small parameter range in which the system shows essentially optimal behavior. For example, the positions of the optimal points in Fig. 2(c) would fluctuate only slightly if the integration were stopped at a time other than t = 3000. For the sake of experimental realization, having a broad range of parameters with close-to-optimal performance is rather advantageous.

Figure 5: Temporal evolution of the overall spread of the wave packet Δ for different values of the nonlinearity coefficient g with the continuous diffraction coefficient α = 0.5. The inset shows in detail the evolution from t = 2500 to t = 3000. The gradient strength is set to γ = 0.1.

We have also performed additional simulations with an input different from that defined by Eq. (3), considering the product of two Gaussians,
$$u_n^{(0)}(x)=a_0\,e^{-n^2/w^2}\,e^{-x^2/w_x^2},$$
where w is the characteristic width of the initial wave packet along the n-direction, w_x is the characteristic width along the x-direction, and a_0 characterizes the wave-packet amplitude. Figure 6 illustrates the evolution with such an input and demonstrates the convergence to a robust propagation regime after initial radiation emission.

Figure 6: (a,b) Long-time evolution of the wave packet in a nonlinear system in the n−t and x−t planes, respectively. The system parameters are the same as in Fig. 1 except that the nonlinearity parameter is g = 1. The initial condition is given by Eq. (5) with a_0 = 0.15, w_x = 10 and w = 100.

Except for the smallest considered gradient, γ = 0.01, we obtain quite broad minima of Δ(t = 3000) as a function of g, which demonstrates a remarkable robustness of the nonlinear stabilization with respect to changes of the nonlinearity.
Returning to Fig. 3, an increase of the nonlinearity above the optimal value, however, may lead to a breakup of the wave packet with a simultaneous compression in the x-direction. Such a situation is shown in Fig. 3(c,d), with the nonlinear parameter taken to be g = 1.6, which is much higher than the optimal value g = 0.9 for the given gradient strength.

The results reported up to here were obtained for relatively moderate γ, for which the qualitative description could be based on the band-gap structure of the spectrum resulting from the underlying linear lattice. Meanwhile, as an alternative view of BOs, the linear-gradient term in a lattice can be transformed into periodically varying coupling coefficients by a simple gauge transformation, i.e., by the ansatz \(u_n(x,t)\propto \exp (i\gamma nt)\) [23]. Since such a transformation is not directly related to the zone spectrum, it is natural to explore the possibility of obtaining long-lived nonlinear BOs in the case of a relatively large gradient. Figure 7 clearly demonstrates that it is indeed possible to achieve long-lived BOs in the case of a considerable gradient of γ = 3.

Figure 7: Panels (a,b) show the evolution of the wave packet for a large gradient of γ = 3 in the n−t and x−t planes, respectively. The shape of the input is as in Eq. (3) and the simulation parameters are κ = 2 and a_0 = 0.25. The insets demonstrate the stable dynamics within a few periods, from t = 380 to t = 400. The upper insets (a1, b1) are obtained from direct numerical solutions of Eq. (1) and simply zoom in on the dynamics shown in (a,b), respectively. The lower insets (a2, b2) visualize the approximate analytical solution, Eq. (6), on the same intervals.

As mentioned above, the wave packet with the optimized parameters, whose dynamics is shown in Fig. 7 and which manifests remarkable stability, can be viewed as a hybrid of a Bloch-oscillating wave and a quasi-soliton. This interpretation is supported by an approximate analytical solution of Eq. (1). Such an approximate solution is obtained by applying the gauge transformation mentioned above to a wave packet that is smooth as a function of n, allowing the differences u_{n±1} − u_n to be approximated by their Taylor expansion. It reads
$${u}_{n}=\frac{1}{\sqrt{g}}\,\exp [in\gamma t-2i\kappa (t+\frac{\sin (\gamma t)}{\gamma })]\frac{A(n+{n}_{0}(t))\,\exp [\tfrac{i}{2}A(n+{n}_{0}(t))t]}{\cosh \,[A(n+{n}_{0}(t))x]},$$
where \({n}_{0}(t)=(2\kappa /\gamma )[1-\cos (\gamma t)]\) defines the location of the center of the wave packet and A(n) describes the wave-packet envelope.

Comparing the approximate analytical solution Eq. (6) of Eq. (1) [Fig. 7(a2,b2)] with the numerical results [Fig. 7(a1,b1)] reveals an excellent overlap for a significant number of oscillation periods. For example, an integral characteristic such as the average width of the oscillations predicted by Eq. (6) differs only by about 5% up to t = 100 (corresponding to about 50 oscillation periods), and the precision drops to a difference of about 40% at t = 400. The position of the wave packet is slightly shifted from the center in the course of the evolution in the n-plane, see Fig. 7(a1), but remains fixed in the x-plane, see Fig. 7(b1). The oscillation period predicted by Eq. (6) is similar to that obtained from the direct numerical solution of Eq. (1), as the comparison of the upper and lower insets of Fig. 7(a) demonstrates.
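The gauge transformation invoked above is worth spelling out (a standard manipulation, reconstructed here for the reader rather than quoted from the paper). Substituting \(u_n = e^{i\gamma n t} v_n\) into Eq. (1), the term \(\gamma n u_n\) is cancelled by the time derivative of the phase factor, leaving

$$i\frac{\partial v_n}{\partial t}+\alpha\frac{\partial^2 v_n}{\partial x^2}+\kappa\left(e^{-i\gamma t}v_{n-1}+e^{i\gamma t}v_{n+1}-2v_n\right)+g|v_n|^2 v_n=0,$$

i.e., the linear gradient is traded for coupling coefficients whose phases oscillate with the Bloch period T = 2π/γ, with no reference to the band-gap structure.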
To conclude, we have shown that nonlinearity is able to support Bloch oscillations when the system is effectively two-dimensional, being discrete in one dimension and continuous in the orthogonal direction. We have discovered that there exists an optimal relation between the nonlinearity and the linear gradient strength allowing for extremely long-lived Bloch oscillations (persisting for dozens of oscillation periods with a relative deformation of the pulse shape of only a few percent). Such oscillations can be observed even for moderate nonlinearities and large enough values of the linear potential, when the band-gap picture of the underlying linear lattice is no longer applicable. The robust evolution of wave packets in this regime is described by an approximate analytical formula in excellent agreement with the direct numerical results. The formula describes an object with hybrid features of a typical Bloch-oscillating wave and a soliton. For future investigations, it would be interesting to analyze a number of points which have not been in the focus of the present study, e.g., the interplay between dispersive spreading and the decay of the initial pulse into a set of quasi-soliton pulses propagating along the continuous coordinate, the possibility of observing chaotic regimes, and the asymptotic regimes that can be achieved.
Atomic Orbitals — Background

What exactly do the images show?

In quantum mechanics, the state of any physical system (in this case, a single electron bound to a charged nucleus) is completely described by its wave function. The wave function is obtained as a solution of a differential equation known as the Schrödinger equation. The set of possible orbital states of the electron (the "orbitals," as they are commonly called) is simply the set of solutions to the Schrödinger equation for the given system. On the quantum scale, a particle does not usually have anything like a precisely determined position in space. Rather, from the square of the wave function, we obtain a probability density function. Simply put, this function associates with every location in space a certain probability that we may find the particle in that location, at any instant in time. The behavior of the particle is not completely random: in some regions of space, it is much more likely to show up than in others.

What is shown in the images are isosurfaces of the probability density functions for each orbital state. That is, every point on the surface of the "blobs" we see has equal probability of the electron being observed at that point in space. The surfaces are determined such that half of the total probability is enclosed by them. In other words, if we could precisely measure the position of the electron at any point in time, about half of the time we would find it to be inside the volume enclosed by the "blobs," and half of the time we would find it at a position outside.

How were the images generated?

They were generated using a custom-made ray tracer written in the Ada programming language. It is based upon the analytical solution of the Schrödinger equation for the special case of a "hydrogen-like atom," i.e. an atom with a single electron orbiting the nucleus. Implementing a ray tracer then boils down to two main problems: find the point where a ray intersects with the surface we want to display, and determine the normal of the surface at that point. How this is done exactly depends on what kind of mathematical description we have for the object we are rendering. In this case, the description is the isosurface of a real-valued function, and we treat this function as a "black box" with no other (known or assumed) properties besides that we can compute its value at any desired point in 3-dimensional space. The program then applies some rather naïve numerical methods to solve the aforementioned two problems.

Once we have that in place, building a ray tracer is no rocket science: for each pixel, we apply the Phong reflection model to determine its color based on the surface normal, the position of the light source and a couple of configurable parameters. In order to give the scene a "smooth" and natural look, some kind of anti-aliasing is required. My implementation uses a simple adaptive supersampling technique, which gives acceptable visual results at a relatively low computational cost. A description of the exact algorithm can be found here. Apart from the basics, the software supports two additional features, both of which are pretty straightforward to implement once you have the basics in place.
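The two sub-problems described above (finding the ray/isosurface intersection and estimating the surface normal numerically) can be sketched in a few lines. The following Python fragment is not the author's Ada code, just a rough illustration under simplifying assumptions: it uses the hydrogen-like 2p_z density with normalization constants dropped, an arbitrary isovalue rather than the half-probability criterion, and a naïve ray march refined by bisection.

import numpy as np

# Hydrogen-like 2p_z probability density (atomic units; constants dropped since only
# the shape of the isosurface matters for this sketch).
def density(p):
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    return z * z * np.exp(-r)

ISO = 1e-3   # hypothetical isovalue; the real images choose it so half the probability is enclosed

def hit_isosurface(origin, direction, t_max=30.0, dt=0.05):
    """March along the ray until the density crosses ISO, then refine by bisection."""
    t, prev = 0.0, density(origin)
    while t < t_max:
        t += dt
        cur = density(origin + t * direction)
        if (prev - ISO) * (cur - ISO) < 0:            # sign change: surface crossed
            lo, hi = t - dt, t
            for _ in range(40):                        # bisection refinement
                mid = 0.5 * (lo + hi)
                if (density(origin + lo * direction) - ISO) * (density(origin + mid * direction) - ISO) < 0:
                    hi = mid
                else:
                    lo = mid
            return origin + 0.5 * (lo + hi) * direction
        prev = cur
    return None

def normal(p, h=1e-4):
    """Numerical gradient of the density gives the (unnormalized) surface normal."""
    g = np.array([density(p + h * e) - density(p - h * e) for e in np.eye(3)]) / (2 * h)
    return g / np.linalg.norm(g)

p = hit_isosurface(np.array([0.0, -20.0, 3.0]), np.array([0.0, 1.0, 0.0]))
print(p, normal(p) if p is not None else None)

Everything after this point (Phong shading, adaptive supersampling) operates on the intersection point and normal returned here.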
What was the motivation for the project?

It was a pretty instructive exercise, and it was a fun thing to do. Besides, I consider the images to be of relatively high quality (at least, compared to the simple means with which they were generated), so looking at them gives me a certain feeling of achievement. It may not be the most useful achievement, but it's still an achievement. :-) From a practical perspective, I'm pretty sure the same results could have been obtained using POV-Ray or other existing ray tracers, with much less effort.

Is the software available for download?

Currently, no. I'd love to publish it, but for now, I have chosen not to. The main reasons for this decision are the rather mediocre quality of the code as well as my lack of time to give support to users and answer questions about the software. I hope that at least the code quality will improve some day, so stay tuned. If you have a certain application in mind and need to generate customized images (such as other colors or resolutions), feel free to contact me, and I will try to provide the images you need, when time allows.

How are the images licensed?

The images provided here are in the public domain. Feel free to use them for whatever you like, for fun and profit. However, if you use the images for any purpose and you find them useful (or even just delightful), I'll be very happy if you drop me a note.

Author: Jan Andres
Laplace operator
From Wikipedia, the free encyclopedia

The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation Δf = 0 are called harmonic functions and represent the possible gravitational potentials in regions of vacuum.

The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation in quantum mechanics describes the wave function. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology.

The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence (\(\nabla\cdot\)) of the gradient (\(\nabla f\)). Thus if \(f\) is a twice-differentiable real-valued function, then the Laplacian of \(f\) is the real-valued function defined by:
$$\Delta f = \nabla^2 f = \nabla\cdot\nabla f,$$
where the latter notations derive from formally writing \(\nabla = \left(\tfrac{\partial}{\partial x_1},\ldots,\tfrac{\partial}{\partial x_n}\right)\). Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates x_i:
$$\Delta f = \sum_{i=1}^{n}\frac{\partial^2 f}{\partial x_i^2}.$$
As a second-order differential operator, the Laplace operator maps C^k functions to C^(k−2) functions for k ≥ 2. It is a linear operator Δ : C^k(R^n) → C^(k−2)(R^n), or more generally, an operator Δ : C^k(Ω) → C^(k−2)(Ω) for any open set Ω ⊆ R^n.

In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium.[1] Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V of any smooth region V is zero, provided there is no source or sink within V:
$$\int_{\partial V}\nabla u\cdot\mathbf{n}\,dS = 0,$$
where n is the outward unit normal to the boundary. Since this holds for all smooth regions V, one can show that it implies:
$$\Delta u = 0.$$
The left-hand side of this equation is the Laplace operator, and the entire equation Δu = 0 is known as Laplace's equation. Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion.

The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation. This interpretation of the Laplacian is also explained by the following fact about averages. Given a twice continuously differentiable function f, a point p and a real number h > 0, let \(\overline{f}_B(p,h)\) be the average value of f over the ball with radius h centered at p, and \(\overline{f}_S(p,h)\) be the average value of f over the sphere (the boundary of a ball) with radius h centered at p. Then we have:[2]
$$\overline{f}_B(p,h) = f(p) + \frac{\Delta f(p)}{2(n+2)}\,h^2 + o(h^2), \qquad \overline{f}_S(p,h) = f(p) + \frac{\Delta f(p)}{2n}\,h^2 + o(h^2).$$

Density associated with a potential

If φ denotes the electrostatic potential associated to a charge distribution q, then the charge distribution itself is given by the negative of the Laplacian of φ:
$$q = -\varepsilon_0\,\Delta\varphi,$$
where ε_0 is the electric constant. This is a consequence of Gauss's law.
Indeed, if V is any smooth region with boundary ∂V, then by Gauss's law the flux of the electrostatic field E across the boundary is proportional to the charge enclosed:
$$\int_{\partial V} \mathbf{E} \cdot \mathbf{n}\, dS = \int_V \operatorname{div} \mathbf{E}\, dV = \frac{1}{\varepsilon_0} \int_V q\, dV,$$
where the first equality is due to the divergence theorem. Since the electrostatic field is the (negative) gradient of the potential, this gives:
$$-\int_V \operatorname{div}(\nabla \varphi)\, dV = \frac{1}{\varepsilon_0} \int_V q\, dV.$$
Since this holds for all regions V, we must have
$$\Delta \varphi = -\frac{q}{\varepsilon_0}.$$

Energy minimization

Another motivation for the Laplacian appearing in physics is that solutions to Δf = 0 in a region U are functions that make the Dirichlet energy functional stationary:
$$E(f) = \frac{1}{2} \int_U |\nabla f|^2\, dx.$$
To see this, suppose f : U → R is a function, and u : U → R is a function that vanishes on the boundary of U. Then:
$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0} E(f + \varepsilon u) = \int_U \nabla f \cdot \nabla u\, dx = -\int_U u\, \Delta f\, dx,$$
where the last equality follows using Green's first identity. This calculation shows that if Δf = 0, then E is stationary around f. Conversely, if E is stationary around f, then Δf = 0 by the fundamental lemma of calculus of variations.

Coordinate expressions

Two dimensions

In Cartesian coordinates,
$$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.$$
In polar coordinates,
$$\Delta f = \frac{1}{r} \frac{\partial}{\partial r}\!\left( r \frac{\partial f}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 f}{\partial \theta^2},$$
where r represents the radial distance and θ the angle.

Three dimensions

In Cartesian coordinates,
$$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$
In cylindrical coordinates,
$$\Delta f = \frac{1}{\rho} \frac{\partial}{\partial \rho}\!\left( \rho \frac{\partial f}{\partial \rho} \right) + \frac{1}{\rho^2} \frac{\partial^2 f}{\partial \varphi^2} + \frac{\partial^2 f}{\partial z^2},$$
where ρ represents the radial distance, φ the azimuth angle and z the height.
In spherical coordinates:
$$\Delta f = \frac{1}{r^2} \frac{\partial}{\partial r}\!\left( r^2 \frac{\partial f}{\partial r} \right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\!\left( \sin\theta \frac{\partial f}{\partial \theta} \right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2},$$
where φ represents the azimuthal angle and θ the zenith angle or co-latitude.
In general curvilinear coordinates (ξ1, ξ2, ξ3):
$$\Delta = g^{mn}\!\left( \frac{\partial^2}{\partial \xi^m \partial \xi^n} - \Gamma^l_{mn} \frac{\partial}{\partial \xi^l} \right),$$
where summation over the repeated indices is implied, g^{mn} is the inverse metric tensor and Γ^l_{mn} are the Christoffel symbols for the selected coordinates.

N dimensions

In arbitrary curvilinear coordinates in N dimensions (ξ1, …, ξN), we can write the Laplacian in terms of the inverse metric tensor g^{mn},
$$\Delta f = \frac{1}{\sqrt{|\det g|}} \frac{\partial}{\partial \xi^m}\!\left( \sqrt{|\det g|}\; g^{mn} \frac{\partial f}{\partial \xi^n} \right),$$
from the Voss-Weyl formula[3] for the divergence.
In spherical coordinates in N dimensions, with the parametrization x = rθ ∈ R^N, with r representing a positive real radius and θ an element of the unit sphere S^{N−1},
$$\Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{N-1}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2}\, \Delta_{S^{N-1}} f,$$
where Δ_{S^{N−1}} is the Laplace–Beltrami operator on the (N − 1)-sphere, known as the spherical Laplacian. The two radial derivative terms can be equivalently rewritten as:
$$\frac{1}{r^{N-1}} \frac{\partial}{\partial r}\!\left( r^{N-1} \frac{\partial f}{\partial r} \right).$$
As a consequence, the spherical Laplacian of a function defined on S^{N−1} ⊂ R^N can be computed as the ordinary Laplacian of the function extended to R^N∖{0} so that it is constant along rays, i.e., homogeneous of degree zero.

Euclidean invariance

The Laplacian is invariant under all Euclidean transformations: rotations and translations. In two dimensions, for example, this means that:
$$\Delta\big( f(x\cos\theta - y\sin\theta + a,\; x\sin\theta + y\cos\theta + b) \big) = (\Delta f)(x\cos\theta - y\sin\theta + a,\; x\sin\theta + y\cos\theta + b)$$
for all θ, a, and b. In arbitrary dimensions,
$$\Delta (f \circ \rho) = (\Delta f) \circ \rho$$
whenever ρ is a rotation, and likewise:
$$\Delta (f \circ \tau) = (\Delta f) \circ \tau$$
whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection.) In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator.
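The Cartesian expressions above translate directly into the familiar five-point finite-difference stencil. The following minimal sketch (added here for illustration, not part of the original article; plain NumPy, no special libraries) checks the stencil on a harmonic function and on a paraboloid, and makes explicit that the stencil is just 4/h² times the difference between the average over the four neighbours and the value at the centre, in line with the averaging property quoted earlier.

```python
import numpy as np

def laplacian_5pt(f, h):
    """Discrete Laplacian on interior grid points via the 5-point stencil.

    (f[i+1,j] + f[i-1,j] + f[i,j+1] + f[i,j-1] - 4 f[i,j]) / h^2,
    i.e. 4/h^2 times (average over the four neighbours - centre value).
    """
    return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1]) / h**2

h = 0.01
x = np.arange(-1, 1 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

harmonic = X**2 - Y**2        # Delta f = 0
paraboloid = X**2 + Y**2      # Delta f = 4

print(np.max(np.abs(laplacian_5pt(harmonic, h))))        # ~0 (round-off only)
print(np.max(np.abs(laplacian_5pt(paraboloid, h) - 4)))  # ~0
```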
Spectral theory

The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction f with:
$$-\Delta f = \lambda f.$$
This is known as the Helmholtz equation. If Ω is a bounded domain in R^n, then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L2(Ω). This result essentially follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem).[4] It can also be shown that the eigenfunctions are infinitely differentiable functions.[5] More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When Ω is the n-sphere, the eigenfunctions of the Laplacian are the spherical harmonics.

Vector Laplacian

The vector Laplace operator, also denoted by ∇², is a differential operator defined over a vector field.[6] The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field, returning a vector quantity. When computed in orthonormal Cartesian coordinates, the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component.

The vector Laplacian of a vector field A is defined as
$$\nabla^2 \mathbf{A} = \nabla(\nabla \cdot \mathbf{A}) - \nabla \times (\nabla \times \mathbf{A}).$$
In Cartesian coordinates, this reduces to the much simpler form
$$\nabla^2 \mathbf{A} = \left( \nabla^2 A_x,\; \nabla^2 A_y,\; \nabla^2 A_z \right),$$
where A_x, A_y, and A_z are the components of the vector field A, and ∇² just on the left of each vector field component is the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product. For expressions of the vector Laplacian in other coordinate systems see Del in cylindrical and spherical coordinates.

The Laplacian of any tensor field T ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor:
$$\nabla^2 T = \nabla \cdot (\nabla T).$$
For the special case where T is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form. If T is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector. The formula for the vector Laplacian above may be used to avoid tensor math and may be shown to be equivalent to the divergence of the Jacobian matrix of the gradient of the vector, with entries $(\nabla \mathbf{A})_{ij} = \partial A_i/\partial x_j$. And, in the same manner, a dot product of a vector with the gradient of another vector (a tensor of second degree), which evaluates to a vector, can be seen as a product of matrices. This identity is a coordinate dependent result, and is not general.

Use in physics

An example of the usage of the vector Laplacian is the Navier-Stokes equations for a Newtonian incompressible flow:
$$\rho\left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} \right) = \rho\, \mathbf{f} - \nabla p + \mu\, \nabla^2 \mathbf{v},$$
where the term with the vector Laplacian of the velocity field represents the viscous stresses in the fluid.
Another example is the wave equation for the electric field that can be derived from Maxwell's equations in the absence of charges and currents:
$$\nabla^2 \mathbf{E} - \mu_0 \varepsilon_0\, \frac{\partial^2 \mathbf{E}}{\partial t^2} = 0.$$
This equation can also be written as:
$$\Box\, \mathbf{E} = 0,$$
where
$$\Box \equiv \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2$$
is the D'Alembertian, used in the Klein–Gordon equation.
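The defining identity of the vector Laplacian quoted above, ∇²A = ∇(∇·A) − ∇×(∇×A), is easy to verify symbolically. The sketch below is an illustrative check added here (it assumes a reasonably recent SymPy with the sympy.vector module); the particular vector field is arbitrary.

```python
from sympy import sin, cos, diff, simplify
from sympy.vector import CoordSys3D, gradient, divergence, curl

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# An arbitrary smooth vector field A = (x^2 y, z sin y, x cos z).
components = (x**2 * y, z * sin(y), x * cos(z))
A = components[0] * R.i + components[1] * R.j + components[2] * R.k

# Left-hand side of the identity: grad(div A) - curl(curl A).
lhs = gradient(divergence(A)) - curl(curl(A))

# Right-hand side: the scalar Laplacian applied to each Cartesian component.
def scalar_laplacian(f):
    return diff(f, x, 2) + diff(f, y, 2) + diff(f, z, 2)

rhs = (scalar_laplacian(components[0]) * R.i
       + scalar_laplacian(components[1]) * R.j
       + scalar_laplacian(components[2]) * R.k)

# The difference should simplify to the zero vector (printed as a zero column).
print(simplify((lhs - rhs).to_matrix(R)))
```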
A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms. For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows.

Laplace–Beltrami operator

The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The Laplace–Beltrami operator, when applied to a function, is the trace (tr) of the function's Hessian,
$$\Delta f = \operatorname{tr}\big( H(f) \big),$$
where the Hessian is taken with respect to the inverse metric. Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as
$$\Delta f = \delta\, d f.$$
Here δ is the codifferential, which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms α by
$$\Delta \alpha = \delta\, d\alpha + d\, \delta\alpha.$$
In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator or D'Alembertian:
$$\Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2}.$$
It is the generalisation of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics. The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations, and it is also part of the Klein–Gordon equation, which reduces to the wave equation in the massless case.

References

1. Evans 1998, §2.2.
2. Ovall, Jeffrey S. (2016). "The Laplacian and Mean and Extreme Values". The American Mathematical Monthly 123 (3): 287–291.
3. Grinfeld, Pavel. "The Voss-Weyl Formula". Retrieved 9 January 2018.
4. Gilbarg & Trudinger 2001, Theorem 8.6.
5. Gilbarg & Trudinger 2001, Corollary 8.11.
6. MathWorld. "Vector Laplacian".
Super-radiance reveals infinite-range dipole interactions through a nanofiber

Atoms interact with each other through the electromagnetic field, creating collective states that can radiate faster or slower than a single atom, i.e., super- and sub-radiance. When the field is confined to one dimension it enables infinite-range atom–atom interactions. Here we present the first report of infinite-range interactions between macroscopically separated atomic dipoles mediated by an optical waveguide. We use cold 87Rb atoms in the vicinity of a single-mode optical nanofiber (ONF) that coherently exchange evanescently coupled photons through the ONF mode. In particular, we observe super-radiance of a few atoms separated by hundreds of resonant wavelengths. The same platform allows us to measure sub-radiance, a rarely observed effect, presenting a unique tool for quantum optics. This result constitutes a proof of principle for collective behavior of macroscopically delocalized atomic states, a crucial element for new proposals in quantum information and many-body physics.

A new class of quantum technologies exploits the interfaces between propagating photons and cold atoms1,2,3,4,5,6,7,8,9,10. Recent realizations using optical nanofiber (ONF) platforms include optical isolators, switches, memories, and reflectors11. These devices guide the electromagnetic field, a feature that could allow engineering and controlling the collective time evolution of macroscopically separated subsystems. States that evolve as a whole, with dynamics different from that of the independent subsystems, are called collective states. These states emerge from atoms interacting via a common mode of the electromagnetic field, and their generation and control can enable additional tools for atomic-based technologies12,13,14,15,16,17,18 and the study of many-body physics19, 20. For an ensemble of N two-level atoms, in the single excitation limit, $$\left| {{\mathrm{\Psi }}_\alpha (t)} \right\rangle \propto e^{ - \frac{1}{2}\left( {\gamma _\alpha + i{\mathrm{\Omega }}_\alpha } \right)t}\mathop {\sum}\limits_{j = 1}^N c_{\alpha j}\left| {g_1g_2 \cdot \cdot \cdot e_j \cdot \cdot \cdot g_N} \right\rangle$$ represents the α-th collective state of the system, where γ_α and Ω_α are its collective decay rate and frequency shift, respectively, and \(\mathop {\sum}\nolimits_{j = 1}^N \left| {c_{\alpha j}} \right|^2e^{ - \gamma _\alpha t}\) is the probability of having an excitation in the atoms. When γ_α is larger (smaller) than the natural radiative decay rate γ_0, the system is super- (sub-)radiant21, 22. For free space coupling, collective states emerge for atom–atom separations smaller than a few wavelengths23. By externally exciting the atoms, super-radiant states are readily observed, but because sub-radiant states are decoupled from the electromagnetic vacuum field, they are challenging to produce24. The master equation that describes the dynamics of an ensemble of atomic dipoles, of density matrix ρ, coupled through the electromagnetic field is given by ref. 25 $$\dot \rho = - i\left[ {H_{{\mathrm{eff}}},\rho } \right] + {\cal L}[\rho ].$$ The effective Hamiltonian H_eff of the dipolar interaction between atoms and the Lindblad super operator \({\cal L}\) in Eq. (2) modify two atomic properties: the resonance frequency and the spontaneous decay rate, respectively.
They are given by $$H_{{\mathrm{eff}}} = \frac{1}{2}\mathop {\sum}\limits_{i,j} \hbar {\mathrm{\Omega }}_{ij}\sigma _i^\dagger \sigma _j,$$ $${\cal L}[\rho ] = \frac{1}{2}\mathop {\sum}\limits_{i,j} \hbar \gamma _{ij}\left( {2\sigma _j\rho \sigma _i^\dagger - \sigma _i^\dagger \sigma _j\rho - \rho \sigma _i^\dagger \sigma _j} \right),$$ with σ i \(\left( {\sigma _i^\dagger } \right)\) being the atomic lowering (raising) operator for an excitation of the i-th atom. Ω ij is the rate of photons exchanged between atoms and γ ij is the term responsible for collective radiative decays, where γ ii is the single atom decay rate. The decay of an excitation in such a system, that leads to a collective state as in Eq. (1), depends on the coupling amplitudes and relative phase between the atoms given by γ ij . When atoms are far apart in free space, their interaction is mediated by a propagating field with an expanding wavefront, and a separation of few wavelengths is enough to make the interaction negligible. As atoms get closer together, Ω ij in Eq. (3) diverges, reducing the coherence of a system with more than two atoms. These constraints can be circumvented by using longer wavelengths with larger atomic dipole moments, such as Rydberg atoms26, or long-range phonon modes, implemented with trapped ions27, 28. However, these techniques are limited to subwavelength distances. When the field is confined to one dimension, it enables infinite-range interactions. This has been observed for atoms in an optical cavity29, 30. Waveguides offer an alternative by confining the mediating field, where the extent of the interactions is not limited by the cavity size and the field can propagate unaltered for a broad range of frequencies31, 32, facilitating the coupling of atoms separated by many wavelengths (see Fig. 1). Dipole–dipole interactions, given by Ω ij , are finite for atoms along the waveguide, removing a practical limit for creating super-radiant states of a large number of atoms. Super-radiance of atoms around a waveguide has been observed7, but its long-range interaction feature has not been proven or explored. Such effect has been implemented with superconducting waveguides and two artificial atoms one wavelength apart33, but has not been realized for many atoms at multi-wavelength distances in the optical regime. Fig. 1 Position-dependent atom–atom coupling along the optical nanofiber. a Schematic of an ONF as a platform for generating single photon collective atomic states, excited from the side by a weak probe of polarization V or H. When two atoms are close together, the dipolar interaction is mostly mediated by the modes of the electromagnetic field radiating outside the nanofiber. This is a limited-range interaction that decays inversely with distance. When the atoms are widely separated, the guided mode of an ideal ONF mediates the interaction for arbitrary distances. b, c Show the atom–atom interaction rate γ 12 (see Eq. (4)) experienced by an atom around the fiber given another atom at the position denoted by the white cross (see “Methods” section for the details of the calculation). Its amplitude is shown for a longitudinal and a transversal cut (specified by dashed black lines). Both plots share the color scale, but in b the interaction rate is normalized by the single atom total decay rate γ 0 and in c by the decay rate into the guided mode γ 1D. 
Along the z-axis, the interaction among atoms through free space radiation modes decreases as \(\gamma _{{\mathrm{12}}}^{{\mathrm{(rad)}}} \propto {\mathrm{sin}}\left( {k\left| {{\mathrm{\Delta }} z} \right|} \right){\mathrm{/}}k{\mathrm{\Delta }} z\) (with k being the wavenumber and Δz the separation between two atoms). The infinite interaction through the ONF-guided mode changes as \(\gamma _{{\mathrm{12}}}^{{\mathrm{(1D)}}} \propto \rm{cos}\left( {\beta _0{\mathrm{\Delta }}\it z} \right)\rm{cos}({\mathrm{\Delta }}\phi )\) (with β 0 being the propagation constant of the resonant-guided mode and Δϕ the angle difference in cylindrical coordinates). The wavelength λ sets the scale in b, c We present the implementation of collective atomic states through infinite-range interactions via a one-dimensional nanophotonic waveguide. We use a few atoms evanescently coupled to a single-mode ONF, observing super- and sub-radiant radiative decays of a single excitation in the system, evidence of collective behavior. Atoms around the ONF interact at short and long distances (see Fig. 1a), the latter mediated by the ONF-guided mode. The dipolar interaction that leads to a collective decay is separated into two contributions of the electromagnetic field: from modes radiating outside the ONF, \(\gamma _{{\mathrm{12}}}^{{\mathrm{(rad)}}}\), and from the guided mode, \(\gamma _{{\mathrm{12}}}^{{\mathrm{(1D)}}}\) 25 (see Fig. 1b, c). In particular, we observe sub-radiant decay rates of proximal atoms interacting through the radiated modes and super-radiant decay rates of atoms interacting through the guided mode over distances of hundreds of resonant wavelength. Experimental setup We overlap a cold atomic cloud of 87Rb atoms from a magneto-optical trap (MOT) with a 240 nm radius ONF. This ONF is single mode at the D2 resonant wavelength of 780 nm. After the MOT is turned off, the atoms form a cold thermal gas around the ONF. They are prepared in the F = 1 ground level by an external free propagating beam. A repumper beam driving the F = 1 → F = 2 transition propagates through the nanofiber, leaving in the F = 2 ground state-only atoms that interact with the ONF-guided mode. By detuning the repumper below resonance, we address atoms near the nanofiber (whose levels have been shifted by van der Waals interactions) such that the atomic density distribution peaks at ~30 nm away from the surface. A weak free space probe pulse, propagating perpendicular to the fiber, excites atoms for 50 ns using the F = 2 → F′ = 3 transition. After the probe turns off (extinction ratio better than 1:2 × 103 in one atomic natural lifetime), we collect photons spontaneously emitted into the ONF mode to measure the decay time using time-correlated single photon counting. Collective states can be tailored by positioning the atoms in a particular arrangement. This kind of control has been challenging to implement for atoms trapped close enough to the ONF (tens of nanometers) to ensure significant mode coupling. However, collective states are still observed when atoms from a MOT are free to go near the ONF. Their random positioning leads to probabilistic super- or sub-radiant states on each experimental realization. Sub-radiant states have lifetimes much longer than most other processes, favoring their observation. Super-radiance can be measured as an enhanced decay rate at short times. Both effects can provide quantitative experimental evidence of collective states. 
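To see what "infinite range" means quantitatively, the two coupling channels quoted in the Fig. 1 caption can be compared directly. The sketch below is an illustration added here, not the authors' code: the prefactors are set to one, only the axial dependence is kept, and the numbers (780 nm resonance, n_eff ≈ 1.15) are the ones quoted in the text.

```python
import numpy as np

wavelength = 780e-9               # D2 line of 87Rb
k = 2 * np.pi / wavelength        # free-space wavenumber
n_eff = 1.15                      # effective index quoted in the paper
beta0 = n_eff * k                 # propagation constant of the resonant guided mode

dz = np.linspace(1e-9, 400 * wavelength, 4000)   # atom-atom separation along z

# Normalized coupling rates (overall prefactors set to 1 for illustration).
gamma12_rad = np.sin(k * dz) / (k * dz)   # free space: falls off as 1/(k dz)
gamma12_1d = np.cos(beta0 * dz)           # guided mode: oscillates but never decays

print("free-space coupling at 400 lambda:", abs(gamma12_rad[-1]))            # ~4e-4
print("guided-mode envelope at 400 lambda:", np.max(np.abs(gamma12_1d[-100:])))  # ~1
```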
Observation of super- and sub-radiance Figure 2 shows a typical signal of the atomic decay as measured through the ONF. Its time dependence can be described by two distinct exponential decays. The slow decay (green dashed line in Fig. 2a) corresponds to an average of sub-radiant decays due to pairs of atoms located within a wavelength, i.e., free space interaction (Fig. 1b). Infinite-range interactions also produce sub-radiant decay rates. However, these events are obscured by the dominant signal of slower decays produced from free space interactions. In our case γ 1D ≈ 0.13γ 0, so sub-radiance from infinite-range interactions is limited to γ 0 − γ 1D ≈ 0.87γ 0. This is a factor of six faster than the observed sub-radiant rates (green dashed line in Fig. 2a). Sub-radiance of atoms interacting in free space has been observed in a very optically dense cloud of atoms24, but we can observe it even for optical densities (OD) as small as 0.3. The fast decay rate (red dashed line in Fig. 2a) is larger than the natural decay rate, showing the presence of super-radiant initial states. Fig. 2 Measured super- and sub-radiant decay of excited atoms near the optical nanofiber. a Normalized rate of photons detected through the ONF mode (blue circles in a logarithmic scale) as a function of time in units of natural lifetime (τ 0 = 1/γ 0 = 26.24 ns) with 5 ns bins. The signal is taken after a probe beam polarized along the nanofiber turns off. In this realization OD = 0.66 ± 0.05. The individual statistical error bars are not plotted but they are taken into account for the normalized residuals in b. The number of counts at t = 0 exceeds 106. We see two distinct slopes (red and green), at short and long times. The initial slope (red) deviates toward decay rates faster than γ 0, a signature of super-radiance. The second slope (green) comes from the natural post-selection of purely sub-radiant states. The red dashed (green dashed) line is the best fit to a pure exponential decay of the initial (final) decay. The decay rate of the fit at short times is 1.10 ± 0.02 γ 0, and 0.13 ± 0.01 γ 0 for the fit at longer times, with one-sigma error. The one-sigma fractional systematic errors are ±0.01. The full description of the measured temporal evolution of the system involves averaging over many different decay rates through Monte Carlo methods (explained in “Methods” section). The solid black line is a simulation of 7 atoms along the ONF, with reduced χ 2 of 1.60. b The red circles, green circles, and black diamonds are the normalized residuals of the exponential fits to the initial decay, final decay, and the theoretical model. c Shows two different decay signals from an excitation driving the atoms with light polarized along (cyan rectangles) and perpendicular (blue triangles) to the ONF for 25 ns bins. When the driving field is polarized along the ONF, we observe super- and sub-radiance, and when it is polarized perpendicular to the ONF the super-radiance increases and the sub-radiance decreases. This feature is qualitatively captured by the theoretical model A full description of the temporal evolution of the entire data sample requires numerical (Monte Carlo) methods, as the solid black line in Fig. 2 shows. We use the average number of atoms (N) as the only free parameter for this simulation, allowing for variations of the background up to one sigma. The two-sigma deviation between simulation and data (see Fig. 
2b from 7 to 15 τ 0) could come from otherwise a longer living sub-radiant state that gets prematurely destroyed because atoms fall onto the ONF, emitting the excitation into the guided mode. The initial state preparation—the polarization of the incoming pulse that produces the collective one-photon state—can favor super- or sub-radiant states, as Fig. 2c shows. In general, the free space atom–atom coupling is larger for dipoles driven along the ONF (z in the direction set in Fig. 1b), favoring sub-radiance, and the ONF-mediated coupling is larger for dipoles driven perpendicular to the ONF, favoring super-radiance. An important difference between sub- and super-radiant decay rates in ONF is that the latter increases as a function of N. We can vary N from one to six by changing the MOT density, and quantify it through the OD of the ONF mode. n effOD =  1D/γ 0, where n eff is the mode effective refractive index, and in our case n eff ≈ 1.15. We measure the transmission spectrum through the ONF to extract the OD. The decay rate increases with N, as shown by the blue circles in Fig. 3, indicating super-radiance. The gray region represents the one-sigma confidence bands of a linear fit to the data showing a linear dependence of the super-radiant decay rate for increasing N. The theoretical model implemented for the fit shown in Fig. 2 (solid black line) also predicts a linear dependence on N of the decay rate γ at short times. The red dashed line in Fig. 3a shows this prediction, corroborating the theory with the experiment. Fig. 3 Super-radiant decay as a function of atom number including separated clouds. a Relationship of the decays as a function of average number of atoms (OD) along the optical nanofiber. The normalized fast decay rates are plotted as a function of the OD (lower abscissa) and N (upper abscissa) measured through the ONF-guided mode. The blue circles correspond to the signals from a single cloud of atoms. We split the atomic cloud in two (as shown in b). The dashed light and dotted dark green diamonds, and the solid red square correspond to the right, left, and the combination of both atomic clouds, respectively. The systematic errors (not shown) are estimated to be 1% for the decay rates and smaller than 20% for the atom number. The plotted error bars represent the statistical uncertainty of the fitting to an exponential decay. The gray region represents the one-sigma confidence band of a linear fit to the data. The red dashed line is the theoretical prediction, and the red shaded region represents a confidence interval set by a fractional error of 1%. The curve goes below γ/γ 0 = 1 because the natural decay rate is modified given the geometry of the ONF and the alignment of the atomic dipoles (Purcell effect)36. b Separated atom clouds show long-range interactions. The top of the figure shows in black and white a fluorescence image of a split MOT. The white dotted line represents the ONF location. The fluorescence signal of the split MOT along the nanofiber is plotted as a function of position. The dashed light (dotted dark) green dashed lines is the intensity distribution of the right (left) atomic cloud when the other one is blocked. The solid red line is the intensity distribution when both clouds are present. The separation between the center of both clouds is 318 ± 1 μm, given by standard error of the mean of a Gaussian fit. 
This distance is equivalent to 408 wavelengths Evidence of infinite-range interactions The average spacing between atoms is larger than a wavelength for most of the realizations, meaning that infinite-range interactions are always present. However, to provide an unambiguous proof of infinite-range interactions, we split the atomic cloud in two (see Fig. 3b). We see that two atomic clouds separated by more than 400 wavelengths present the same super-radiant collective behavior as a function of the OD as a single atomic cloud. This shows that the relevant parameter is the total OD (or N) along the ONF mode, regardless the separation between atoms. Optically guided modes can be used to mediate atom–atom interactions, creating macroscopically delocalized collective atomic states. We use the super-radiant behavior of distant atoms as evidence of infinite-range interaction, but other interesting collective quantum properties remain to be tested. The practical limits of infinite-range interactions are an open question, since in principle optical fibers can be easily connected and rerouted along several meters. An intriguing next step is the study of quantum systems beyond the Markov approximation, coupling atoms at distance greater than what light travels in an atomic lifetime. Moreover, by achieving fine control on the positioning of the interacting particles, and/or using the directional coupling produced by chiral atom–light interaction10, one can engineer desired states tailored to address specific applications. The implementation of infinite-range interactions opens new possibilities for quantum technologies and many-body physics. Given the application of one-dimensional waveguides in photonic-based quantum technologies, we envision infinite-range interactions as the natural next step toward interconnecting quantum systems on scales suitable for practical applications. Experimental methods A tapered single mode ONF, with waist of 240 ± 20 nm radius and 7 mm length, is inside an ultrahigh vacuum (UHV) chamber, where it overlaps with a cloud of cold 87Rb atoms (less than half a millimeter width) created from a MOT. The MOT is loaded from a background gas produced by a 87Rb dispenser. Acousto optic modulators (AOMs) control the amplitude and frequencies of the MOT beams. After the atomic cloud loading reaches steady state, the MOT beams are extinguished. A free space propagating depump beam, resonant with the F = 2 → F′ = 2 transition (150 μs duration) prepares all atoms in the cloud in the F = 1 ground state. A 0.4 nW fiber-repump beam, detuned below resonance by 15 MHz to the F = 1 → F′ = 2 transition, propagates through the ONF during the entire cycle. It pumps back to the F = 2 ground state only those atoms close enough to the ONF to interact with the guided mode. This detuning repumps only those atoms close enough to the ONF surface to experience an energy shift due to the van der Waals interaction with the dielectric body. This produces a narrow density distribution of atoms of 5 nm width centered around 30 nm away from the surface. We wait 300 μs until the AOMs reach maximum extinction. The atomic cloud free falls and expands around the ONF for 2.5 ms creating a cold thermal gas (~150 μK), where each atom interacts with the nanofiber mode for ~1.5 μs34. The atomic density reduction due to the cloud expansion limits the probing time of the cycle. The atoms are excited by pulses of a weak probe beam incident perpendicularly to the nanofiber (see Fig. 
1a) and linearly polarized along the ONF for the data set shown in Fig. 3. The pulses are resonant with the F = 2 → F′ = 3 transition of the D2 line and created with a double-passed Pockels cell (Conoptics 350–160), with a pulse extinction ratio better than 1:2000 in one atomic natural lifetime that remains at least an order of magnitude below the atomic decay signal for more than 20 lifetimes. The on–off stage of the light pulses is controlled with an electronic pulse generator (Stanford Research Systems DG645). The probe power is kept low, i.e., saturation parameter s < 0.1, to ensure a single photon excitation while staying in the limit of low excitation and avoiding photon pileup effects. Only those atoms that interact with the ONF-guided mode are in the F = 2 ground state and will be excited by the probe beam. During the probing time, we send a train of 50 ns probe pulses every 1 μs. The probe is a 7 mm 1/e 2 diameter collimated beam. After 2 ms of probing (~2000 pulses), the probe beam is turned off and the MOT beams are turned back on. During the probing time, the atomic density remains constant. We wait 20 ms after the MOT reloads and repeat the cycle. The average acquisition time for an experimental realization is around 5 h, giving a total of about 1 × 109 probe pulses. The photons emitted into the nanofiber and those emitted into free space are independently collected with avalanche photodiodes (APDs, laser components COUNT-250C-FC, with less than 250 dark counts per second). The TTL pulses created from photons detected by APD are processed with a PC time-stamp card (Becker and Hickl DPC-230) and time stamped relative to a trigger signal coming from the pulse generator. We use time-correlated single photon counting35 to extract the decay rate of a single excitation in the system, eliminating after-pulsing events from the record. When atoms are around the nanofiber, they tend to adhere due to van der Waals forces. After a few seconds of having the ONF exposed to rubidium atoms it gets coated, suppressing light propagation. To prevent this, we use 500 μW of 750 nm blue-detuned light (Coherent Ti:Sapph 899) during the MOT-on stage to create a repulsive potential that keeps the atoms away from the ONF surface. This is intense enough to heat the ONF and accelerate the atomic desorption from the surface. The blue-detuned beam is turned off at the same time as the MOT beams, so the probed atoms are free to get close to the nanofiber. Photons from the probe beam can be scattered multiple times by the atoms producing a signal that looks like a long decay, an effect known as radiation trapping. This effect can obscure sub-radiant signals. However, the small ODs involved in the experiment allow us to neglect contributions from radiation trapping. We confirm this assumption by observing the same temporal evolution of the signal at constant OD for several detunings of the probe beam in a range of ±3 linewidths24. The atomic lifetime can also be altered by modification of the electromagnetic environment of the atoms in the presence of an ONF, i.e., the Purcell effect. However, this effect is characterized separately36 and well understood. More importantly, it does not depend on the number of atoms, in contrast with the super-radiant behavior. Further evidence of collective states can be found in the resonance spectrum of the system (see Eqs. (2) and (3)). 
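Stepping back to the time-domain analysis for a moment: here is a generic sketch of how decay rates like those in Fig. 2 can be extracted from a photon-arrival histogram. This is an illustration added here with fabricated data, not the authors' analysis code; the fit windows and the rates (1.1 γ_0 and 0.13 γ_0) are placeholders chosen to mimic the numbers quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

gamma0 = 1 / 26.24e-9          # natural decay rate of the 87Rb D2 line, in 1/s

def exp_decay(t, amplitude, gamma):
    return amplitude * np.exp(-gamma * t)

# In the real analysis t and counts come from the time-stamped photon record;
# here we fabricate a histogram with a fast and a slow component.
rng = np.random.default_rng(1)
t = np.arange(0, 20 / gamma0, 5e-9)
model = 1e6 * np.exp(-1.1 * gamma0 * t) + 1e3 * np.exp(-0.13 * gamma0 * t)
counts = rng.poisson(model).astype(float)

early = t < 1.5 / gamma0       # window dominated by the super-radiant decay
late = t > 10 / gamma0         # window dominated by the sub-radiant tail

p_fast, _ = curve_fit(exp_decay, t[early], counts[early], p0=(counts[0], gamma0))
p_slow, _ = curve_fit(exp_decay, t[late], counts[late], p0=(counts[late][0], 0.2 * gamma0))

print("fast rate / gamma0:", p_fast[1] / gamma0)   # expect roughly 1.1
print("slow rate / gamma0:", p_slow[1] / gamma0)   # expect roughly 0.13
```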
The dispersive part of the interaction modifies the resonance frequencies of the system, due to avoiding crossing of otherwise degenerate levels. This effect is in principle visible in the transmission spectrum. In our particular case, the frequency splitting is a small percentage of the linewidth. Broadening mechanisms and other systematic errors prevent us from clearly observing such signal. However, a line-shape dependence on N can be inferred from the statistical analysis of the fit of the spectrum to a Lorentzian. This effect might enable the exploration of features of collective states in the spectral domain. ONFs can provide chiral atom–light coupling10. Even though this is a promising feature of the platform, it requires a particular positioning of the atoms and a preparation of their internal state. This first exploration of infinite-range interactions involves detecting only on one end of the ONF and azimuthally averaging the atomic position, preventing studies of chiral effects that we do not consider crucial to our measurements. Theoretical model We follow the work of Svidzinsky and Chang37 to implement the theoretical simulations of the experiment. Consider the Hamiltonian of N atoms interacting with an electromagnetic field in the rotating-wave approximation $$\hat H_{{\mathrm{int}}} = - \mathop {\sum}\limits_k \mathop {\sum}\limits_{j = 1}^N \hbar G_{kj}\left[ {\hat \sigma _j\hat a_{\mathbf{k}}^\dagger e^{{\mathbf{i}}\left( {\omega - \omega _0} \right)t} + h.c.} \right]$$ where \(\hat \sigma _j\) is the lowering operator for atom j; \(\hat a_{\mathbf{k}}^\dagger\) is the photon creation operator in the mode k-th; ω 0 and ω are the frequencies of atomic resonance and k-th mode of the field, respectively. This is a general expression for the Hamiltonian, which leads to the master equation in Eq. (2) after some approximations. The sum on j is done over the atoms and the sum on k goes over the electromagnetic field modes, guided into the nanofiber and radiated outside. These modes can be found in the work of Le Kien et al.25. The sum over the guided modes is \(\mathop {\sum}\nolimits_\mu = \mathop {\sum}\nolimits_{f,p} {\int}_0^\infty \rm{d}\omega\), where f and p are the propagation direction and polarization in the circular basis (plus or minus) of the guided mode, respectively, and μ stands for modes with different parameters (ω, f, p). The sum over the radiated modes is \(\mathop {\sum}\nolimits_\nu = \mathop {\sum}\nolimits_{m,p} {\int}_0^\infty \rm{d}\omega {\int}_{ - k}^k \rm{d}\beta\); where m is the mode order, k is the wavenumber, β is the projection of the wave vector along the fiber or propagation constant, and ν stands for modes with different parameters (ω, β, m, p). Then the total sum is \(\mathop {\sum}\nolimits_k = \mathop {\sum}\nolimits_\mu + \mathop {\sum}\nolimits_\nu\). The electromagnetic field modes and their relative coupling strength have been previously studied25. 
The coupling frequencies G kj for the guided and radiated modes can be written as: $$G_{\mu j} = \sqrt {\frac{{\omega \beta {\prime}}}{{4\pi \epsilon _0\hbar }}} \left[ {{\bf{d}}_j \cdot {\bf{e}}^{(\mu )}\left( {r_j,\phi _j} \right)} \right]e^{{\mathbf{i}}\left( {f\beta z_j + p\phi _j} \right)}$$ $$G_{\nu j} = \sqrt {\frac{\omega }{{4\pi \epsilon _0\hbar }}} \left[ {{\bf{d}}_j \cdot {\bf{e}}^{(\nu )}\left( {r_j,\phi _j} \right)} \right]e^{{\mathbf{i}}\left( {\beta z_j + m\phi _i} \right)}$$ where β′ = dβ/dω, d j is the dipole moment of the j-th atom, and e (μ,ν) are the electric field profile function (or spatial dependence of the amplitude) of the guided and radiated modes (μ and ν). Atoms interact with each other mediated by the electromagnetic field. The interaction between the atomic dipoles is proportional to the product of the atom–light coupling frequencies of the form G ki G kj , where k labels the mediating field mode (the repetition of the letter implies summation if there is more than one mode) and i and j label the i-th and j-th atom. It is possible to identify two contributions from the coupling of atoms to the dynamics of the system, a dispersive and a dissipative one, as shown in Eq. (2). The dispersive part contributes to the unitary evolution of the system (see Eq. (3)), and it can be decomposed as \({\mathrm{\Omega }}_{ij} = {\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})} + {\mathrm{\Omega }}_{ij}^{({\mathrm{1D}})}\), where \({\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})}\) and \({\mathrm{\Omega }}_{ij}^{({\mathrm{1D}})}\) come from the interaction of the i-th and j-th atoms mediated by the radiated and guided modes, respectively. Ω ij is usually called the dipole–dipole coupling frequency. The dissipative part contributes to the decay of the system (see Eq. (4)), and it can be decomposed as \(\gamma _{ij} = \gamma _{ij}^{({\mathrm{rad}})} + \gamma _{ij}^{({\mathrm{1D}})}\), where \(\gamma _{ij}^{({\mathrm{rad}})}\) and \(\gamma _{ij}^{({\mathrm{1D}})}\) come from the interaction of the i-th and j-th atoms mediated by the radiated and guided modes, respectively. For simplicity, here we focus only on the case where atoms are regarded as two-level systems prepared in an initial state with induced atomic dipoles aligned along the ONF (z-axis). This is a reasonable approximation for atoms weakly driven by an external probe polarized along z. In a realistic scenario, the light scattered by the fiber and by the multi-level internal structure of the atoms can mix the light polarization. The computation of such a system becomes cumbersome and only contributes to correction to the dominant effect. A description given by two-level atoms aligned along the z-axis allows us to quantitatively capture the physical phenomena while keeping the mathematical description simple. 
For atoms placed in the position r i  = (r i , ϕ i , z i ) with reduced dipole moment d i , we obtain $$\gamma _{ij}^{{\mathrm{(1D)}}} = \frac{{2\omega _0\beta _0^\prime }}{{\epsilon _0\hbar }}d_id_je_z^{\left( {\mu _0} \right)}\left( {r_i} \right)e_z^{*\left( {\mu _0} \right)}\left( {r_j} \right){\mathrm{cos}}\left( {\phi _i - \phi _j} \right){\mathrm{cos}}\,\beta _0\left( {z_i - z_j} \right)$$ $$\gamma _{ij}^{{\mathrm{(rad)}}} = \frac{{2\omega _0}}{{\epsilon _0\hbar }}d_id_j\mathop {\sum}\limits_m {\int}_0^{k_0} \mathrm{d}\beta e_z^{(\nu )}\left( {r_i} \right)e_z^{*(\nu )}\left( {r_j} \right) \times {\mathrm{cos}}\,m\left( {\phi _i - \phi _j} \right){\mathrm{cos}}\,\beta \left( {z_i - z_j} \right)$$ $${\mathrm{\Omega }}_{ij}^{{\mathrm{(1D)}}} \approx \frac{{\omega _0\beta _0^\prime }}{{\epsilon _0\hbar }}d_id_je_z^{\left( {\mu _0} \right)}\left( {r_i} \right)e_z^{*\left( {\mu _0} \right)}\left( {r_j} \right){\mathrm{cos}}\left( {\phi _i - \phi _j} \right){\mathrm{sin}}\,\beta _0\left( {z_i - z_j} \right)$$ where μ 0 parametrizes the guided modes on resonance. The dispersive component of the interaction given by the radiated modes as \({\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})}\) is a complicated expression and hard to solve even numerically. We follow the work of Le Kien et al.38 and use the free space value of \({\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})}\) throughout the calculation as a reasonable approximation. γ ii  = γ 0 with γ 0 the single atom natural decay rate. \(\gamma _{12}^{({\mathrm{rad}})}\) and \(\gamma _{12}^{({\mathrm{1D}})}\) are plotted in Fig. 1b, c, respectively, for an atom fixed at r 1 = (240 + 30) nm, 0, 0) (240 nm being the ONF radius and 30 nm the distance of the atom to the surface). When atoms are too close to each other, the radiated terms \({\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})}\) and \(\gamma _{ij}^{({\mathrm{rad}})}\) dominate over the guided ones (\({\mathrm{\Omega }}_{ij}^{({\mathrm{1D}})}\) and \(\gamma _{ij}^{({\mathrm{1D}})}\)), with \({\mathrm{\Omega }}_{ij}^{({\mathrm{rad}})}\) diverging and \(\gamma _{ij}^{({\mathrm{rad}})}\) approaching the total decay rate. With a low number of atoms randomly distributed along the ONF, the effects of short-range interaction are small but still observable. For simplicity, we are interested in the decay of only one excitation in a system of two-level atoms, however, generalizations to multi-level atoms can be found in the literature39. Such system is represented by the state $$\left| {\mathrm{\Psi }} \right\rangle = \mathop {\sum}\limits_{{\mathbf{k}}_\mu ,{\mathbf{k}}_\nu } b_{\mathbf{k}}^{(g)}(t)\left| {g_1g_2 \cdot \cdot \cdot g_N} \right\rangle \left| 1_{\mathbf{k}} \right\rangle + \mathop {\sum}\limits_{j = 1}^N b_j^{(e)}(t)\left| {g_1g_2 \cdot \cdot \cdot e_j \cdot \cdot \cdot g_N} \right\rangle \left| 0 \right\rangle$$ where k μ ( ν ) is the sum over the guided (radiated) modes, \(b_{\mathbf{k}}^{(g)}\) is the probability amplitude of all the atoms being in the ground state and one excitation in the k-th mode of the field, and \(b_j^{(e)}\) is the probability amplitude of having zero excitation in the field and an excitation in the i-th atom. Assuming that we start the cycle with the excitation in the atoms, i.e., \(b_{\mathbf{k}}^{(g)}(0) = 0\), we can write the Schrödinger equation in the Markov approximation for the coefficients \(b_i^{(e)}(t)\) in a matrix form as ref. 
37 $${\dot{\bf B}}(t) = - {\mathrm{\Gamma }}{\bf{B}}(t)$$ where B(t) is a vector with entries given by the \(b_i^{(e)}(t)\), and Γ is a non-hermitian symmetric matrix with entries 2Γ ij  = γ ij  + iΩ ij , representing the couplings between the i-th and j-th atoms calculated from the optical nanofiber modes, radiated and guided. The eigenvalues η α of Eq. (12) give the possible decay rates of the system. These are the collective sates mentioned in Eq. (1). The eigenvectors form a basis \(\left\{ {\left| {B_\alpha } \right\rangle } \right\}\) that allows us to write the state of the system as $$\left| {\mathrm{\Psi }} \right\rangle = \mathop {\sum}\limits_{{\mathbf{k}}_\mu ,{\mathbf{k}}_\nu } b_{\mathbf{k}}^{(g)}(t)\left| {g_1g_2 \cdot \cdot \cdot g_N} \right\rangle \left| {1_{\mathbf{k}}} \right\rangle + \mathop {\sum}\limits_{\alpha = 1}^N c_\alpha e^{ - \eta _\alpha t}\left| {B_\alpha } \right\rangle \left| 0 \right\rangle$$ where the coefficients c α are given by the initial state. In contrast with Eq. (1), here we have also included the states with one excitation in the field. Following this approach, the many-body problem, of calculating the decay of one excitation distributed among N interacting atoms, becomes an eigenvalue problem in a Hilbert space of dimension N 2 instead of 22N. This speeds the calculations, allowing us to compute the decay rate of the system with Monte Carlo simulations for a large N in random positions. The electromagnetic field operator for the guided modes is ref. 25 $${\hat{\mathbf E}}_{{\mathrm{guided}}}^{( + )} ={ i\mathop {\sum}\limits_{fp} {\int}_0^\infty \rm{d}\omega \sqrt {\frac{{\hbar \omega \beta {\prime}}}{{4\pi \varepsilon _0}}}} \,\, \hat a_\mu {\mathbf{e}}^{(\mu )}e^{ - i(\omega t - f\beta z - p\phi )}{\kern 1pt} .$$ The formal solution of the Heisenberg equation for \(\hat a_\mu (t)\) in the Markov and rotating-wave approximation is $$\hat a_\mu (t) = \hat a_\mu \left( {t_0} \right) + 2\pi \mathop {\sum}\limits_j G_{\mu j}^*\delta \left( {\omega - \omega _0} \right)\hat \sigma _j(t),$$ The substitution of this expression into Eq. (14) gives the guided field operator as a function of the dipole operators. Assuming that the guided modes are initially empty and that all the dipoles are oriented in the z direction and at the same distance from the ONF, the intensity of the guided field as a function of the atomic dipole operators is $$\left\langle {{\hat{\mathbf{E}}}_{{\mathrm{guided}}}^{( - )}{\hat{\mathbf{E}}}_{{\mathrm{guided}}}^{( + )}} \right\rangle = \left| {{\cal E}(r)} \right|^2\left| {{\rm{d}}(t)} \right|^2,$$ $${\rm{d}}(t) = \mathop {\sum}\limits_j e^{i\left( {\beta z_j + \phi _j} \right)}b_j^{(e)},$$ $$\left| {{\cal E}(r)} \right|^2 = \frac{{2\hbar \omega _0}}{{n_{{\mathrm{eff}}}c\varepsilon _0}}\frac{{\gamma _{{\mathrm{1D}}}(r)}}{{A_{{\mathrm{eff}}}(r)}},$$ considering \(\gamma _{{\mathrm{1D}}}(r) = \gamma _{ii}^{{\mathrm{(1D)}}}(r)\) from Eq. (8) and \(A_{{\mathrm{eff}}(z)}(r) = \left| {n_{{\mathrm{eff}}}e_z^{\left( {\mu _0} \right)}(r)} \right|^{ - 2}\) to be the effective mode area of the z component of the electric field25. Equation (18) relates the total radiated power into the waveguide with the energy radiated per unit time, i.e., I(r)A eff(z)(r) = ħω 0 γ 1D(r), where I(r) is the intensity of the radiated field. 
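Equations (8)–(12) and (16)–(17) are straightforward to put on a computer, since the problem reduces to an N × N eigenvalue problem. The sketch below is an illustration added here, not the authors' code: it uses a simplified scalar model (unit dipole prefactors, all atoms at the same radial distance and azimuth, the free-space dispersive shift Ω_ij^(rad) dropped, and |z_i − z_j| assumed in the guided dispersive term), builds the matrix Γ of Eq. (12) for atoms at random axial positions, and reads off the collective decay rates from its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

wavelength = 780e-9
k0 = 2 * np.pi / wavelength
beta0 = 1.15 * k0              # guided-mode propagation constant (n_eff ~ 1.15)
gamma0 = 1.0                   # work in units of the natural decay rate
gamma_1d = 0.13 * gamma0       # decay into the guided mode, as quoted in the paper

N = 7                                                  # atoms along the fiber
z = rng.normal(0.0, 200e-6 / 2.355, size=N)            # ~200 um FWHM cloud

# Pairwise couplings in a scalar toy model (equal radial/azimuthal positions).
dz = z[:, None] - z[None, :]
gamma_rad = (gamma0 - gamma_1d) * np.sinc(k0 * dz / np.pi)   # sin(k dz)/(k dz)
gamma_g = gamma_1d * np.cos(beta0 * dz)                      # guided, infinite range
omega_g = 0.5 * gamma_1d * np.sin(beta0 * np.abs(dz))        # guided dispersive part
gamma = gamma_rad + gamma_g                                  # diagonal equals gamma0

Gamma = 0.5 * (gamma + 1j * omega_g)      # 2 Gamma_ij = gamma_ij + i Omega_ij (Eq. 12)
eta = np.linalg.eigvals(Gamma)
rates = 2 * np.real(eta)                  # collective decay rates of the populations

print("collective rates / gamma0:", np.sort(rates))
# Averaging the guided intensity |d(t)|^2 of Eqs. (16)-(17) over many random
# draws of z reproduces the Monte Carlo procedure described in the Methods.
```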
Equation (16) shows that the measured intensity corresponds to the one produced by N classical dipoles with different phases, different positions, and amplitudes given by the probability of being in the excited state \(b_j^{(e)}\)40.

Theoretical methods

We use Monte Carlo simulations, randomly positioning N atoms around the ONF. The position of each atom is given in cylindrical coordinates by r_i = (r_0, ϕ_i, z_i), where r_0 = (240 + 30) nm, ϕ_i ∈ [0, 2π], and z_i is obtained from a Gaussian distribution with a FWHM of 200 μm, determined by the atomic cloud size. The radial position of the atoms is fixed, determined by the experimental procedure of repumping the atoms close to the nanofiber surface. In our case, all the atoms are at a constant radial position of 30 nm away from the surface of an ONF of 240 nm radius, with γ_1D/γ_0 ≈ 0.13. This is a good approximation given the narrow radial distribution of the atoms (~5 nm), as explained in the experimental methods. The initial state will depend on the amplitude and phase of the excitation beam. We assume that the initial state corresponds to a superposition of all the atoms in the ground state except one with an induced atomic dipole. The initial phase between the atoms depends on their position; assuming an excitation pulse with a wave vector perpendicular to the fiber, each atom's initial phase can be calculated from its coordinates. For each random realization, we solve Eq. (12) and calculate the intensity of the guided field, Eq. (16). We use these results to take the mean of the intensity of the guided field as a function of time. Typically, 100,000 realizations are required to converge to a level of precision higher than what is visible in Figs. 2 and 3. If the mean of the guided field intensity is normalized, there is no dependence on the amplitude of the initial induced dipole in the weak excitation limit. There is a correspondence between super-radiant (sub-radiant) configurations and constructive (destructive) interference of the field emitted by the dipoles into the ONF (see Eq. (17)); meaning that super-radiant configurations contribute more than sub-radiant configurations when taking the mean over all the realizations for an electric field detected through the ONF (Eq. (16)). The theoretical model prediction for different dipole moment orientations relative to the ONF25 qualitatively agrees with the observed experimental behavior: the long-term sub-radiance disappears within our signal-to-background-ratio window when exciting with vertically polarized light (see Fig. 2c). A sensitivity analysis to the ONF radius shows no significant changes in the predictions up to a ±10 nm variation.

Data availability

References

1. Thompson, J. D. et al. Coupling a single trapped atom to a nanoscale optical cavity. Science 340, 1202–1205 (2013).
2. Sayrin, C. et al. Nanophotonic optical isolator controlled by the internal state of cold atoms. Phys. Rev. X 5, 041036 (2015).
3. Tiecke, T. G. et al. Nanophotonic quantum phase switch with a single atom. Nature 508, 241–244 (2014).
4.
5. Volz, J., Scheucher, M., Junge, C. & Rauschenbeutel, A. Nonlinear π phase shift for single fibre-guided photons interacting with a single resonator-enhanced atom. Nat. Photonics 8, 965–970 (2014).
6. Hood, J. D. et al. Atom-atom interactions around the band edge of a photonic crystal waveguide. Proc. Natl Acad. Sci. 113, 10507–10512 (2016).
7. Goban, A. et al. Superradiance for atoms trapped along a photonic crystal waveguide. Phys. Rev. Lett. 115, 063601 (2015).
8. Gouraud, B., Maxein, D., Nicolas, A., Morin, O. & Laurat, J. Demonstration of a memory for tightly guided light in an optical nanofiber. Phys. Rev. Lett. 114, 180503 (2015).
9. Sayrin, C., Clausen, C., Albrecht, B., Schneeweiss, P. & Rauschenbeutel, A. Storage of fiber-guided light in a nanofiber-trapped ensemble of cold atoms. Optica 2, 353–356 (2015).
10. Lodahl, P. et al. Chiral quantum optics. Nature 541, 473–480 (2017).
11. Solano, P. et al. in Advances In Atomic, Molecular, and Optical Physics Vol. 66 (eds Arimondo, E., Lin, C. C. & Yelin, S. F.) 439–505 (Academic Press, New York, 2017).
12.
13. Scully, M. O. Single photon subradiance: quantum control of spontaneous emission and ultrafast readout. Phys. Rev. Lett. 115, 243602 (2015).
14. Asenjo-Garcia, A., Moreno-Cardoner, M., Albrecht, A., Kimble, H. J. & Chang, D. E. Exponential improvement in photon storage fidelities using subradiance and selective radiance in atomic arrays. Phys. Rev. X 7, 031024 (2017).
15. Ruostekoski, J. & Javanainen, J. Emergence of correlated optics in one-dimensional waveguides for classical and quantum atomic gases. Phys. Rev. Lett. 117, 143602 (2016).
16. Ruostekoski, J. & Javanainen, J. Arrays of strongly coupled atoms in a one-dimensional waveguide. Phys. Rev. A 96, 033857 (2017).
17. Bettles, R. J., Gardiner, S. A. & Adams, C. S. Cooperative eigenmodes and scattering in one-dimensional atomic arrays. Phys. Rev. A 94, 043844 (2016).
18. Le Kien, F. & Hakuta, K. Cooperative enhancement of channeling of emission from atoms into a nanofiber. Phys. Rev. A 77, 013801 (2008).
19. Hung, C.-L., González-Tudela, A., Cirac, J. I. & Kimble, H. J. Quantum spin dynamics with pairwise-tunable, long-range interactions. Proc. Natl Acad. Sci. 113, E4946–E4955 (2016).
20. Douglas, J. S. et al. Quantum many-body models with cold atoms coupled to photonic crystals. Nat. Photonics 9, 326–331 (2015).
21. Dicke, R. H. Coherence in spontaneous radiation processes. Phys. Rev. 93, 99 (1954).
22. Scully, M. O. & Svidzinsky, A. A. The super of superradiance. Science 325, 1510–1511 (2009).
23. DeVoe, R. G. & Brewer, R. G. Observation of superradiant and subradiant spontaneous emission of two trapped ions. Phys. Rev. Lett. 76, 2049 (1996).
24. Guerin, W., Araújo, M. O. & Kaiser, R. Subradiance in a large cloud of cold atoms. Phys. Rev. Lett. 116, 083601 (2016).
25. Le Kien, F., Gupta, S. D., Nayak, K. P. & Hakuta, K. Nanofiber-mediated radiative transfer between two distant atoms. Phys. Rev. A 72, 063815 (2005).
26. Bendkowsky, V. et al. Observation of ultralong-range Rydberg molecules. Nature 458, 1005–1008 (2009).
27. Richerme, P. et al. Non-local propagation of correlations in quantum systems with long-range interactions. Nature 511, 198–201 (2014).
28. Bohnet, J. G. et al. Quantum spin dynamics and entanglement generation with hundreds of trapped ions. Science 352, 1297–1301 (2016).
29. Baumann, K., Guerlin, C., Brennecke, F. & Esslinger, T. Dicke quantum phase transition with a superfluid gas in an optical cavity. Nature 464, 1301–1306 (2010).
30. Bohnet, J. G. et al. A steady-state superradiant laser with less than one intracavity photon. Nature 484, 78–81 (2012).
31. Shahmoon, E., Grišins, P., Stimming, H. P., Mazets, I. & Kurizki, G. Highly nonlocal optical nonlinearities in atoms trapped near a waveguide. Optica 3, 725–733 (2016).
32. Chang, D. E., Jiang, L., Gorshkov, A. V. & Kimble, H. J. Cavity QED with atomic mirrors. New J. Phys. 14, 063003 (2012).
33. van Loo, A. F. et al. Photon-mediated interactions between distant artificial atoms. Science 342, 1494–1496 (2013).
34. Grover, J. A., Solano, P., Orozco, L. A. & Rolston, S. L. Photon-correlation measurements of atomic-cloud temperature using an optical nanofiber. Phys. Rev. A 92, 013850 (2015).
35. O'Connor, D. & Phillips, D. Time-Correlated Single Photon Counting (Academic Press, London, 1984).
36. Solano, P. et al. Alignment-dependent decay rate of an atomic dipole near an optical nanofiber. Preprint at (2017).
37. Svidzinsky, A. & Chang, J.-T. Cooperative spontaneous emission as a many-body eigenvalue problem. Phys. Rev. A 77, 043833 (2008).
38. Le Kien, F. & Rauschenbeutel, A. Nanofiber-mediated chiral radiative coupling between two atoms. Phys. Rev. A 95, 023838 (2017).
39. Hebenstreit, M., Kraus, B., Ostermann, L. & Ritsch, H. Subradiance via entanglement in atoms with several independent decay channels. Phys. Rev. Lett. 118, 143602 (2017).
40. Araújo, M. O., Guerin, W. & Kaiser, R. Decay dynamics in the coupled-dipole model. J. Modern Opt. (2017).

Acknowledgements

We are grateful to A. Asenjo-Garcia, H. J. Carmichael, D.E. Chang, J. P. Clemens, M. Foss-Feig, M. Hafezi, B.D. Patterson, W.D. Phillips, and P.R. Rice for the useful discussions. We give special thanks to P. Zoller who besides discussing the topic of the paper helped us improve the manuscript. This research is supported by the National Science Foundation of the United States (NSF) (PHY-1307416); NSF Physics Frontier Center at the Joint Quantum Institute (PHY-1430094); the USDOC, NIST, Joint Quantum Institute (70NANB16H168); and the Office of the Secretary of Defense of the United States, Quantum Science and Engineering Program.

Author information

P.S., F.K.F., L.A.O. and S.L.R. conceived the project. P.S. realized the measurements. P.B.-B. and P.S. developed the theoretical model. All authors discussed the results, contributed to the data analysis, and worked together on the manuscript.
Corresponding author: Correspondence to P. Solano.

Competing interests: The authors declare no competing financial interests.

Cite this article: Solano, P., Barberis-Blostein, P., Fatemi, F.K. et al. Super-radiance reveals infinite-range dipole interactions through a nanofiber. Nat Commun 8, 1857 (2017).
Discrete-Time Controllability for Feedback Quantum Dynamics

Francesca Albertini and Francesco Ticozzi

Dipartimento di Matematica Pura ed Applicata, Università di Padova, via Trieste 63, 35131 Padova, Italy
Dipartimento di Ingegneria dell'Informazione, Università di Padova, via Gradenigo 6/B, 35131 Padova, Italy

Controllability properties for discrete-time, Markovian quantum dynamics are investigated. We find that, while in general the controlled system is not finite-time controllable, feedback control allows for arbitrary asymptotic state-to-state transitions. Under further assumptions on the form of the measurement, we show that finite-time controllability can be achieved in a time that scales linearly with the dimension of the system, and we provide an iterative procedure to design the unitary control actions.

1 Introduction

For any controlled system, an in-depth study of its controllability properties under the available control capabilities is the necessary premise to the design of effective controls addressing a given task. For quantum systems, in particular, controllability properties have been studied mostly considering continuous-time models in the presence of open-loop, coherent controls [10, 1, 21, 3, 4, 5, 12]. In this setting, the evolution is deterministic and the problem can be studied with the tools of geometric control theory. Indeed, for classical deterministic systems it makes little sense to distinguish open-loop from feedback controllability: the fact that the control law can benefit from partial or complete information on the system trajectory does not modify the reachable set from a given initial state. In the quantum case, however, the introduction of measurements alone modifies the dynamical model by introducing a stochastic behavior, which has to be carefully taken into account. Considering the "open-loop" effect of measurements is not enough: the ability to condition the control choice on the measurement outcomes changes the controllability properties significantly, and in particular the set of reachable density operators, as will be argued later. Continuous-time controllability of open-loop quantum dynamical semigroups has been studied in [4, 5, 12]. Some preliminary ideas about discrete-time, open-system controllability have also been explored previously in [29]. In that case, however, no reference to a specific set of control capabilities was made (open-loop, closed-loop, coherent, incoherent, measurement-based control, …), the main focus being on the existence of general open-system dynamics connecting any given pair of states. In this paper we investigate the controllability properties of controlled, Markovian discrete-time quantum dynamics in open and closed loop. As a preliminary step, we will argue that a discrete-time system obtained by sampling inherits the open-loop controllability properties of the underlying continuous-time model, by resorting to previous results by Sontag [22, 23]. Open-loop controllability is a generic property for closed quantum systems, and this motivates our assumption of unitary controllability for the discrete-time systems we consider next. On the other hand, by introducing generalized measurements and closing the loop with conditional control actions, the dynamics changes drastically, and our main results shall focus on this setting.
We will present three simple examples illustrating how: (i) conditioning the control action on the outcome of a measurement influences the reachable sets of a controlled open-system evolution; however, (ii) feedback control does not in general ensure finite-time, state-to-state controllability; and (iii) feedback control does not allow for engineering of arbitrary dynamics. Next, we will prove that, under generic conditions on the chosen measurement, feedback allows for asymptotic state-to-state controllability. Lastly, we will study a particular, yet not so restrictive in practice, class of controlled dynamics that exhibits finite-time feedback state-to-state controllability. As a byproduct of the proof of finite-time controllability, an explicit way to construct the sequence of control actions is provided. Remarkably, the (maximum) number of feedback steps needed to obtain any desired state-to-state transition scales linearly with the dimension, namely it is twice the size of the system's Hilbert space. The paper is structured as follows: after recalling the essential features of quantum systems in Section 2.1 and the relevant notions of controllability in Section 2.2, in Section 3 we argue that the sampled dynamics inherits open-loop controllability from the underlying continuous-time model. Besides being of interest in itself, the ability of enacting arbitrary control actions in finite time is also a key assumption in Section 4, where we establish under which conditions feedback control ensures asymptotic state-to-state controllability. After presenting the general results on feedback approximate controllability in Section 4, Section 5 will describe a particular class of dynamics, proving that in this case finite-time state-to-state controllability can be achieved.

2 Discrete-time Quantum Dynamics and Controllability Notions

2.1 Quantum Systems

In this paper we will consider finite-dimensional quantum systems. Let us introduce some basic notation: to the quantum system of interest is associated a Hilbert space \mathcal{H}. In Dirac's notation (see e.g. [20]), vectors in \mathcal{H} are denoted by kets, |\psi\rangle, while the linear functionals on \mathcal{H} live on the dual space and are denoted by bras, \langle\psi|. Inner products are then represented by \langle\varphi|\psi\rangle (bra(c)kets). \mathcal{B}(\mathcal{H}) denotes the set of linear operators on \mathcal{H}. Consider X \in \mathcal{B}(\mathcal{H}): it acts on kets from the left, X|\psi\rangle, and on bras from the right, \langle\psi|X, where X^\dagger denotes the adjoint of X (and consistently the transpose-conjugate for its matrix representation). Self-adjoint (Hermitian) operators, X = X^\dagger, are associated to observable variables for the system. In a quantum statistical framework, a state for the system is associated to a trace-one, self-adjoint and positive-semidefinite operator \rho. The set of states, or density operators, is convex; its extreme points are the rank-one orthogonal projectors, the pure states, and its border contains all the states that are not full rank. In this paper we will consider generalized measurements, with a finite number of possible outcomes labeled by an index k.
Assume a system is in the state \rho. A generalized measurement is associated to a decomposition of the identity, \sum_k M_k^\dagger M_k = I, that allows us to compute the probability of measuring the k-th outcome as p_k = \mathrm{tr}(M_k \rho M_k^\dagger), and the conditioned state after the measurement as:

\rho_k = \frac{M_k \rho M_k^\dagger}{\mathrm{tr}(M_k \rho M_k^\dagger)}. (1)

A particular case is represented by direct measurements of observables, or projective measurements: consider an observable A with spectral representation A = \sum_k a_k \Pi_k. The eigenvalues a_k correspond to the possible outcomes of the measurement, labeled by k, and the probabilities and conditioned states can be computed by the formulas above with M_k = \Pi_k. We shall consider dynamics in the so-called Schrödinger picture, where the state is evolving while the observables are time-invariant. It follows from Schrödinger's equation (see next section, equation (5)) that an isolated, closed quantum system evolves unitarily: in discrete time, this means that for a sequence of times with time-intervals normalized to one, we have

\rho_{t+1} = U_t \rho_t U_t^\dagger, (2)

with U_t \in \mathcal{U}(\mathcal{H}) for all t's (here \mathcal{U}(\mathcal{H}) denotes the subset of unitary operators). In the open quantum system setting, general physically admissible evolutions are described by linear, Completely Positive and Trace Preserving (CPTP) maps [19, 9]. Any CPTP map \mathcal{E}, via the Kraus-Stinespring theorem [16], admits explicit representations of the form

\mathcal{E}(\rho) = \sum_k M_k \rho M_k^\dagger, (3)

also known as Operator-Sum Representation (OSR) of \mathcal{E}, where \rho is a density operator and \{M_k\} a family of operators such that the completeness relation \sum_k M_k^\dagger M_k = I is satisfied. We refer the reader to e.g. [2, 19, 9] for a detailed discussion of the properties of quantum operations and the physical meaning of the complete-positivity property. We recall that maps in the form (3) preserving the identity, \mathcal{E}(I) = I, are called unital. It is well known that the OSR of a given CPTP map is not unique: in fact the following holds (see [19], Theorem 8.2):

Theorem 2.1 (Unitary freedom in the OSR) Assume \{M_k\} and \{N_j\} are OSRs of quantum operations \mathcal{E} and \mathcal{F}, respectively. If the two lists have different lengths, append zero operators to the shorter one so that they match. Then \mathcal{E} = \mathcal{F} if and only if there exists a unitary matrix (u_{jk}) such that N_j = \sum_k u_{jk} M_k.

In the rest of the paper, however, open-system dynamics will be obtained as averages over states conditioned on a given measurement, followed by unitary control. By averaging over the possible outcomes of a generalized measurement we get \rho \mapsto \sum_k M_k \rho M_k^\dagger, which is a CPTP map, and physically represents the expected effect of a measurement on the state when the outcome is not known. The fact that this average in general differs from \rho is a remarkable difference with respect to classical probability. Also notice how, of all the possible OSRs associated to the unconditional map, only one, namely \{M_k\}, corresponds to the correct conditional states via (1). This means that when considering feedback protocols based on the conditional states (as we do in Section 4), different OSRs are not equivalent, and we have to consider the fixed OSR associated to the underlying measurement.

2.2 Notions of Controllability

When dealing with dynamics depending on external controls, it is of physical interest to know whether or not these controls can be chosen so as to drive the state of our model between two given configurations, either exactly or approximately. Different notions of controllability can be given depending on which is the relevant state for the dynamics. As an example, when dealing with multilevel quantum mechanical systems evolving in continuous time we may look at the evolutions on the complex unitary sphere, on the unitary operations, or on the density matrix operator.
More precisely, denoting by H the Hamiltonian including the controls, and considering the system isolated, we can study controllability of the Schrödinger equation, describing the evolution on the complex unitary sphere associated to pure states, or the corresponding equation acting on the propagator, or again the Landau-von Neumann equation for the evolution on the density operators. Thus, according to the problem we are looking at, we may be interested in the action of the same Hamiltonian on either of these descriptions. Of course, the controllability properties are connected: notice that if X_t denotes the solution of the propagator equation with initial condition X_0 = I, then we have |\psi(t)\rangle = X_t |\psi(0)\rangle and \rho(t) = X_t \rho(0) X_t^\dagger. These relations provide some correlations among the different types of controllability. In this paper we will deal instead with controlled open quantum models evolving in discrete time on the set of density operators. The dynamics will be generically described by a recursion of the form \rho_{t+1} = \mathcal{E}_{u_t}(\rho_t), where u_t belongs to the set of controls. Later we will be precise about the set of controls and the form of the map \mathcal{E}_u. In particular, we will deal with the case where \mathcal{E}_u comes from sampling a continuous-time model evolving according to the Landau-von Neumann equation (see (9)), and with the case where \mathcal{E}_u is a CPTP map emerging from measurement and feedback unitary control (see (10)). Since the subset of the pure states has a special physical meaning, we introduce the following different definitions of controllability properties.

• Pure state to Pure state Controllable (PPC) in T steps: if for every pure initial state there exists a sequence of controlled dynamical maps such that any other pure state can be reached at finite time T.

• Density operator to Density operator Controllable (DDC) in T steps: if for every initial state there exists a choice of controls such that any other density operator can be reached in finite time T.

Analogous definitions can be given for Pure state to Density operator Controllable (PDC) and Density operator to Pure state Controllable (DPC). Clearly, pure states being particular density operators, DDC implies the other properties. Weaker (approximate) versions of the same controllability properties are of particular interest when dealing with discrete-time systems coming from sampling continuous-time models. In fact, for these models, there are results correlating the continuous-time controllability with the discrete-time one, see Section 3 below. It is also possible to think of some notions of dynamical propagator controllability, where, instead of looking at the problem of steering a given initial state to a fixed final one, we look at the possibility of realizing some given dynamical maps. We say that a system is:

• Unitary controllable (UC) in T steps: if given any target unitary there exists a choice of controls that realizes the unitary evolution given by equation (2) as a composition of T controlled evolutions.

• Kraus map controllable (KC) in T steps: if given any CPTP map (see equation (3)) there exists a choice of controls such that the controlled evolution over T steps equals it.

An immediate relationship between the notions is that Kraus map controllability implies density operator controllability. This implication has also been highlighted in [29]. It can be easily derived considering the constant mapping that sends every state to the target state: this map can be extended to a linear CPTP map, and hence it admits an OSR (by the Kraus-Stinespring theorem [16, 19]).
This section is devoted to discussing under which conditions this can be attained, at least approximately. Consider the controlled Landau-von Neumann equation (6); the system is then controllable in continuous time if the Jurdjevic-Sussmann Lie-algebraic rank condition is satisfied [10].

Theorem 3.1 The system is controllable if and only if the Lie algebra generated by the drift and control Hamiltonians is the full unitary Lie algebra.

This condition is generic even with a single control field, that is, almost every pair of drift and control Hamiltonians ensures that the associated control Lie algebra is the full one [3]. Let us introduce the discrete-time model by forcing the control functions to be piece-wise constant on intervals of fixed length (in the terminology of [23], they are sampled control functions), and considering the associated evolution, where the exponential is to be understood as the formal, or time-ordered, exponential. Let us call the reachable set the set of states that can be reached from a given initial state by sampled control functions in a finite number of steps. We say that the system (6) is sampled controllable (either sampled PPC, PDC, DPC or DDC) if for every pair of states (pure or mixed, according to the type of controllability considered) there is a sample time such that the target state is in the reachable set, while it is approximately sampled controllable if for every pair of states there is a sample time such that the target state is contained in the closure of the reachable set. Sontag proved the following results on the relationship between (continuous-time) controllability and sampled controllability [22, 23].

Theorem 3.2 If a dynamical system on a simply connected group is controllable (in continuous time), then it is sampled controllable.

Theorem 3.3 If a dynamical system is controllable (in continuous time), then it is approximately sampled controllable.

In our setting, considering the dynamical equation (8), Theorem 3.2 ensures that if the system is continuous-time controllable, we can obtain any unitary operator in a finite number of discrete steps by sufficiently fast sampled control. An open problem concerns establishing estimates of the time needed to realize a given unitary transformation, and how the sample time may depend on the degree of accuracy we require for approximate sampled controllability.

4 Results on Feedback Controllability

4.1 Discrete-time feedback control and background

We introduce here a discrete-time, Markovian feedback control scheme [6, 18, 14], that has been recently studied in depth in [7] focusing on stabilization problems. Assume that we can:

• Enact a fixed, given generalized measurement associated to an OSR \{M_k\};

• Engineer a set of arbitrary unitary control actions at each time, choosing the control U_k when the k-th outcome of the measurement is obtained.

Thus, if the state at time t was \rho_t, the state at time t+1 conditioned on the k-th outcome of the generalized measurement is U_k M_k \rho_t M_k^\dagger U_k^\dagger / \mathrm{tr}(M_k \rho_t M_k^\dagger). Hence, averaging over the possible outcomes we get:

\rho_{t+1} = \sum_k U_k M_k \rho_t M_k^\dagger U_k^\dagger. (10)

We next recall a characterization of the OSRs that can be realized by exploiting these control capabilities [7]. This and the following results heavily rely on a canonical form of the QR decomposition that is recalled in Appendix A.

Proposition 4.1 A measurement with associated operators \{N_k\} can be simulated by a certain choice of unitary controls from a measurement \{M_k\} if and only if there exists a reordering of the outcome indices such that the canonical R-factors of the corresponding operators coincide, where the canonical R-factor is computed as described in Appendix A.

The potential of the feedback strategy for pure state preparation is established by the following [8, 7].
Theorem 4.1 Consider an orthogonal subspace decomposition of the Hilbert space, and a given generalized measurement associated to Kraus operators \{M_k\}. Let the canonical R-factors associated to the M_k be computed in a basis consistent with the Hilbert space decomposition above. The task of achieving global asymptotic stability of the target subspace by a feedback unitary control policy is feasible if and only if there exists a k for which the corresponding canonical R-factor satisfies a suitable block condition with respect to this decomposition.

Notice that if a pure state is globally asymptotically stabilizable, it means that it belongs to the closure of the reachable set from any initial state. In the next sections we will use this fact to link the feedback stabilization problem to feedback controllability problems.

4.2 Three examples

We here present three examples that will provide motivation for the study of feedback controllability, and counterexamples to generic, finite-time DPC (and hence DDC) and KC properties. Yet, they will suggest some natural questions about weaker controllability properties. Let us agree that \mathcal{R}(\rho) denotes the reachable set from \rho.

Example 1: Feedback-controllability is different from open-loop controllability. An extreme example is the following: Consider a completely depolarizing channel for a two-level system, with OSR given (up to normalization) by the identity and the three Pauli matrices, M_k = \sigma_k / 2, k = 0, \dots, 3. This means that \mathcal{E}(\rho) = I/2 for every \rho. On the other hand, if conditional controls are allowed, it is easy to see that choosing e.g. U_k = \sigma_k we get \rho_{t+1} = \sum_k \sigma_k M_k \rho_t M_k^\dagger \sigma_k^\dagger = \rho_t. Hence, at least the set of states isospectral to the initial condition is in the reachable set. However, even the feedback control strategy we are considering has its limitations. A key one is the time needed to reach the desired state, in particular pure states.

Example 2: Feedback purification cannot in general be obtained in finite time. Consider a full-rank state \rho > 0. Assume that the generalized measurement we consider has OSR \{M_k\} with at least one M_k full rank. Then, for any control choice, the term of (10) corresponding to the full-rank operator is strictly positive, while the remaining terms are positive semidefinite. Hence, being a sum of a strictly positive operator and positive-semidefinite ones, \rho_{t+1} is still full rank. By iterating the above reasoning, we get that \rho_t is full rank for any finite t. Thus, no state on the border of the set of density operators can be reached in finite time from a generic state. The following generalization of this example is in fact immediate:

Proposition 4.2 Consider a feedback controlled system as in (10). If at least one of the M_k's in the OSR has full rank, then no state on the border of the set of density operators is reachable in finite time from a full-rank state.

One is then led to ask: is the controlled system at least asymptotically DPC? Is there a set of conditions under which the system can be rendered DPC in finite time? We will prove that feedback discrete-time quantum dynamics are generically asymptotically (or approximately, in the definition given in Section 3) controllable. In Section 5 we provide some conditions on the measurement OSR that ensure that the feedback system is both DPC and PDC in finite time.

Example 3: Feedback control does not ensure Kraus-map controllability. Consider two CPTP maps on a two-level system, with OSRs \{M_k\} and \{N_k\}. Note that the first OSR's elements are scalar multiples of unitaries, and hence they all have scalar matrices as canonical R-factors, while the second OSR is already in canonical form. Assume we want to generate the CPTP map associated with \{N_k\} by feedback control as in (10). The canonical R-factors being different, the only hope is to feedback-enact an OSR that is equivalent to \{N_k\}. However, it is immediate to see that for any choice of the feedback unitaries the dynamical map remains unital, while the one associated to \{N_k\} is not.
At a first look, this may seem in contrast with previous results: for example, the main result in [18] shows how to feedback engineer arbitrary measurements on the system of interest by using an ingenious combination of ancillary systems, simple interaction Hamiltonians, projective measurements and fast-pulse control. The attained result is a weaker KC property, which needs more general control capabilities, including (essentially) the ability of changing the measurement action, and ensures that the enacted dynamics corresponds in general to the desired one only at lower order (in time). What is the class of CPTP maps one can realize via feedback? A partial answer, of course, is given by Proposition 4.1. However, due to the non-uniqueness of the OSR, the fact that the target CPTP map has an OSR that in canonical form is the same as that of the measurement used in the feedback loop is only sufficient for its realizability by means of a feedback protocol. Providing conditions for exact KC, or characterizing the reachable set of propagators, are, to the best of our knowledge, open problems.

4.3 Generic asymptotic controllability

Enforcing generalized measurements on the system, one does not lose pure state controllability.

Lemma 4.1 Assume that the controlled system dynamics is described by (10). Then the system is PPC in one step.

Proof: Consider a pure initial state |\psi\rangle\langle\psi| and a target |\phi\rangle\langle\phi|. The state conditioned on the k-th outcome of the measurement step is still pure, namely proportional to M_k|\psi\rangle\langle\psi|M_k^\dagger. Then, to reach the target, it is sufficient to consider a set of control actions U_k such that U_k M_k |\psi\rangle is proportional to |\phi\rangle for each k.

Can we always prepare a given pure state starting from an arbitrary density matrix? The answer is generically positive, at least asymptotically, if we allow for feedback control.

Theorem 4.2 Assume the system dynamics to be described by (10), with a fixed measurement with OSR \{M_k\} and arbitrary conditional control actions U_k. Then the system is approximately DPC if and only if there is a k such that M_k \neq c U for every scalar c and every unitary U.

Proof: As a first step, by properly constructing a basis and invoking Theorem 4.1, we will first show that a pure state is stabilizable if some M_k is not proportional to a unitary. This condition implies that the corresponding canonical R-factor is not a scalar matrix. Let us consider two cases: A) If at least one of the canonical R-factors is not diagonal, i.e. there exists a non-zero off-diagonal element, reorder the basis so that the basis vector corresponding to its column becomes the first, and the one corresponding to its row becomes the second. Since the two corresponding columns of the operator were not orthogonal, they remain non-orthogonal after the change of basis. Hence, when computing again the canonical R-factor, the upper-right block is non-zero. According to Theorem 4.1, the first basis state, in the new basis, can be made globally asymptotically stable. B) If all the canonical R-factors are diagonal, but at least one is not a scalar matrix, we can find a reordering of the basis so that the upper-left block of that one has two different diagonal entries. Let us consider a further unitary change of basis, acting on the right, that mixes the first two basis vectors: the two diagonal entries being different, the first two columns of the transformed operator become non-orthogonal. If we compute the canonical R-factor in the new basis, its first two columns are non-orthogonal, and hence the upper-right entry is non-zero. Notice that the construction above works also when one of the two diagonal entries is zero. Thus the first basis state, in the new basis, can be made asymptotically stable by feedback control. To conclude the “if” implication, assume we reach a δ-neighborhood (in trace distance) of the target pure state at some time T.
Then at the following step we can apply a different set of unitary control actions, as in Lemma 4.1, which realizes the one-step transition to the target, and since CPTP maps are trace-norm contractions we end up in a δ-neighborhood (in trace distance) of the target. On the other hand, assume that every M_k is proportional to a unitary, M_k = c_k V_k. Then the feedback dynamics (10) becomes \rho_{t+1} = \sum_k |c_k|^2 (U_k V_k) \rho_t (U_k V_k)^\dagger. A map of this form can only reach states in the convex hull of the set of states isospectral to the initial one. Hence, if the initial state is in the interior of the set of density operators, the closure of the reachable set cannot contain any pure state.

It is worth remarking that: (i) the proof is constructive, since it implicitly uses the constructive result of [7]; (ii) relying on a stabilization procedure, the control strategy is robust with respect to uncertainty on the initial state; (iii) the time needed for approximately reaching a neighborhood of the target state can be estimated by computing the slowest eigenvalue of the feedback-controlled map; (iv) the condition M_k \neq c U for some k is generic, and it fails only for probabilistic averages of unitary effects. In other words, the class of measurements that do not allow for DPC are those that are associated to an average over the conditional states of random unitary form, \rho \mapsto \sum_k p_k W_k \rho W_k^\dagger with W_k unitary. Furthermore, this corollary of Proposition 4.2 comes at no cost:

Corollary 4.1 Assume that we can control the system as in Theorem 4.2 above. Then asymptotic feedback purification of the state can be achieved if and only if there is a k such that M_k \neq c U for every scalar c and every unitary U.

The results above in turn imply that feedback makes the system DDC, provided that we can randomly choose the unitary controls in a finite set with given probabilities:

Corollary 4.2 Assume that we can control the system as in Theorem 4.2 above, and in addition we can pick a control action at random from a finite set with an arbitrary probability distribution. Then the system is approximately DDC if and only if there is a k such that M_k \neq c U for every scalar c and every unitary U.

Proof: By Theorem 4.2, there exists a finite time so that we can get arbitrarily close to a given pure state. Assume the target state has spectral decomposition with eigenvectors |\phi_j\rangle and eigenvalues q_j, and define the control actions so that each steers the prepared pure state onto one of the |\phi_j\rangle. Then at some time it suffices to extract at random one of these control actions with probability q_j, so that the average dynamics (disregarding which control has been extracted) gives a state arbitrarily close to the target. Notice that, up to the last step, the choice of the unitary control actions is time-independent, that is, at each iteration the average dynamics is represented by the same OSR.

5 Sufficient conditions for finite-time state controllability

Assume that a certain generalized measurement has only two outcomes, and associated operators M_1 and M_2 such that:

M_1^\dagger M_1 + M_2^\dagger M_2 = I. (13)

Moreover, assume: 1. Both matrices are diagonal; 2. Both matrices are singular. Assumption 1) is not restrictive under feedback control assumptions, as is shown in the following lemma.

Lemma 5.1 Consider two generic M_1, M_2 that satisfy (13). Then there exist unitaries such that, up to a change of the reference basis and a feedback unitary, M_k is diagonal for k = 1, 2.

Proof: By appropriately choosing the reference basis through a unitary, and enacting a (feedback) unitary, we can diagonalize M_1 by e.g. singular value decomposition. Then M_2^\dagger M_2 = I - M_1^\dagger M_1 must be diagonal, since (13) holds, and hence it admits a diagonal square root, which can be chosen as M_2 up to a further feedback unitary.

Given assumptions 1)-2), without loss of generality, the two matrices then have the following form with respect to a reference basis:

M_1 = \mathrm{diag}(1, 0, a_3, \dots, a_n), \qquad M_2 = \mathrm{diag}(0, 1, b_3, \dots, b_n), (14)

where, to satisfy (13), we must have |a_j|^2 + |b_j|^2 = 1 for j = 3, \dots, n. It is immediate to see that a measurement in this form is able to distinguish with certainty at least the first two orthogonal states of the basis in which M_1, M_2 have the form (14). We can now prove that the feedback controlled dynamics is finite-time DPC.
Proposition 5.1 There exists a choice of feedback unitary controls such that, for any initial state, the state reached after finitely many steps is a pure state.

Proof: Let the target state be the projector onto a given unit vector of the reference basis. At each step, define the two permutation matrices associated to the outcomes by suitable relationships on the basis vectors, and let the feedback unitaries be any two unitary matrices compatible with them. We first prove by induction on the step number that the state is of the following type: at a given step the state has support only on the subspace generated by a correspondingly reduced set of basis vectors (17). For the first step the statement is trivial, so assume that (17) holds up to a given step; then, using the form (14) of the measurement operators and the choice of the permutation matrices above, the same holds for both conditioned states. Using the same argument and exchanging the role of the two outcomes, we get the analogous statement for the other branch. Thus equation (17) holds for the next step. Using (17) for the last steps, and omitting for simplicity the index, summing up the contributions of the two outcomes we obtain a pure state, as desired.

The converse is also true, that is, the system is finite-time PDC.

Proposition 5.2 Assume that the initial state is a pure state; then, for any target density operator, there exists a sequence of controls of finite length that steers the initial state to the target.

Proof: The explicit construction of a set of effective controls can be done following the procedure detailed below. First step: prepare an appropriate pure state. If the initial state already has the required form, let the first control be any unitary matrix; otherwise, let it be any unitary matrix mapping the initial vector onto the required one. Then the resulting state is again a pure state of the desired form. Second step: preparing the first element. Choose the two feedback unitaries so that the two conditioned states are mapped onto the first target eigenvector and onto a vector orthogonal to it, respectively. Successive steps: notice that the two components obtained so far are orthogonal. Choose the next feedback unitaries so that the part already prepared is left untouched while the remaining component is rotated further. Iterating this construction, after finitely many steps we will get a state with the correct structure. Final two steps: finalizing the construction.
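The feedback map (10) that underlies all of the constructions above is easy to experiment with numerically. The following is a minimal sketch (our own illustration, not code accompanying the paper): it iterates the averaged measurement-plus-feedback map for a single qubit, with an illustrative two-outcome measurement satisfying (13) and feedback unitaries chosen to steer the state toward the pure target |0><0|. The operators, the measurement strength p and the number of iterations are all assumptions made for the example.

import numpy as np

# Two-outcome measurement satisfying M1^dag M1 + M2^dag M2 = I (cf. (13)).
p = 0.3                                    # illustrative measurement strength (assumption)
M1 = np.diag([1.0, np.sqrt(1.0 - p)])
M2 = np.diag([0.0, np.sqrt(p)])
assert np.allclose(M1.conj().T @ M1 + M2.conj().T @ M2, np.eye(2))

# Feedback law: do nothing after outcome 1, flip |1> -> |0> after outcome 2.
U1 = np.eye(2)
U2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli X

target = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
rho = np.eye(2) / 2.0                         # start from the maximally mixed state

def feedback_step(rho):
    """One iteration of the averaged feedback map (10): rho -> sum_k U_k M_k rho M_k^dag U_k^dag."""
    out = np.zeros((2, 2), dtype=complex)
    for U, M in ((U1, M1), (U2, M2)):
        out += U @ M @ rho @ M.conj().T @ U.conj().T
    return out

for t in range(1, 51):
    rho = feedback_step(rho)
    # Trace distance to the target: half the sum of absolute eigenvalues of rho - target.
    dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - target)))
    if t % 10 == 0:
        print(f"step {t:2d}: trace distance to |0><0| = {dist:.3e}")

Consistently with Proposition 4.2, the operator M1 above is full rank, so the target pure state is only approached asymptotically: the trace distance decays geometrically but never reaches zero in finitely many steps.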
Pattern Formation

In the coming weeks until February 20, 2020, Anna Hein, student of science communication at KIT, intends to conduct a study on the Modellansatz Podcast within her master's thesis. For this purpose, she would like to conduct some interviews with you, the listeners of the Modellansatz Podcast, to find out who listens to the podcast and how and for what purpose it is used. The interviews will be anonymous and will take about 15 minutes each. To participate in the study, you can register with Anna Hein until 20.2.2020 at . We would be very pleased if many interested parties would contact us.

This is the second of three conversations recorded at the Conference on Mathematics of Wave Phenomena, 23-27 July 2018 in Karlsruhe. Gudrun is in conversation with Mariana Haragus about Bénard-Rayleigh problems. On the one hand this is a much studied model problem in Partial Differential Equations. There it has connections to different fields of research due to the different ways to derive and read the stability properties and to work with nonlinearity. On the other hand it is a model for various applications where we observe an interplay between buoyancy and gravity, and for pattern formation in general. An everyday application is the following: If one puts a pan with a layer of oil on the hot stove (in order to heat it up), one observes different flow patterns over time. In the beginning it is easy to see that the oil is at rest and not moving at all. But if one waits long enough, the still layer breaks up into small cells, which makes it more difficult to see the bottom clearly. This is due to the fact that the oil starts to move in circular patterns in these cells. For the problem this means that the system has more than one solution, and depending on physical parameters one solution is stable (and observed in real life) while the others are unstable. In our example the temperature difference between bottom and top of the oil gets bigger as the pan is heating up. For a while the viscosity and the weight of the oil keep it still. But if the temperature difference is too big, it is easier to redistribute the different temperature levels with the help of convection of the oil. The question for engineers as well as mathematicians is to find, in theory, the point where these convection cells evolve, in order to keep processes on either side of this switch. In theory (not for real oil, because it would start to burn), for even bigger temperature differences the original cells would break up into even smaller cells to make the exchange of energy faster. In 1903 Bénard did experiments similar to the one described in the conversation, which fascinated a lot of his colleagues at the time.
The equations were derived a bit later, and already in 1916 Lord Rayleigh found the 'switch', which nowadays is called the critical Rayleigh number. Its size depends on the thickness of the configuration, the viscosity of the fluid, the gravitational acceleration and the temperature difference. Only in the 1980s did it become clear that Bénard's experiments and Rayleigh's analysis did not really cover the same problem, since in the experiment the upper boundary is a free boundary to the surrounding air while Rayleigh considered fixed boundaries. And this changes the size of the critical Rayleigh number. For anyone doing experiments it is also a familiar observation that the shape of the container, with small perturbations of the ideal shape, changes the convection patterns.

Mariana studies the dynamics of nonlinear waves and patterns. This means she is interested in understanding processes which change over time. Her main questions are:

• Existence of observed waves as solutions of the equations
• The stability of certain types of solutions
• How different waves interact

She treats her problems with the theory of dynamical systems and bifurcations. The simplest tools go back to Poincaré and the understanding of ordinary differential equations. One can consider the partial differential equations as describing an evolution in an infinite-dimensional phase space. Here, in the 1980s, Klaus Kirchgässner had a few crucial ideas for how to construct special solutions to nonlinear partial differential equations. It is possible to investigate water-wave problems, which are dispersive equations, as well as flow problems, which are dissipative. Together with her colleagues in Besançon she is also very keen to match experiments for optical waves with her mathematical analysis. There Mariana is working with a variant of the Nonlinear Schrödinger equation called the Lugiato-Lefever equation. It has many different solutions, e.g. periodic solutions and solitons.

Since 2002 Mariana has been a professor in Besançon (University of Franche-Comté, France). Before that she studied and worked in a lot of different places, namely in Bordeaux, Stuttgart, Bucharest, Nice, and Timisoara.
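To make the role of the critical Rayleigh number mentioned above a bit more concrete, here is a small numerical sketch (not part of the episode): it evaluates the standard dimensionless Rayleigh number Ra = g·α·ΔT·d³/(ν·κ) for a thin heated fluid layer and compares it with the commonly quoted critical value of about 1708 for two rigid boundaries. All material parameters below are illustrative placeholders, not measured values for any particular oil.

```python
# Sketch: evaluate the Rayleigh number for a heated fluid layer and compare
# it with the classical critical value (~1708 for two rigid boundaries).
# All parameter values are illustrative placeholders.

def rayleigh_number(g, alpha, delta_T, d, nu, kappa):
    """Dimensionless Rayleigh number Ra = g * alpha * dT * d^3 / (nu * kappa)."""
    return g * alpha * delta_T * d**3 / (nu * kappa)

RA_CRITICAL_RIGID = 1708.0   # approximate critical value, rigid top and bottom

g     = 9.81      # gravitational acceleration [m/s^2]
alpha = 7e-4      # thermal expansion coefficient [1/K]   (placeholder)
d     = 5e-3      # layer thickness [m]                   (placeholder)
nu    = 5e-5      # kinematic viscosity [m^2/s]           (placeholder)
kappa = 1e-7      # thermal diffusivity [m^2/s]           (placeholder)

for delta_T in (1.0, 5.0, 20.0, 80.0):       # temperature differences [K]
    ra = rayleigh_number(g, alpha, delta_T, d, nu, kappa)
    state = "convection cells expected" if ra > RA_CRITICAL_RIGID else "layer stays at rest"
    print(f"dT = {delta_T:5.1f} K  ->  Ra = {ra:10.1f}  ({state})")
```

With these placeholder values the layer stays at rest for small temperature differences and crosses the convection threshold somewhere between 5 K and 20 K, which is exactly the kind of 'switch' discussed in the conversation.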
SEMINAR - Quantum Mechanics / Molecular Dynamics (TU Berlin)

Both quantum mechanical and classical molecular dynamics systems can be described by means of a state space. One of the main problems in the numerical simulation of such systems is the growth of the dimension of this state space with the number of particles (degrees of freedom) involved. This leads to an exponential increase in the number of possible states and thus in the numerical complexity, the so-called curse of dimensionality. In the quantum mechanical world there are different concepts to describe the phenomena mathematically. Starting from the Schrödinger equation or the Schrödinger operator, we deal with concepts such as the Hartree-Fock method, the coupled cluster and the quantum Monte Carlo method. However, we also want to cover topics apart from the ones mentioned above, such as the basics of quantum computing or QM/MM coupling methods. Details will be announced in the seminar.
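To get a feel for this exponential growth, here is a generic illustration (not material from the seminar page; the grid resolution and byte counts are arbitrary choices): it tabulates the memory needed to store a many-particle wave function sampled on a grid.

```python
# Illustration of the "curse of dimensionality": storing a wave function on a
# grid requires memory that grows exponentially with the number of particles.

def grid_wavefunction_size(n_particles, points_per_axis=32, dims_per_particle=3,
                           bytes_per_amplitude=16):  # complex128
    """Number of complex amplitudes and bytes needed for a grid-based wave function."""
    n_amplitudes = points_per_axis ** (dims_per_particle * n_particles)
    return n_amplitudes, n_amplitudes * bytes_per_amplitude

for n in range(1, 6):
    amps, mem = grid_wavefunction_size(n)
    print(f"{n} particle(s): {amps:.3e} amplitudes, about {mem / 1e9:.3e} GB")
```

Already for two particles a modest 32-point grid per axis requires on the order of tens of gigabytes, which is why methods such as Hartree-Fock, coupled cluster and quantum Monte Carlo avoid representing the full wave function on a grid.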
Web-Schrödinger 3.3 (C) 2007-2021 G. I. Márk, Ph. Lambin, L. P. Biró, EK MFA Budapest, Hungary -- Uni Namur, Belgium

Subscribe to the mailing list to receive e-mail news about Web-Schrödinger (new versions, etc.). Watch introductory videos from the Web-Schrödinger YT channel.

Web-Schrödinger is a program for the interactive solution of the stationary (time independent) and time dependent two dimensional (2D) Schrödinger equation. The program itself runs on our server and can be used through the Internet with a simple web browser (Internet Explorer, Mozilla, Opera and Chrome were tested). Nothing is installed on the user's computer. The user can load, run, and modify ready-made example files, or prepare her/his own configuration(s), which can be saved on her/his own computer for later use. See [1] for a detailed description of the program.

Theoretical background

Time dependent Schrödinger equation

The time evolution of the quantum mechanical wave function ψ(r,t) is governed by the time dependent Schrödinger equation

iħ ∂ψ(r,t)/∂t = Hψ(r,t),

where r = (x,y) is the position coordinate, t is the time and H = K + V is the Hamilton operator, K is the operator of the kinetic energy, and V = V(x,y) is the operator of the potential energy. When the potential function V(x,y) and the initial wave function ψ(x,y,t0) = ψ0(x,y) are known, the time dependent Schrödinger equation determines the wave function ψ(x,y,t) for any time value. We can calculate all observables from the wave function, for example the ρ(x,y,t) probability density and the j(x,y,t) probability current density.

Stationary Schrödinger equation

ρ(x,y,t) gives the probability of finding the quantum mechanical particle around the point (x,y) at time t. We call those ψ(x,y,t) = ψ(x,y) states, where ψ(x,y) is independent of time, stationary states. The stationary (time independent) states are given by the stationary Schrödinger equation

Hψ(r) = Eψ(r),

where E is the energy of the state.

User Guide

All functions of the program are available through a menu system. Upon starting the program a default configuration is loaded; the user can immediately run this through the Calculation menu, or load another configuration with the Load Example or Load menu points. All parameters can be modified in the Edit menu and the current setup can be saved at any time with the help of the Save function.

Menu system

Load Example

We have prepared several characteristic examples, illustrating the most important phenomena of quantum mechanics, including the spreading of the wave packet, tunneling, bound states, etc. The current list of the examples is given in Appendix A. The example library is continuously expanding, see Appendix A for the up-to-date status. After loading an example setup the user can study and modify the parameters through the Edit menu or go straight to Calculation to calculate the time development and/or the stationary states.

This function makes it possible to load the user's own configuration files from her/his own computer. Such parameter files can be created either by saving a (possibly modified) example configuration (or the default configuration) or by writing a configuration file from scratch with a text editor or any other program.

The current state of the parameters can be saved at any time to the user's own computer.

The wave function and the potential are represented on a 2D mesh. Here you can specify the number of mesh points (Nx, Ny) in the x and y direction and the size of the calculation region in Ångström (sx, sy).
For typical applications the Δx = sx/Nx, Δy = sy/Ny values should be between 0.1 and 1 Å. The origin of the coordinate system is in the middle of the calculation region. The numerical algorithm uses a periodic boundary condition, i.e. what goes out of the calculation region at the right side comes in at the left side. It is as if the whole plane were "tiled" with the calculation region. As a consequence, when the wave packet approaches the boundary of the calculation box, it "meets" its copy in the neighboring box and this causes unphysical interference effects to appear in the probability density. The parameters of the calculation (spatial and temporal mesh, potential, and initial state) should be carefully chosen to avoid this effect. V0 gives the default value of the potential in eV (electronvolt).

Note: due to the difference of the algorithms used for the solution of the time dependent and stationary Schrödinger equations, generally a finer mesh is necessary for the time dependent calculation. E.g. Nx = 256 is a typical value for the time dependent, and Nx = 64 for the stationary calculation.

The potential V(x,y) can be interactively assembled from objects of several types: circle, rectangle, and plane. Any number of these objects can be given. For each object the user can specify its geometrical parameters and its potential value. For pixels where several objects overlap, the object given most recently determines the pixel potential value. The program shows the potential function generated from the current set of objects as a grayscale image.

Initial state

Here the user can specify the initial wave function ψ0(x,y), which is the input of the time dependent calculation (it is not used for the stationary calculation). Its general form is a so-called truncated plane wave [8] wave packet, i.e. a Gaussian wave packet convolved with a 2D square window function. The program displays the chosen initial state together with the potential function, as a composite color image. In order to ensure that the wave packet has its ideal form (minimal size and flat envelope) when it hits the potential, a time retardation procedure is included in the initial state preparation. The user can specify the retardation time by giving the bx, by distance values, which mean that after proceeding such distances in x and y the wave packet should have its "ideal" form. ax, ay give the spatial width of the wave packet. The initial state should be specified in such a way that its overlap with the potential objects is negligible.

The user can place horizontal or vertical line segments (detectors) into the calculation window. The program calculates the probability current I(t) passing through each line segment during the time evolution of the wave packet and also its time integral T for the whole calculation time. T is called transmission, because it gives the probability that the quantum particle crosses the given line segment (detector).

Calculation parameters

Here we can specify the parameters of the time dependent and the stationary calculation. Parameters used for the time evolution calculation: the number of time points is Nt and Δt gives the calculation time step. Δt has to be given in atomic time units, 1 au time = 0.0242 fs (femtosecond). The numerical algorithm imposes a condition on the maximal Δt value that can be used: Δt < (4/π)·(Δx)²/D, where D is the number of dimensions, D = 2 in 2D. (This formula is valid in atomic units, i.e. one has to insert Δx in Bohr, 1 Bohr = 0.529 Å. For the default Δx = 0.3 Å, Δt = 0.2 au is suitable and this is the default time step.)
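A quick, program-independent sketch of this stability bound, using only the inequality and unit conversions quoted above (the helper function is my own, not part of Web-Schrödinger):

```python
# Sketch: check a proposed time step against the stability bound
# dt < (4/pi) * dx^2 / D quoted above (atomic units: dx in Bohr, dt in au).
import math

BOHR_IN_ANGSTROM = 0.529          # 1 Bohr = 0.529 Angstrom
AU_TIME_IN_FS = 0.0242            # 1 atomic time unit = 0.0242 fs

def max_stable_dt_au(dx_angstrom: float, dims: int = 2) -> float:
    """Largest allowed time step (atomic units) for grid spacing dx (Angstrom)."""
    dx_bohr = dx_angstrom / BOHR_IN_ANGSTROM
    return (4.0 / math.pi) * dx_bohr**2 / dims

dx = 0.3                           # default grid spacing in Angstrom
dt_max = max_stable_dt_au(dx)
print(f"dx = {dx} A  ->  dt_max = {dt_max:.3f} au = {dt_max * AU_TIME_IN_FS:.4f} fs")
print("default dt = 0.2 au is", "stable" if 0.2 < dt_max else "too large")
```

For Δx = 0.3 Å this gives a maximal step of roughly 0.2 au, consistent with the default time step mentioned above.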
It is not necessary, however, to display the results on such a fine time scale. Therefore the user can input the "display timestep", i.e. the number of calculation time steps after which the wave function is displayed. Parameters used for the stationary calculation: Nstat gives the number of states calculated.

Time development

When the user hits the "RUN" button, the time development calculation starts on the server. The progress of the calculation is shown by small thumbnail images. For typical parameters the time development calculation takes 1-2 minutes. (If there are more concurrent jobs on the server – either from this user or from others – the calculation may be somewhat slower. The program writes out the number of concurrent jobs – if there are any – after hitting the "RUN" button.)

When the user hits the "RUN" button, the calculation of the stationary states starts on the server. It takes several seconds or minutes, depending on the mesh size and the number of orbitals requested. (If there are more concurrent jobs on the server – either from this user or from others – the calculation may be somewhat slower. The program writes out the number of concurrent jobs – if there are any – after hitting the "RUN" button.) When the calculation is completed, the program displays the energies and the wave functions of the stationary states.

After the time development calculation is completed on the server, the time development of the probability density is displayed in composite color images. The program first calculates the global maximum of the probability and normalizes each frame using this value. A nonlinear color scale (γ = 2.5) is used in order to facilitate presentation. If the user placed detectors into the calculation window before the start of the calculation, the program also displays the I(t) probability current functions and T transmission values for each of the detectors.

Appendix A: Examples

The examples are divided into two groups: examples for time development calculation and examples for stationary states calculation. Nothing prevents performing both a time evolution and a stationary states calculation for the same example, but those examples listed under "time development" demonstrate interesting cases of time development, while those listed under "stationary states" demonstrate interesting cases of eigenstates. For some cases, however, e.g. for a potential box, both the time evolution and the stationary states give instructive results. The examples were carefully designed to prevent the effect of the periodic boundary condition. For the time evolution examples, this was accomplished by halting the time development calculation before the wave packet reaches the edge of the calculation box. For the stationary states calculation, we applied a potential wall at the edges in each example.

Examples for time development calculation

• A wave packet is approaching a periodic potential with energy in the allowed band. The wave packet passes through the potential.
• A wave packet is approaching a periodic potential with energy in the forbidden band. The wave packet is reflected from the potential.
• Wave packet scattering on a potential forming a Christmas tree.
• Quantum analogue of a projectile motion: wave packet scattering on a linearly increasing potential. The "Results" menu shows the transferred probabilities and probability densities crossing the detectors shown by the red line segments.
• Scattering of a wave packet on a circular hardcore potential. Note the circular component of the final state.
• Demonstration of the "quantum revival" phenomenon.
• Simulation of Scanning Tunneling Microscope imaging of a carbon nanotube. See [4] for details.
• Tunneling of a wave packet through a potential wall of V > E. The wave packet hits the wall at a 75° angle.
• Tunneling of a wave packet through a potential wall of V > E. The wave packet hits the wall at a 90° angle.
• Two colliding billiard balls on a 1D track, shown in 2D configuration space. For more explanation, see this video.
• Comparison of an experiment, a classical mechanics simulation and a quantum mechanics simulation of two colliding billiard balls on a one-dimensional track. Introduction of the concept of configuration space.
• Comparison of two-particle states for interacting and non-interacting particles. Two-particle states for interacting particles show Wigner-crystal-like behavior.
• Two coupled pendulums, shown in 2D configuration space.

Examples for stationary states calculation

• Eigenstates of a rectangular potential box.
• Eigenstates of a circular potential box.
• Eigenstates of a two-dimensional radial quadratic potential.
• Eigenstates of a simple model for a diatomic molecule. Note that the two lowest orbitals are "s"-like orbitals, similar to the atomic orbitals, the third orbital is a "sigma" orbital, and the fourth and fifth orbitals are "pi" orbitals.
• A potential step inside a potential box: the left half of the potential has a slightly higher potential value than the right half.

Example file contest

Develop your own example files demonstrating interesting quantum phenomena! You can send the SAVE-d files to mark@mfa.kfki.hu . Best example files will be included in the Web-Schrödinger "Examples" directory. Please attach also a brief description of the example!

Mailing list

We have a mailing list for announcing new features and examples. The mailing list is hosted by Google Groups.

References

1. Márk, Géza, I.: Web-Schrödinger: Program for the interactive solution of the time dependent and stationary two dimensional (2D) Schrödinger equation; arXiv:2004.10046 [physics.ed-ph] (2020)
2. Schrödinger equation; (in several languages)
3. Time development of quantum mechanical systems; (1995-) (English and Hungarian)
4. Márk, Géza, I.; Biró, László, P.; Gyulai, József: Simulation of STM images of 3D surfaces and comparison with experimental data: carbon nanotubes; Phys. Rev. B 58, 12645 (1998).
5. Márk, Géza, I.; Biró, László, P.; Gyulai, József; Thiry, Paul, A.; Lucas, Amand, A.; Lambin, Philippe: Simulation of scanning tunneling spectroscopy of supported carbon nanotubes; Phys. Rev. B 62, 2797 (2000).
6. Lambin, Philippe; Márk, Géza, I.; Meunier, Vincent; Biró, László, P.: Computation of STM images of carbon nanotubes; Int. J. Quantum Chem. 95, 495 (2003).
7. Márk, Géza, I.; Biró, László, P.; Lambin, Philippe: Calculation of axial charge spreading in carbon nanotubes and nanotube Y-junctions during STM measurement; Phys. Rev. B 70, 115423-1 (2004).
8. Géza I. Márk, PhD Thesis, FUNDP Namur, 2006.
9. Márk, Géza, I.; Vancsó, Péter; Hwang, Chanyong; Lambin, Philippe; Biró, László, P.: Anisotropic dynamics of charge carriers in graphene; Phys. Rev. B 85, 125443-1 (2012).
10. Vancsó, Péter; Márk, Géza, István; Hwang, Chanyong; Lambin, Philippe; Biró, László, P.: Time and energy dependent dynamics of the STM tip – graphene system; European Journal of Physics B 85, 142-1 (2012).
11. Márk, Géza, I.; Vancsó, Péter; Lambin, Philippe; Hwang, Chanyong; Biró, László, P.: Forming electronic waveguides from graphene grain boundaries; Journal of Nanophotonics 6, 061719-1 (2012).
12. S. Janecek, E. Krotscheck: A fast and simple program for solving local Schrödinger equations in two and three dimensions; Comput. Phys. Comm. 178 (11) (2008) 835–842.
13. S.A. Chin, S. Janecek, and E. Krotscheck: An arbitrary order diffusion algorithm for solving Schrödinger equations; Computer Physics Communications 180 (2009) 1700–1708.

Last updated: February 4, 2021 by Géza I. Márk, mark@mfa.kfki.hu
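As a minimal, self-contained illustration of what a stationary calculation such as the "eigenstates of a rectangular potential box" example produces, here is a naive finite-difference diagonalization in atomic units. This is my own sketch, not the algorithm used by Web-Schrödinger; the grid size and box length are arbitrary choices.

```python
# Minimal 2D finite-difference sketch of a "stationary calculation":
# lowest eigenstates of a particle in a rectangular box (hard walls at the
# grid edge), in atomic units (hbar = m = 1). Illustration only.
import numpy as np

N = 30                       # grid points per direction (kept small: dense solver)
L = 20.0                     # box size in Bohr
dx = L / (N + 1)

# 1D second-derivative matrix with Dirichlet (hard-wall) boundary conditions
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2

I = np.eye(N)
V = np.zeros((N, N))         # flat potential inside the box
H = -0.5 * (np.kron(D2, I) + np.kron(I, D2)) + np.diag(V.ravel())

energies, states = np.linalg.eigh(H)
print("lowest eigenvalues (Hartree):", np.round(energies[:5], 4))

# analytic check: E_{nx,ny} = (pi^2 / (2 L^2)) * (nx^2 + ny^2) for the infinite box
E = lambda nx, ny: np.pi**2 / (2 * L**2) * (nx**2 + ny**2)
analytic = sorted(E(nx, ny) for nx in (1, 2, 3) for ny in (1, 2, 3))[:5]
print("analytic lowest values      :", np.round(analytic, 4))
```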
6.6: Orbital Angular Momentum and the p-Orbitals (Chemistry LibreTexts)

Learning Objectives
• To relate the classical orbital angular momentum of a particle to the quantum equivalent
• Characterize the magnitude and orientation of orbital angular momentum for an electron in terms of quantum numbers

Classical Orbital Angular Momentum

The physical quantity known as angular momentum plays a dominant role in the understanding of the electronic structure of atoms. To gain a physical picture and feeling for the angular momentum it is necessary to consider a model system from the classical point of view. The simplest classical model of the hydrogen atom is one in which the electron moves in a circular orbit with a constant speed or angular velocity (Figure 6.6.1). Just as the linear momentum \(m\vec{v}\) plays a dominant role in the analysis of linear motion, so angular momentum (\(L\)) plays the central role in the analysis of a system with circular motion as found in the model of the hydrogen atom.

Figure 6.6.1 : The angular momentum vector for a classical model of the atom. (CC BY-NC; Ümit Kaya via LibreTexts)

In Figure 6.6.1, \(m\) is the mass of the electron, \(\vec{v}\) is the linear velocity (the velocity the electron would possess if it continued moving at a tangent to the orbit) and \(r\) is the radius of the orbit. The linear velocity \(\vec{v}\) is a vector since it possesses at any instant both a magnitude and a direction in space. Obviously, as the electron rotates in the orbit the direction of \(\vec{v}\) is constantly changing, and thus the linear momentum \(m\vec{v}\) is not constant for the circular motion. This is so even though the speed of the electron (i.e., the magnitude of \(\vec{v}\), which is denoted by \(|\vec{v}|\)) remains unchanged. According to Newton's second law, a force must be acting on the electron if its momentum changes with time. This is the force which prevents the electron from flying off on a tangent to its orbit. In an atom the attractive force which contains the electron is the electrostatic force of attraction between the nucleus and the electron, directed along the radius \(r\) at right angles to the direction of the electron's motion.

The angular momentum, like the linear momentum, is a vector; for the circular orbit its magnitude is

\[|\vec{L}| = m v r\]

The angular momentum vector \(\vec{L}\) is directed along the axis of rotation. From the definition it is evident that the angular momentum vector will remain constant as long as the speed of the electron in the orbit is constant (\(|\vec{v}|\) remains unchanged) and the plane and radius of the orbit remain unchanged. Thus for a given orbit, the angular momentum is constant as long as the angular velocity of the particle in the orbit is constant. In an atom the only force on the electron in the orbit is directed along \(r\); it has no component in the direction of the motion. The force acts in such a way as to change only the linear momentum. Therefore, while the linear momentum is not constant during the circular motion, the angular momentum is. A force exerted on the particle in the direction of the vector \(\vec{v}\) would change the angular velocity and the angular momentum. When a force is applied which does change \(\vec{L}\), a torque is said to be acting on the system. Thus angular momentum and torque are related in the same way as are linear momentum and force.
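As a quick numerical aside (not part of the original text), the classical expression \(|\vec{L}| = m v r\) already sets the scale of atomic angular momenta: inserting standard textbook values for an electron in the smallest Bohr orbit gives a number of the order of \(\hbar\). The constants below are the usual tabulated ones; the comparison is only illustrative.

```python
# Classical angular momentum |L| = m * v * r for an electron in the smallest
# Bohr orbit, compared with hbar (standard textbook constants, illustrative only).
m_e  = 9.109e-31      # electron mass [kg]
v    = 2.188e6        # orbital speed in the n = 1 Bohr orbit [m/s]
r    = 5.292e-11      # Bohr radius [m]
hbar = 1.055e-34      # reduced Planck constant [J*s]

L = m_e * v * r
print(f"|L| = m*v*r = {L:.3e} J*s")
print(f"|L| / hbar  = {L / hbar:.3f}")   # close to 1: the Bohr orbit carries about one hbar
```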
Quantum Angular Momentum

The important point of the above discussion is that both the angular momentum and the energy of an atom remain constant if the atom is left undisturbed. Any physical quantity which is constant in a classical system is both conserved and quantized in a quantum mechanical system. Thus both the energy and the angular momentum are quantized for an atom.

There is a quantum number, denoted by \(l\), which governs the magnitude of the angular momentum, just as the quantum number \(n\) determines the energy. The magnitude of the angular momentum may assume only those values given by:

\[ |L| = \sqrt{l(l+1)} \hbar \label{4}\]

with \(l = 0, 1, 2, 3, ... n-1\). Furthermore, the value of \(n\) limits the maximum value of the angular momentum, as the value of \(l\) cannot be greater than \(n - 1\). For the state \(n = 1\) discussed above, \(l\) may have the value of zero only. When \(n = 2\), \(l\) may equal 0 or 1, and for \(n = 3\), \(l\) = 0 or 1 or 2, etc. When \(l = 0\), it is evident from Equation \(\ref{4}\) that the angular momentum of the electron is zero. The atomic orbitals which describe these states of zero angular momentum are called s orbitals. The s orbitals are distinguished from one another by stating the value of \(n\), the principal quantum number. They are referred to as the 1s, 2s, 3s, etc., atomic orbitals.

The preceding discussion referred to the 1s orbital since for the ground state of the hydrogen atom \(n = 1\) and \(l = 0\). This orbital, and all s orbitals in general, predict spherical density distributions for the electron as discussed previously. It is common usage to refer to an electron as being "in" an orbital even though an orbital is but a mathematical function with no physical reality. To say an electron is in a particular orbital is meant to imply that the electron is in the quantum state which is described by that orbital. For example, when the electron is in the 2s orbital the hydrogen atom is in a state for which \(n = 2\) and \(l = 0\). Comparing these results with those for the 1s orbital, we see that as \(n\) increases the average value of \(r\) increases. This agrees with the fact that the energy of the electron also increases as \(n\) increases. The increased energy results in the electron being on average pulled further away from the attractive force of the nucleus. As in the simple example of an electron moving on a line, nodes (values of \(r\) for which the electron density is zero) appear in the probability distributions. The number of nodes increases with increasing energy and equals \(n - 1\).

When the electron possesses angular momentum the density distributions are no longer spherical. In fact, for each value of \(l\) the electron density distribution assumes a characteristic shape (Figure 6.6.2).

Figure 6.6.2 : The appearance of the three 2p orbitals in three-dimensional space. (CC BY-SA 3.0; I, Sarxos).

When \(l = 1\), the orbitals are called p orbitals. In this case the orbital and its electron density are concentrated along a line (axis) in space. The 2p orbital or wavefunction is positive in value on one side and negative in value on the other side of a plane which is perpendicular to the axis of the orbital and passes through the nucleus. The orbital has a node in this plane, and consequently an electron in a 2p orbital does not place any electronic charge density at the nucleus. The electron density of a 1s orbital, on the other hand, is a maximum at the nucleus.
The same diagram for the 2p density distribution is obtained for any plane which contains this axis. Thus in three dimensions the electron density would appear to be concentrated in two lobes, one on each side of the nucleus, each lobe being circular in cross section (Figure 6.6.2). An electron that possesses orbital angular momentum has a density distribution that is no longer spherical.

The \(m_l\) Quantum Number and Magnetic Fields

The magnetic quantum number, designated by the letter \(m_l\), is the third quantum number which describes the unique quantum state of an electron. The magnetic quantum number distinguishes the orbitals available within a subshell, and is used to calculate the azimuthal component of the orientation of the orbital in space. As with our discussion of rigid rotors, the quantum number \(m_l\) refers to the projection of the angular momentum onto an arbitrarily chosen direction, conventionally called the \(z\) direction or quantization axis. \(L_z\), the component of the angular momentum along the z direction, is given by the formula

\[ L_z = m_l \hbar\]

The quantum number \(m_l\) refers, loosely, to the direction of the angular momentum vector. The magnetic quantum number \(m_l\) only affects the electron's energy if it is in a magnetic field, because in the absence of one all spherical harmonics corresponding to the different arbitrary values of \(m_l\) are equivalent. The magnetic quantum number determines the energy shift of an atomic orbital due to an external magnetic field (this is called the Zeeman effect) - hence the name magnetic quantum number. However, the actual magnetic dipole moment of an electron in an atomic orbital arises not only from the electron orbital angular momentum, but also from the electron spin, expressed in the spin quantum number \(m_s\), which is the fourth quantum number and is discussed in the next chapter.

Figure 6.6.3 : The orbiting electron with a non-zero \(l\) value acts like a magnet. Without an external magnetic field there is no energetic difference for any particular orientation (only one energy state, on the left). However, in an external magnetic field there is a high-energy state and a low-energy state depending on the relative orientation of the magnet to the external field. (CC SA-BY 3.0; Darekk2).

Which \(m_l\) Number Corresponds to which p-Orbital?

The answer is complicated; while \(m_l=0\) corresponds to the \(p_z\), the orbitals for \(m_l=+1\) and \(m_l=−1\) lie in the xy-plane (see Spherical Harmonics), but not on the axes. The reason for this outcome is that the wavefunctions are usually formulated in spherical coordinates to make the math easier, but graphs in Cartesian coordinates make more intuitive sense for humans. The \(p_x\) and \(p_y\) orbitals are constructed via a linear combination approach from radial and angular wavefunctions and converted into \(xy\) (this was discussed previously). Thus, it is not possible to directly correlate the values of \(m_l=±1\) with specific orbitals. The notion that we can do so is sometimes presented in introductory courses to make a complex mathematical model just a little bit simpler and more intuitive, but it is incorrect. The three wavefunctions for \(n=2\) and \(l=1\) are as follows.
\[ \begin{align} |\psi_{2,1,0} \rangle &=r \cos θR(r) \\[4pt] |\psi_{2,1,+1} \rangle &=−\dfrac{r}{2} \sinθ e^{iϕ} R(r) \\[4pt] |\psi_{2,1,-1} \rangle &=+\dfrac{r}{2} \sinθ e^{-iϕ} R(r) \end{align}\] The notation is \(|\psi_{n,l,m_l} \rangle\) with \(R(r)\) is the radial component of this wavefuction, \(θ\) is the angle with respect to the z-axis and \(ϕ\) is the angle with respect to the \(xz\)-plane. \[R(r)=\sqrt{\dfrac{Z^5}{32\pi a_0^5}}\mathrm{e}^{-Zr/2a_0}\] in which \(Z\) is the atomic number (or probably better nuclear charge) and \(a_0\) is the Bohr radius. In switching from spherical to Cartesian coordinates, we make the substitution \(z=r \cosθ\), so: \[|\psi_{2,1,0} \rangle =z R(r)\] This is \(\psi_{2p_z}\) since the value of \(\psi \) is dependent on \(z\): when \(z=0\); \(\psi =0\), which is expected since \(z=0\) describes the \(xy\)-plane. The other two wavefunctions are degenerate in the \(xy\)-plane. An equivalent statement is that these two orbitals do not lie on the x- and y-axes, but rather bisect them. Thus it is typical to take linear combinations of them to make the equation look prettier. If any set of wavefunctions is a solution to the Schrödinger equation, then any set of linear combinations of these wavefunctions must also be a solution (Section 2.4). We can do this because of the linearity of the Schrödinger equation. In the equations below, we're going to make use of some trigonometry, notably Euler's formula: \[ \begin{align} \mathrm{e}^{\mathrm{i}\phi} &=\cos{\phi}+\mathrm{i}\sin{\phi}\\[4pt] \sin{\phi} &= \dfrac{\mathrm{e}^{\mathrm{i}\phi}-\mathrm{e}^{-\mathrm{i}\phi}}{2\mathrm{i}}\\[4pt] \cos{\phi} &= \dfrac{\mathrm{e}^{\mathrm{i}\phi}+\mathrm{e}^{-\mathrm{i}\phi}}{2} \end{align}\] We're also going to use \(x=\sin θ\cos ϕ\) and \(y=\sin θ \sinϕ \). \begin{align*} \psi_{2p_x} &=\dfrac{1}{\sqrt{2}}\left(\psi_{2,1,+1}-\psi_{2,1,-1}\right) \\[4pt] &=\dfrac{1}{2}\left(\mathrm{e}^{\mathrm{i}\phi}+\mathrm{e}^{-\mathrm{i}\phi} \right)r\sin{\theta} f(r) \\[4pt] &=r\sin{\theta}\cos{\phi}f(r)=xf(r) \\[4pt] \psi_{2p_y} &=\dfrac{\mathrm{i}}{\sqrt{2}}\left(\psi_{2,1,+1}+\psi_{2,1,-1}\right)\\[4pt] &=\dfrac{1}{2\mathrm{i}}\left(\mathrm{e}^{\mathrm{i}\phi}-\mathrm{e}^{-\mathrm{i}\phi} \right)r\sin{\theta}f(r)\\[4pt] &=r\sin{\theta}\sin{\phi}f(r)=yf(r)\\ \end{align*} So, while \(m_l=0\) corresponds to \(|p_z \rangle\), \(m_l=+1\) and \(m_l=−1\) cannot be directly assigned to either \(|p_x \rangle\) or \(|p_y \rangle\), but rather a combination of \(|p_x \rangle\) and \(|p_y \rangle\). An alternative description is that \(m_l=+1\) might correspond to \((|p_x \rangle\ + |p_y \rangle )\) and \(m_l=−1\) might correspond to \((|p_x \rangle\ - |p_y \rangle)\). d-Orbitals (even higher angular momenta wavefunctions) When \(l = 2\), the orbitals are called d orbitals and Figure 6.6.4 shows the contours in a plane for a 3d orbital and its density distribution. Notice that the density is again zero at the nucleus and that there are now two nodes in the orbital and in its density distribution. As the angular momentum of the electron increases, the density distribution becomes increasingly concentrated along an axis or in a plane in space. Only electrons in \(s\) orbitals with zero angular momentum give spherical density distributions and in addition place charge density at the position of the nucleus. Figure 6.6.4 : The appearance of the 3d electron density distribution in three-dimensional space. 
(CC BY-SA 3.0; I, Sarxos)

As with the p-orbitals, the only d-orbital to which a specific \(m_l\) can be ascribed is the \(d_{z^2}\) orbital with \(m_l=0\). The rest are linear combinations of the hydrogen atom wavefunctions with complex spherical harmonic angular components. There seems to be neither rhyme nor reason for the naming of the states corresponding to the different values of \(\ell\) (s, p, d, f for l = 0, 1, 2, 3). This set of labels had its origin in the early work of experimental atomic spectroscopy. The letter s stood for sharp, p for principal, d for diffuse and f for fundamental in characterizing spectral lines. From the letter f onwards the naming of the orbitals is alphabetical \(l = 4,5,6 \rightarrow g,h,i, ....\).

We have not as yet accounted for the full degeneracy of the hydrogen atom orbitals which we stated earlier to be \(n^2\) for every value of \(n\). For example, when \(n = 2\), there are four distinct atomic orbitals. The remaining degeneracy is again determined by the angular momentum of the system. Since angular momentum like linear momentum is a vector quantity, we may refer to the component of the angular momentum vector which lies along some chosen axis. For reasons we shall investigate, the number of values a particular component can assume for a given value of \(l\) is (\(2l + 1\)). Thus when \(l = 0\), there is no angular momentum and there is but a single orbital, an s orbital. When \(l = 1\), there are three possible values for the component (\(2 \times 1 + 1\)) of the total angular momentum which are physically distinguishable from one another. There are, therefore, three p orbitals. Similarly there are five d orbitals, (\(2 \times 2+1\)), seven f orbitals, (\(2 \times 3 +1\)), etc. All of the orbitals with the same value of \(n\) and \(l\), the three 2p orbitals for example, are similar but differ in their spatial orientations. To gain a better understanding of this final element of degeneracy, we must consider in more detail what quantum mechanics predicts concerning the angular momentum of an electron in an atom.
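A tiny numerical companion to the counting above (my own addition, not part of the LibreTexts page): for each shell \(n\) it lists the allowed \(l\) values, the quantized magnitudes \(|L| = \sqrt{l(l+1)}\,\hbar\) from Equation \(\ref{4}\), and the \(2l+1\) orientations for each \(l\), and checks that the total number of orbitals is \(n^2\).

```python
# For each principal quantum number n, list the allowed l values, the quantized
# angular momentum magnitudes |L| = sqrt(l(l+1)) * hbar, and the (2l+1)-fold
# orientational degeneracy; the totals reproduce the n^2 degeneracy quoted above.
import math

HBAR = 1.055e-34                     # J*s
LETTERS = "spdfghi"

for n in range(1, 5):
    total = 0
    print(f"n = {n}:")
    for l in range(n):
        L = math.sqrt(l * (l + 1)) * HBAR
        count = 2 * l + 1
        total += count
        print(f"  l = {l} ({LETTERS[l]}): |L| = {L:.3e} J*s, {count} orbital(s)")
    print(f"  total orbitals = {total} (= n^2 = {n**2})")
    assert total == n**2
```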
Advanced control with a Cooper-pair box: stimulated Raman adiabatic passage and Fock-state generation in a nanomechanical resonator Jens Siewert MATIS-INFM, Consiglio Nazionale delle Ricerche, and Dipartimento di Metodologie Fisiche e Chimiche per l’Ingegneria, Universita di Catania, I-95125 Catania, Italy Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany    Tobias Brandes Department of Physics, The University of Manchester, Manchester, United Kingdom    Giuseppe Falci MATIS-INFM, Consiglio Nazionale delle Ricerche, and Dipartimento di Metodologie Fisiche e Chimiche per l’Ingegneria, Universita di Catania, I-95125 Catania, Italy January 30, 2021 The rapid experimental progress in the field of superconducting nanocircuits gives rise to an increasing quest for advanced quantum-control techniques for these macroscopically coherent systems. Here we demonstrate theoretically that stimulated Raman adiabatic passage (STIRAP) should be possible with the quantronium setup of a Cooper-pair box. The scheme appears to be robust against decoherence and should be realizable even with the existing technology. As an application we present a method to generate single-phonon states of a nanomechnical resonator by vacuum-stimulated adiabatic passage with the superconducting nanocircuit coupled to the resonator. 32.80.Qk, 73.23.-b, 73.40.Gk One of the most fascinating experimental breakthroughs of the recent past is the observation of quantum-coherent dynamics in superconducting nanocircuits. It includes circuits exhibiting the dynamics of single ‘artificial atoms’ Nakamura99 ; Vion02 ; Chiorescu03 , two coupled artificial atoms Nakamura03 ; Majer05 and artificial atoms coupled to electromagnetic resonators Wallraff04 ; Chiorescu04 . This development opens new perspectives to study quantum phenomena in solid-state devices that traditionally have been part of nuclear magnetic resonance, quantum optics, and cavity quantum electrodynamics. There exist already a large number of theoretical proposals for such studies such as, e.g., the detection of geometric phases Falci00 , the preparation of Schrödinger cat states in electrical and nanomechanical resonators Marquardt01 ; Armour02 , cooling techniques Martin04 , an analogue of electromagnetically induced transparency Orlando04 , and adiabatic passage in superconducting nanocircuits Alec03 ; AdvSolSt04 ; Nori05 . One of the challenges is the preparation of Fock states in a resonator coupled to a superconducting nanocircuit. In quantum optics, the analogous problem has been solved both theoretically and experimentally Parkins93 ; Henrich00 . The idea is to apply adiabatic passage to the dark state of a three-level atom. Instead of driving the transition with two classical fields as in conventional STIRAP Bergmann98 , one of the external fields is replaced by the quantum field of the cavity. While the atom undergoes the transition, a single photon is emitted into the cavity. In the following we will demonstrate the application of this scheme to a Cooper-pair box operated as in the experiments by Vion et al. Vion02 (the so-called quantronium device) coupled to a nanomechanical resonator. To this end, we need to make sure that adiabatic passage in a three-level system using classical fields can be realized with the quantronium setup of a Cooper-pair box. 
This circuit is appropriate for the substitution of one of the classical driving fields by the quantum field of the nanomechanical resonator without changing the functionality of the Cooper-pair box. Coupling the resonator to the nanocircuit and verification of the vacuum-assisted adiabatic passage completes the analogue of the atom-cavity system in Refs. Parkins93 ; Henrich00 . We will discuss also the effects of decoherence on the scheme in a real experiment. We remark that, in principle, this program can be carried out for different regimes and setups of superconducting nanocircuits. (An alternative realization is a flux qubit coupled to an electrical resonator studied by Mariantoni et al. Markus05 .) We have chosen the quantronium as, on the one hand, it is very much analogous to the atom-laser system in quantum optics and, on the other hand, it is a rather thoroughly studied system with respect to its decoherence properties. Quantronium in a three-level STIRAP scheme. Adiabatic passage in three-level atoms is commonly realized with the STIRAP technique which is based on a configuration of two hyperfine ground states and coupled to an excited state (with energies , , ) by classical laser fields ,  Bergmann98 ; ScullyBook97 . In the frame rotating with the frequencies of the driving fields , the Hamiltonian reads (applying the rotating-wave approximation) with the detuning . This Hamiltonian has a so-called dark state From Eq. (2) it can be seen that by slowly varying the coupling strengths , the dark state can be rotated in the two-dimensional subspace spanned by and . For the so-called counterintuitive scheme, the system is prepared in the state with the couplings and . By slowly switching off while is switched on, the population can be transferred from state to state . Adiabaticity requires (). a) In the quantronium setup, a superconducting island of total capacitance Figure 1: a) In the quantronium setup, a superconducting island of total capacitance is coupled to a superconducting lead via two Josephson junctions. The gate voltage controls the offset charge of the island via the gate capacitance . The magnetic flux represents another control parameter for the setup (here we choose ). b) The lowest four energy levels of the quantronium with as a function of gate charge. At the working point the three lowest levels can be used as a scheme , , with resonance frequencies and . In order to realize adiabatic population transfer with the quantronium setup (see Fig. 1a) consider the corresponding Hamiltonian in the basis of the charge states with the charging energy (where is the total capacitance of the island and the charge of a Cooper pair) and the Josephson coupling energy . For the time being we assume . The offset charge can be tuned with the gate voltage . In the quantronium setup, the gate voltage (and hence the gate charge) has a d.c.  part and an a.c. part with a small amplitude . The STIRAP operation can be carried out between the three lowest energy levels (see Fig. 1b). For the working point values such as are preferable that lead to low decoherence rates. However, at symmetry points small level spacings and selection rules may impede the operation of the scheme Nori05 . Therefore the working point needs to be chosen away from such points, e.g., at . If two resonant frequencies , are applied to the gate (see Fig. 1b), it is possible to adiabatically transfer the population from the ground state to the first excited state . 
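To illustrate the counterintuitive pulse ordering described above, here is a generic three-level STIRAP sketch. The state labels, pulse shapes and coupling values are my own toy choices, not the quantronium parameters of Fig. 2: it integrates the rotating-wave Hamiltonian with Gaussian pump and Stokes couplings (Stokes first) and the population should end up almost entirely in the target state, with the intermediate level remaining nearly empty throughout.

```python
# Generic three-level STIRAP sketch (hbar = 1, zero detuning). The Stokes
# coupling Omega_S precedes the pump coupling Omega_P ("counterintuitive"
# ordering); the dark state then rotates |0> -> |2> with little population
# in the intermediate level |1>. Toy parameters only.
import numpy as np

def omega_p(t):  # pump pulse (couples |0> and |1>), centered later
    return 1.0 * np.exp(-((t - 60.0) / 20.0) ** 2)

def omega_s(t):  # Stokes pulse (couples |1> and |2>), centered earlier
    return 1.0 * np.exp(-((t - 40.0) / 20.0) ** 2)

def hamiltonian(t):
    wp, ws = omega_p(t), omega_s(t)
    return np.array([[0.0,    wp / 2, 0.0],
                     [wp / 2, 0.0,    ws / 2],
                     [0.0,    ws / 2, 0.0]], dtype=complex)

def deriv(t, psi):
    return -1j * hamiltonian(t) @ psi     # i d(psi)/dt = H(t) psi

psi = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in state |0>
t, dt, t_end = 0.0, 0.01, 100.0
while t < t_end:
    # one 4th-order Runge-Kutta step
    k1 = deriv(t, psi)
    k2 = deriv(t + dt / 2, psi + dt / 2 * k1)
    k3 = deriv(t + dt / 2, psi + dt / 2 * k2)
    k4 = deriv(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print("final populations |0>,|1>,|2>:", np.round(np.abs(psi) ** 2, 4))
```

With these toy parameters the final populations should come out close to (0, 0, 1), the hallmark of adiabatic passage along the dark state.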
It is interesting to note that the microwave field couples diagonally to the charge states (as opposed to the dipole coupling in the three-level atom case). Nevertheless, an effective Hamiltonian as in Eq. (1) is obtained as only those off-diagonal matrix elements in the eigenbasis of the driven Hamiltonian are important that couple two states resonantly Kmetic86 . Population transfer by STIRAP in the quantronium setup Figure 2: Population transfer by STIRAP in the quantronium setup (). a) Gaussian pulses are applied in the counterintuitive scheme. The maximum gate charge of the microwave fields are . For a charging energy of eV the time unit corresponds to about s. b)–d) Time evolution of the populations , , without decoherence (solid lines) and with decoherence (dashed lines). The arrows denote the final populations in the ideal case (no decoherence). For the calculations with decoherence we have used the decay rate and the dephasing rate . The latter rate corresponds to a dephasing time of about 50 ns. In Fig. 2b–d (solid lines) we show the numerical solution of the Schrödinger equation for the Hamiltonian Eq. (3) with a gate charge (zero detuning, ). Initially the system is prepared in the state . Then, two Gaussian-shaped microwave pulses are applied (cf. Fig. 2a). We observe that a population transfer to state of nearly unit efficiency can be achieved. The state practically does not get populated during the STIRAP procedure (cf. Fig. 2d). Note that there are many parameters that may be used to optimize the efficiency such as duration, delay, relative height and over-all shape of the pulses, the detunings etc. Bergmann98 . Effects of decoherence. The functionality of solid-state quantum-coherent devices is rather sensitive to various (often device-dependent) sources of decoherence. In the quantronium, high-frequency noise that is mainly responsible for unwanted transitions, coexists with low-frequency noise which mainly affects calibration of the device and determines power-law reduction of the amplitude of the signal Falci-prl ; Ithier05 . A detailed analysis of decoherence in the STIRAP protocol due to a solid-state environment is beyond the scope of this work. Here we only estimate the feasibility of the protocol and argue that the main processes determining decoherence do not involve the level . These processes have been well characterized and, as a matter of fact, do not prevent very long decoherence times in the quantronium. We start our analysis from the quantum-optical master equation where is the density matrix and is the Hamiltonian (3) in the rotating frame Kuhn99 . At low temperature the dissipator includes spontaneous decay rates of the excited states , as well as environment-assisted absorption between eigenstates in the presence of the laser coupling. In quantum-optical systems the rate vanishes and the remaining processes mainly act towards depopulating states while they are not populated, and therefore hardly affect the protocol. In contrast, STIRAP for the quantronium may be sensitive to the extra decay involving the two low-lying states. An estimate of the effect of decoherence is achieved by studying the master equation (written in the basis ) with the dissipator where . The dissipator is taken time-independent (which overestimates decoherence) and includes all transitions as well as a dephasing rate accounting phenomenologically for low-frequency noise. For the decay rate of the second excited state we assume . 
In order to obtain a realistic estimate of decoherence effects, rates on the order of those observed in the experiments of Ref. Ithier05 are used. The dashed lines in Fig. 2b–d show results for the solution of the master equation with the dissipator (4). One recognizes immediately a remarkable robustness of the STIRAP procedure against decoherence. The main noticeable effects are the variation of populations during the waiting time after finishing the pulse sequence and a slightly increasing population of level . Low-frequency noise is modeled more realistically as due to impurities which are static during each run of the protocol but may switch on a longer time scale, thus leading to statistically distributed level separations. Averaging determines defocusing of the signal. Fluctuations of may be relatively large, but they represent equal detunings of both microwave fields and do not affect STIRAP. On the other hand, fluctuations of the separation between the two lowest eigenstates are potentially detrimental since they determine fluctuations of the difference of detunings. This leads to a reduced efficiency of population transfer which, however, may be improved by optimizing the parameters of the protocol. Coupling the quantronium to a harmonic-oscillator mode. As we have demonstrated, STIRAP should be well within reach of present-day technology for superconducting nanocircuits. Therefore one might hope to apply this technique similarly as in quantum optics for the preparation of peculiar quantum states. One such application is the generation of Fock states in a cavity coupled to a three-level atom Parkins93 . For this purpose, the Cooper-pair box needs to be coupled to a harmonic oscillator degree of freedom. The generic coupling Hamiltonian is . There are various ways to implement this Hamiltonian with electrical resonator circuits FalciPlastina03 and transmission lines Wallraff04 , or with nanomechanical resonators Armour02 ; Martin04 . In the following we will explain that along these lines it is possible to generate Fock states in a nanomechanical oscillator. a) Coupled system of quantronium and nanomechanical osciallator. b) The four relevant states of the STIRAP scheme for Fock state generation in the presence of decoherence. c) Level population for the quantronium-resonator setup in the presence of decoherence. Parameters are Figure 3: a) Coupled system of quantronium and nanomechanical osciallator. b) The four relevant states of the STIRAP scheme for Fock state generation in the presence of decoherence. c) Level population for the quantronium-resonator setup in the presence of decoherence. Parameters are , , , , . The nanomechanical oscillator (mass ) is coupled capacitively to the Cooper-pair box Armour02 ; Martin04 via the position-dependent capacitance , see Fig. 3a. Here denotes the oscillator displacement. The coupling can be tuned by the voltage . Assuming and taking into account only a single mode of the mechanical oscillator, the coupled quantronium–resonator system is described by the Hamiltonian Armour02 ; Martin04 where is the distance of the resonator from the island and , denote the creation and annihilation operators for the nanomechanical oscillator. The total gate charge is now a sum of the box gate charge and where . The composed system is described by the basis states with the (uncoupled) quantronium eigenstates and the resonator Fock states . For the states relevant in our discussion we will use the notation , , and . 
We assume that it is possible to prepare the vacuum state , i.e., the oscillator frequency has to be sufficiently large compared to the temperature in the experiment (for a discussion of possible values in an experiment see below). The population transfer is performed from the initial state via to the state . As the “Stokes field” is replaced by the vacuum field of the cavity (which is coupled via the quantronium-resonator coupling parameter ), a single phonon is emitted into the resonator during the STIRAP operation. Again, the cavity field may trigger transitions between eigenstates of the setup although it has only terms diagonal in the charge basis due to mixing of charge states by the Josephson coupling. As mentioned above, is required. With typical temperatures of mK, oscillator frequencies above GHz are necessary (which is at the limit of present-day technology Roukes03 ). Note also that the oscillator frequency needs to be resonant with the quantronium transition . With a charging energy of eV and it is possible to have GHz. With these parameters one may hope to achieve similar decoherence effects as in the experiments of Refs. Vion02 ; Ithier05 and, at the same time, to generate the appropriate level spacings. Assuming the same decoherence rates as in the STIRAP process with classical microwave fields (Fig. 2) and taking into account a finite quality factor of the nanomechanical resonator we can numerically evaluate the time evolution of the coupled system. Note that for this calculation it is necessary to take into account also the state which is not part of the STIRAP scheme (see Fig. 3b) but contributes to reduce coherence of the population transfer. It can be seen that a highly efficient transfer of the system to the state should be feasible (cf. Fig. 3c). As to the detection, it would be desirable to directly measure the state of the oscillator. However, it may be easier to probe the state via a measurement of the quantronium eigenstate. Either, one probes merely the final state . Alternatively, the system can be viewed as a realization of the Jaynes-Cummings model ScullyBook97 and one may try to detect Rabi oscillations between the states and induced by the cavity field. To this end, the resonator-box coupling needs to be set to the appropriate value that facilitates the observation of such Rabi oscillations (while ). Note that for this type of detection high-quality resonators are required, and it is necessary to distinguish between the quantronium eigenstates and . The procedure described here is not limited (at least in theory) to the generation of single-phonon states of the resonator Parkins93 . The final state of the protocol described so far may be changed (via a pulse in the quantronium with vanishing resonator-box coupling) into . This state may serve as the initial state for another STIRAP transfer , etc. It is an important advantage of the STIRAP protocol for its realization in solid-state devices that the efficiency does not depend sensitively on the absolute values of the couplings during the procedure. This makes it robust against fluctuations in the environment. Another advantage is its versatility. For example, instead of changing the amplitudes of the driving fields it is possible to change the driving frequencies Bergmann98 . This may be an option for a Cooper-pair box coupled to an electrical resonator such as in Ref. FalciPlastina03 where it is easier to change the resonator frequency than the capacitive coupling. 
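Relating to the detection idea mentioned above (observing Rabi oscillations induced by the resonator field), here is a minimal resonant Jaynes-Cummings sketch restricted to the single-excitation subspace; the coupling value is an arbitrary placeholder, not a prediction for the quantronium-resonator device.

```python
# Resonant Jaynes-Cummings model restricted to the single-excitation subspace
# {|e,0>, |g,1>}: the coupling g swaps the excitation between qubit and
# resonator, giving vacuum Rabi oscillations P_{g,1}(t) = sin^2(g t).
# The coupling strength below is an arbitrary placeholder.
import numpy as np

g = 2 * np.pi * 50e6               # coupling [rad/s], placeholder value
times = np.linspace(0, 40e-9, 9)   # seconds

for t in times:
    p_g1 = np.sin(g * t) ** 2      # probability that the excitation sits in the resonator
    print(f"t = {t * 1e9:5.1f} ns   P(|g,1>) = {p_g1:.3f}")
```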
Interestingly, the protocol to generate Fock states can even be modified such that it suffices to switch the couplings from at time to at (see Ref. Kuhn99 ). That is, a single phonon (or photon) can be generated and emitted from the cavity with an ‘always-on’ cavity coupling. This may be interesting for setups where both coupling and resonator frequency are fixed such as in Ref. Wallraff04 . Acknowledgments This work has been supported financially by SFB 631 of the DFG. JS would like to thank P. Schlagheck for pointing out to him Ref. Henrich00 and D. Esteve for stimulating comments. Illuminating discussions with A. Kuhn and M. Storcz are gratefully acknowledged. • (1) Y. Nakamura, Yu. Pashkin, and J.S. Tsai, Nature 398, 786 (1999). • (2) D. Vion, A. Aassime, A. Cottet, P. Joyez, H. Pothier, C. Urbina, D. Esteve, and M.H. Devoret, Science 296, 886 (2002). • (3) I. Chiorescu, Y. Nakamura, C.J.P.M. Harmans, and J.E. Mooij, Science 299, 1869 (2003). • (4) T. Yamamoto, Yu.A. Pashkin, O. Astafiev, Y. Nakamura, and J.S. Tsai, Nature 425, 941 (2003). • (5) J.B. Majer, F.G. Paauw, A.C.J. ter Haar, C.J.P.M. Harmans, and J.E. Mooij, Phys. Rev. Lett. 94, 090501 (2005). • (6) A. Wallraff, D.I. Schuster, A. Blais, L. Frunzio, R.S. Huang, J. Majer, S. Kumar, S.M. Girvin, and R.J. Schoelkopf, Nature 431, 162 (2004). • (7) I. Chiorescu, P. Bertet, K. Semba, Y. Nakamura, C.J.P.M. Harmans, and J.E. Mooij, Nature 431, 159 (2004). • (8) G. Falci, R. Fazio, G.M. Palma, J. Siewert, and V. Vedral, Nature 407, 355 (2000); L. Faoro, J. Siewert, and R. Fazio, Phys. Rev. Lett. 90, 028301 (2003); M. Cholascinski, Phys. Rev. B 69, 134516 (2004). • (9) F. Marquardt and C. Bruder, Phys. Rev. B 63, 054514 (2001). • (10) A. Armour, M. Blencowe, and K.C. Schwab, Phys. Rev. Lett. 88, 148301 (2002). • (11) I. Martin, A. Shnirman, L. Tian, and P. Zoller, Phys. Rev. B 69, 125339 (2004); P. Zhang, Y.D. Wang, and C.P. Sun, Phys. Rev. Lett. 95, 097204 (2005). • (12) K.V.R.M. Murali, Z. Dutton, W.D. Oliver, D.S. Crankshaw, and T. Orlando, Phys. Rev. Lett. 93, 087003 (2004). • (13) M.H.S. Amin, A.Yu. Smirnov, and A. Maassen v.d. Brink, Phys. Rev. B 67, 100508(R) (2003). • (14) J. Siewert and T. Brandes, Adv. Solid State Phys. 44, 181 (2004). • (15) Y.-X. Liu, J.Q. You, L.F. Wei, C.P. Sun, and F. Nori, Phys. Rev. Lett. 95, 087001 (2005). • (16) A.S. Parkins, P. Marte, P. Zoller, H.J. Kimble, Phys. Rev. Lett. 71, 3095 (1993). • (17) M. Henrich, T. Legero, A. Kuhn, and G. Rempe, Phys. Rev. Lett. 85, 4872 (2000). • (18) K. Bergmann, H. Theuer, and B.W. Shore, Rev. Mod. Phys. 70, 1003 (1998); N.V. Vitanov, T. Halfmann, B.W. Shore, and K. Bergmann, Annu. Rev. Phys. Chem. 52, 763 (2001). • (19) M. Mariantoni, M.J. Storcz et al., submitted (2005). • (20) M.O. Scully and M.S. Zubairy: Quantum Optics (Cambridge Univ. Press, Cambridge 1997). • (21) M.A. Kmetic, R.A. Thuraisingham, and W.J. Meath, Phys. Rev. A 33, 1688 (1986). • (22) E. Paladino, L. Faoro, G. Falci, and R. Fazio, Phys. Rev. Lett. 88, 228304 (2002); G. Falci, A. D’Arrigo, A. Mastellone, and E. Paladino , Phys. Rev. Lett. 94, 167002 (2005). • (23) G. Ithier, E. Collin, P. Joyez, P.J. Meeson, D. Vion et al., eprint condmat/0508588 (2005). • (24) A. Kuhn, M. Hennrich, T. Bondo, and G. Rempe, Appl. Phys. B 69, 373 (1999). • (25) F. Plastina and G. Falci, Phys. Rev. B 67, 224514 (2003). • (26) X.M.H. Huang, C.A. Zorman, M. Mehregany, and M. Roukes, Nature 421, 496 (2003). For everything else, email us at [email protected].
My starting point will be the one-dimensional time-independent Schrödinger equation - u_{xx}(k,x) + V(x) u(k,x) = k^2 u(k,x) (2) where V: {\Bbb R} \to {\Bbb R} is a given potential function, k \in {\Bbb R} is a frequency parameter, and u: {\Bbb R} \times {\Bbb R} \to {\Bbb C} is the wave function. This equation (after reinstating constants such as Planck’s constant \hbar, which we have normalised away) describes the instantaneous state of a quantum particle with energy k^2 in the presence of the potential V. To avoid technicalities let us assume that V is smooth and compactly supported (say in the interval {}[-R,R]) for now, though the eventual conjecture will concern potentials V that are merely square-integrable.

For each fixed frequency k, the equation (2) is a linear homogeneous second order ODE, and so has a two-dimensional space of solutions. In the free case V=0, the solution space is given by u(k,x) = \alpha(k) e^{ikx} + \beta(k) e^{-ikx} (3) where \alpha(k) and \beta(k) are arbitrary complex numbers; physically, these numbers represent the amplitudes of the rightward and leftward propagating components of the solution respectively.

Now suppose that V is non-zero, but is still compactly supported on an interval {}[-R,+R]. Then for a fixed frequency k, a solution to (2) will still behave like (3) in the regions x > R and x < -R, where the potential vanishes; however, the amplitudes on either side of the potential may be different. Thus we would have u(k,x) = \alpha_+(k) e^{ikx} + \beta_+(k) e^{-ikx} for x > R and u(k,x) = \alpha_-(k) e^{ikx} + \beta_-(k) e^{-ikx} for x < -R. Since there is only a two-dimensional linear space of solutions, the four complex numbers \alpha_-(k), \beta_-(k), \alpha_+(k), \beta_+(k) must be related to each other by a linear relationship of the form \begin{pmatrix} \alpha_+(k) \\ \beta_+(k) \end{pmatrix} = \overbrace{V}(k) \begin{pmatrix} \alpha_-(k) \\ \beta_-(k) \end{pmatrix} where \overbrace{V}(k) is a 2 \times 2 matrix depending on V and k, known as the scattering matrix of V at frequency k. (We choose this notation to deliberately invoke a resemblance to the Fourier transform \hat V(k) := \int_{-\infty}^\infty V(x) e^{-2ikx}\ dx of V; more on this later.) Physically, this matrix determines how much of an incoming wave at frequency k gets reflected by the potential, and how much gets transmitted.

What can we say about the matrix \overbrace{V}(k)? By using the Wronskian of two solutions to (2) (or by viewing (2) as a Hamiltonian flow in phase space) we can show that \overbrace{V}(k) must have determinant 1. Also, by using the observation that the solution space to (2) is closed under complex conjugation u(k,x) \mapsto \overline{u(k,x)}, one sees that each coefficient of the matrix \overbrace{V}(k) is the complex conjugate of the diagonally opposite coefficient. Combining the two, we see that \overbrace{V}(k) takes values in the Lie group SU(1,1) := \{ \begin{pmatrix} a & \overline{b} \\ b & \overline{a} \end{pmatrix}: a,b \in {\Bbb C}, |a|^2-|b|^2 = 1 \} (which, incidentally, is isomorphic to SL_2({\Bbb R})), thus we have \overbrace{V}(k) = \begin{pmatrix} a(k) & \overline{b(k)} \\ b(k) & \overline{a(k)} \end{pmatrix} for some functions a: {\Bbb R} \to {\Bbb C} and b: {\Bbb R} \to {\Bbb C} obeying the constraint |a(k)|^2 - |b(k)|^2 = 1. (The functions \frac{1}{a(k)} and \frac{b(k)}{a(k)} are sometimes known as the transmission coefficient and reflection coefficient respectively; note that they square-sum to 1, a fact related to the law of conservation of energy.)
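As a concrete (and entirely standard) numerical illustration of these definitions, the sketch below integrates (2) across a compactly supported bump potential for a few frequencies k, reads off the scattering matrix by matching to plane waves on either side, and checks the constraint |a(k)|^2 - |b(k)|^2 = 1. The particular potential and step sizes are arbitrary choices made for illustration only.

```python
# Numerically compute the scattering coefficients a(k), b(k) of a compactly
# supported potential by integrating -u'' + V u = k^2 u across [-R, R] and
# matching to plane waves alpha e^{ikx} + beta e^{-ikx} on both sides.
# The bump potential and step sizes below are arbitrary illustrative choices.
import numpy as np

R = 1.0
def V(x):
    return 2.0 * np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1.0 else 0.0  # smooth bump

def propagate(k, u, up, n_steps=4000):
    """RK4 integration of u'' = (V(x) - k^2) u from x = -R to x = +R."""
    h = 2 * R / n_steps
    x = -R
    y = np.array([u, up], dtype=complex)
    f = lambda x, y: np.array([y[1], (V(x) - k**2) * y[0]], dtype=complex)
    for _ in range(n_steps):
        k1 = f(x, y); k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2); k4 = f(x + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y

def scattering_matrix(k):
    cols = []
    for alpha, beta in ((1, 0), (0, 1)):              # basis of incoming amplitudes
        u  = alpha*np.exp(-1j*k*R) + beta*np.exp(1j*k*R)          # u(-R)
        up = 1j*k*(alpha*np.exp(-1j*k*R) - beta*np.exp(1j*k*R))   # u'(-R)
        uR, upR = propagate(k, u, up)
        a_plus = 0.5 * (uR + upR/(1j*k)) * np.exp(-1j*k*R)
        b_plus = 0.5 * (uR - upR/(1j*k)) * np.exp( 1j*k*R)
        cols.append([a_plus, b_plus])
    return np.array(cols).T            # maps (alpha_-, beta_-) -> (alpha_+, beta_+)

for k in (0.5, 1.0, 2.0):
    M = scattering_matrix(k)
    a, b = M[0, 0], M[1, 0]
    print(f"k={k}:  a={complex(a):.4f}  b={complex(b):.4f}  "
          f"|a|^2-|b|^2 = {abs(a)**2 - abs(b)**2:.6f}")
```

The last column should come out equal to 1 up to integration error, reflecting the SU(1,1) structure described above.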
(The functions \frac{1}{a(k)} and \frac{b(k)}{a(k)} are sometimes known as the transmission coefficient and reflection coefficient respectively; note that they square-sum to 1, a fact related to the law of conservation of energy.) These coefficients evolve in a beautifully simple manner if V evolves via the Korteweg-de Vries (KdV) equation V_t + V_{xxx} = 6VV_x (indeed, one has \partial_t a = 0 and \partial_t b = 8ik^3 b), being part of the fascinating subject of completely integrable systems, but that is a long story which we will not discuss here. This connection does however provide one important source of motivation for studying the scattering transform V \mapsto \overbrace{V} and its inverse. What are the values of the coefficients a(k), b(k)? In the free case V=0, one has a(k)=1 and b(k)=0. When V is non-zero but very small, one can linearise in V (discarding all terms of order O(V^2) or higher), and obtain the approximation a(k) \approx 1 -\frac{i}{2k}\int_{-\infty}^\infty V; \quad b(k) \approx \frac{-i}{2k} \hat V(k) known as the Born approximation; this helps explain why we think of \overbrace{V}(k) as a nonlinear variant of the Fourier transform. A slightly more precise approximation, known as the WKB approximation, is a(k) \approx e^{-\frac{i}{2k}\int_{-\infty}^\infty V}; \quad b(k) \approx \frac{-i}{2k} e^{-\frac{i}{2k}\int_{-\infty}^\infty V} \int_{-\infty}^{\infty} V(x) e^{-2ikx + \frac{i}{k} \int_{-\infty}^x V}\ dx. (One can avoid the additional technicalities caused by the WKB phase correction by working with the Dirac equation instead of the Schrödinger; this formulation is in fact cleaner in many respects, but we shall stick with the more traditional Schrödinger formulation here. More generally, one can consider analogous scattering transforms for AKNS systems.) One can in fact expand a(k) and b(k) as a formal power series of multilinear integrals in V (distorted slightly by the WKB phase correction e^{\frac{i}{k} \int_{-\infty}^x V}), whose terms resemble the multilinear expression (1) except for some (crucial) sign changes and some WKB phase corrections. It is relatively easy to show that this multilinear series is absolutely convergent for every k when the potential V is absolutely integrable (this is the nonlinear analogue to the obvious fact that the Fourier integral \hat V(k) = \int_{-\infty}^\infty V(k) e^{-2ikx} is absolutely convergent when V is absolutely integrable; it can also be deduced without recourse to multilinear series by using Levinson’s theorem.) If V is not absolutely integrable, but instead lies in L^p({\Bbb R}) for some p > 1, then the series can diverge for some k; this fact is closely related to a classic result of Wigner and von Neumann that the Schrödinger operator can contain embedded pure point spectrum. However, Christ and Kiselev showed that the series is absolutely convergent for almost every k in the case 1 < p < 2 (this is a non-linear version of the Hausdorff-Young inequality). In fact they proved a stronger statement, namely that for almost every k, the eigenfunctions x \mapsto u(k,x) are bounded (and converge asymptotically to plane waves \alpha_\pm(k) e^{ikx} + \beta_\pm(k) e^{-ikx} as x \to \infty). 
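As a quick sanity check of the Born approximation mentioned above (again an added illustration, not part of the original argument), one can compare |b(k)| from the approximate formula with the exact reflection amplitude computed by the previous sketch; since |a(k)| is close to 1 for a weak potential, the two should nearly agree. The simple Riemann-sum quadrature and the weak Gaussian potential are arbitrary choices, and scattering_amplitudes is reused from the sketch above.

```python
import numpy as np

def born_b(V, k, R=6.0, n=4001):
    """Born approximation b(k) ~ -(i/2k) * Vhat(k), where Vhat(k) is the
    Fourier-type integral of V(x) e^{-2ikx}, evaluated by a simple Riemann sum."""
    x = np.linspace(-R, R, n)
    dx = x[1] - x[0]
    Vhat = np.sum(V(x) * np.exp(-2j * k * x)) * dx
    return -1j / (2 * k) * Vhat

V = lambda x: 0.1 * np.exp(-2.0 * x**2)    # weak potential: Born should be accurate
for k in (0.5, 1.0, 2.0):
    t, r = scattering_amplitudes(V, k)      # "exact", from the previous sketch
    print(k, abs(r), abs(born_b(V, k)))     # |reflection amplitude| vs |b(k)|
```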
There is an analogue of the Born and WKB approximations for these eigenfunctions, which shows that the Christ-Kiselev result is the nonlinear analogue of a classical result of Menshov, Paley and Zygmund showing the conditional convergence of the Fourier integral \int_{-\infty}^\infty V(x) e^{-2ikx}\ dx for almost every k when V \in L^p({\Bbb R}) for some 1 < p < 2. The analogue of the Menshov-Paley-Zygmund theorem at the endpoint p=2 is the celebrated theorem of Carleson on almost everywhere convergence of Fourier series of L^2 functions. (The claim fails for p > 2, as can be seen by investigating random Fourier series, though I don’t recall the reference for this fact.) The nonlinear version of this would assert that for square-integrable potentials V, the eigenfunctions x \mapsto u(k,x) are bounded for almost every k. This is the nonlinear Carleson theorem conjecture. Unfortunately, it cannot be established by multilinear series, because of a divergence in the trilinear term of the expansion; but other methods may succeed instead. For instance, the weaker statement that the coefficients a(k) and b(k) (defined by density) are well defined and finite almost everywhere for square-integrable V (which is a nonlinear analogue of Plancherel’s theorem that the Fourier transform can be defined by density on L^2({\Bbb R})) was essentially established by Deift and Killip, using a trace formula (a nonlinear analogue to Plancherel’s formula). Also, the “dyadic” or “function field” model of the conjecture is known, by a modification of Carleson’s original argument. But the general case still seems to require more tools; for instance, we still do not have a good nonlinear Littlewood-Paley theory (except in the dyadic case), which is preventing time-frequency type arguments from being extended directly to the nonlinear setting.
PhD thesis Chapter 3 - An atomistic-continuum study of point defects in silicon

1) Introduction

Accurate modeling of coupled stress-diffusion problems requires that the effect of stress on the diffusivity and chemical potential of defects and dopants be quantified. Although the aggregate effects of stress on diffusion are readily observable, it is difficult to experimentally measure stress-induced changes in diffusivity and chemical potential. Despite these difficulties a number of careful measurements have been made regarding the effect of stress on diffusivities in model semiconductor systems [Zhao et al. 1999A, Zhao et al. 1999B], and the formation energies of vacancies have been measured in metals [Simmons and Balluffi 1960]. Due to the experimental challenges, an extensive literature has emerged regarding the numerical calculation of the formation energies of these defects using atomistic simulation [Antonelli et al. 1998, Antonelli and Bernholc 1989, Puska et al. 1998, Zywietz et al. 1998, Song et al. 1993, Tang et al. 1997, Al-Mushadani and Needs 2003]. Although early work used empirical potentials, more recent work has focused on the application of tight-binding and ab initio methods, which are more accurate in modeling the alterations in bonding that occur at the defect. These calculations have been limited to a few hundred atoms due to the computational requirements of these methods. This chapter addresses a number of unresolved issues in the application of atomistic simulations to accurately extract formation volumes and stress fields of point defects. In order to illustrate the methods that can be used to calculate the appropriate thermodynamic and elastic parameters from atomistic data we have performed calculations regarding a simple model point defect, a vacancy in the Stillinger Weber [Stillinger and Weber 1985] model of silicon. An empirical model of silicon bonding was employed because it allows the exploration of a much larger range of system sizes than would have been possible using a more accurate model. Using an empirical potential precludes a quantitatively accurate measure of, for example, the formation volume of a vacancy in silicon, since this model does not properly describe the change in bonding that occurs at the vacancy. However, the larger system sizes accessible via such a method are necessary to demonstrate a new technique for accurately extracting the prediction that does arise from the Stillinger Weber model of silicon and, by extension, from other atomistic potentials. This is critical since our goal is to make firm connections between the atomistic data and continuum concepts that, as we shall show, are not yet convergent on the scale of current ab initio calculations. This work paves the way for a multiscale modeling technique in which ab initio, atomistic and continuum concepts are used together to extract such quantities with predictive accuracy.

2) Formation volume

Chapter 1 introduced the free energy of activation which quantifies the effect of an external stress on the formation and migration of a defect in a crystal. This section focuses on the formation energy which is a part of the activation energy. The formation free energy determines the number of defects in the crystal.
It comes from a change in internal energy E_f, a change in entropy S_f (usually small) and a work term,

G_f = E_f - T S_f - \sigma : V_f = E_f - T S_f - W_{ext}, (3.1)

where V_f is a tensor describing the change in volume and shape of the system and W_{ext} is the work done by \sigma on the system. The derivative of the free energy with respect to the externally applied stress provides the fundamental definition of this volume term. Equation (3.1) shows that the free energy depends on the pressure through the work. However it may also be indirectly pressure-dependent if the internal energy depends on the pressure. The internal energy of formation can be split into two parts, an elastic part, E_f^{LE}, accounting for the elastic energy related to the crystal relaxation around the vacancy, and a core energy, E_f^{core}, arising from broken bonds:

H_f = E_f^{LE} + E_f^{core} - \sigma : V_f. (3.2)

While the elastic part can be treated using linear elasticity, the core energy part must be treated atomistically. In linear elasticity, there is no interaction between internal and external stresses [Eshelby 1961], therefore E_f^{LE} does not depend on \sigma. The core energy comes from the broken bonds and is therefore expected to be independent of the pressure. Therefore the only dependence of H_f upon \sigma is from the \sigma : V_f term. The formation volume is the change of volume of a system upon introduction of a defect. Let system 1 be a perfect crystal under some external stress \sigma and system 2 the same crystal under the same external stress \sigma to which a defect was added. The formation volume V_f is the difference of volumes of the two systems. Similarly E_f is the difference in internal energy between the two systems. The external stress contributes to the internal energy through the elastic energy E_f^{LE}, but since these two systems are under the same stress these contributions cancel out. As described in chapter 1, the stress dependence of the formation of defects is of technological importance. This dependence is captured by the formation volume.

FIG. 3.1: Vacancy as an Eshelby inclusion. Part of the medium is removed (1). Its volume is decreased by V_t (2). It is reinserted into the medium (3).

If, for a given defect, V_f is 0, the concentration of this defect does not depend on the external stress. If a defect A has a positive formation volume and a defect B has a negative formation volume, under compression the number of B defects increases and the number of A defects decreases. Under tension the number of A defects increases and the number of B defects decreases. If part of a film or of a device is under tension and another part is compressive, a segregation of species A and B can result. Therefore the behavior of the dopants/defects under stress depends upon the sign and magnitude of V_f. Although the origin of the formation volume is atomistic in nature, a formulation in the context of continuum elasticity has also been adapted to interpret V_f in terms of an internal transformation of the material. This picture assumes the existence of a continuum defect that has a reference state independent of the surrounding crystal. Figure 3.1 shows the theoretical construction that would create such an “Eshelby inclusion” [Eshelby 1961]: material is removed from a continuous medium, the removed material undergoes a transformation described by a tensor V_t, then it is reinserted into the medium. Upon reinsertion into the medium there will be elastic distortions both of the inclusion and of the crystal around it.
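As a purely numerical illustration of Eq. (3.1) (all numbers below are hypothetical, chosen only to show the double contraction \sigma : V_f and the unit conversion), a hydrostatic stress of half a GPa acting on a formation volume of a few cubic angstroms shifts G_f by a few hundredths of an eV:

```python
import numpy as np

# Illustration of Eq. (3.1), G_f = E_f - T*S_f - sigma : V_f, with made-up numbers.
kB = 8.617e-5                                  # Boltzmann constant, eV/K
E_f, S_f, T = 3.5, 1.0 * kB, 300.0             # hypothetical formation energy/entropy
V_f = np.diag([5.0, 5.0, 5.0])                 # formation-volume tensor, A^3 (isotropic)
sigma = np.diag([0.5, 0.5, 0.5]) * 6.2415e-3   # 0.5 GPa hydrostatic stress in eV/A^3

W_ext = np.tensordot(sigma, V_f)               # double contraction sigma : V_f
print(E_f - T * S_f - W_ext)                   # G_f in eV
```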
It is worth noting that the change in volume (and potentially of shape) described by V_t is the change in shape and volume of the inclusion when not interacting elastically with the surrounding material. Thus V_t is not equal to the distortion of the inclusion, because this distortion is affected by the elasticity of the medium. If the volume of the part of the medium which is removed decreases in step 2, upon reinsertion it will make the medium shrink. It is therefore called a center of contraction. If an external stress is applied, there will be an interaction between the center of contraction and the crystal. The tensor V_t is calculated by assuming a homogeneous strain over the transformed material, and multiplying this strain by the initial, scalar volume of this region. When a vacancy is to be represented by this continuum analogue, the scalar volume is often assumed to be the atomic volume, \Omega. In this interpretation the external work is exactly balanced by the work done to transform the inclusion against the external stress, \sigma, and can be shown to result in an external work W_{ext} = \sigma : V_t. When evaluated at the boundary the strain field results in the change in volume and shape, V_t, that must be equivalent to V_f to be consistent with the thermodynamic formulation. However, the arguments leading to W_{ext} = \sigma : V_t are meaningful only within continuum elasticity [Eshelby 1961], a theory that loses validity in the neighborhood of the defect. While the interpretation of V_t as a continuum transformation is not physically relevant for a point defect, this transformation can be used to calculate the elastic strain and stress fields in the vicinity of the transformation if V_f is known.

FIG. 3.2: Dipole representation of the point defect.

It is possible to extend the Eshelby inclusion model to point defects such as vacancies by shrinking the inclusion to a point. The elastic field of a vacancy is then modeled using a force dipole. This dipole is similar to an electric dipole. It is composed of forces an infinitesimally small distance apart. Since forces can act in any direction and be separated by a displacement in any direction, the dipole is most generally represented by a tensor. This model can work even for a defect with an anisotropic stress field. When the dipole is proportional to the identity matrix it represents an isotropic center of contraction or expansion. If three force pairs f_1, f_2 and f_3 are applied at three points d_1, d_2 and d_3 away from the vacancy (Fig. 3.2), the dipole is defined as [de Graeve 2002]

D = \sum_i f_i \otimes d_i. (3.3)

These three vectors do not have to be orthogonal, they only have to be a basis of 3D space. Far away, when r >> d, the force field is

F(r) = -D \cdot \nabla_r [\delta(r)]. (3.4)

For the sake of simplicity, the origin of r is taken to be at the defect. The analytical form of Eq. (3.4) ensures that the sum of forces is 0. The sum of moments is

\int r \times F(r) \, dr = (D_{32} - D_{23}) e_1 + (D_{13} - D_{31}) e_2 + (D_{21} - D_{12}) e_3, (3.5)

where D_{ij} is the (i, j) component of the tensor D and (e_i) are unit vectors. Equilibrium requires that Eq. (3.5) be equal to 0 and thus that D be symmetric. The eigenvalues of a symmetric matrix are real and its eigenvectors can be chosen to be orthonormal. The dipole tensor can thus be written as

D = R \cdot D' \cdot R^{-1} = R \cdot \mathrm{diag}(f'_1 d'_1, \, f'_2 d'_2, \, f'_3 d'_3) \cdot R^{-1}, (3.6)

where R is a rotation matrix.
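A short sketch of Eq. (3.3) may help fix the index bookkeeping. The three force pairs below are invented and aligned with the coordinate axes, so that the resulting dipole is proportional to the identity (an isotropic center of contraction or expansion):

```python
import numpy as np

# Eq. (3.3) for three made-up force pairs: D = sum_i outer(f_i, d_i).
d = 1.2 * np.eye(3)                        # separation vectors d_i (A), along x, y, z
f = 7.0 * np.eye(3)                        # force vectors f_i (eV/A), same directions

D = sum(np.outer(f[i], d[i]) for i in range(3))
print(D)                                   # 8.4 * identity (eV), i.e. isotropic
print(np.allclose(D, D.T))                 # symmetric, so no net moment (Eq. 3.5)
```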
Having characterized the force distribution associated with the defect the displacement field can be derived from the elastic solution in a generalized elastic medium. The most general derivation of this kind is to compose the solution from the Green’s function that satisfies the equation ∂ 2 G km (r ) C ijkl + δ im δ(r ) = 0 . (3.7) ∂x j ∂x i Here Cijkl is the elastic modulus tensor of the solid and Gkm is the tensorial elastic Green’s function. Once the Green’s function is derived from Eq. (3.7), the displacement field can be expressed u(r ) = −∇ r [G (r ).D] . (3.8) The resulting solution can be calculated from the expression [Barnett 1972, de Graeve ∫ [− (M ) ] 1 π u(r ) = 2 .D.r + (J.D.z ) dψ . ˆ (3.9) 2 0 4π r Here M-1 is the inverse of the matrix M, where M is defined by M ir (z ) = C ijrs z j z s , (3.10) r is given by r= , (3.11) J is such that2 J ij = C kpln M ik M lj (z p rn + z n rp ) -1 -1 ˆ ˆ (3.12) and z is ⎛ cosψ sinθ + sinψ cosθ cosϕ ⎞ ⎜ ⎟ z = ⎜ − cosψ cosθ + sinψ sinθ cosϕ ⎟ (3.13) ⎜ − sinψ sinϕ ⎟ ⎝ ⎠ where θ and ϕ are the polar and azimuthal angles of r. The strain is ∫ [2(M ) ] 1 π ε(r ) = .D.r ⊗ r − 2(J.D.r ⊗ z + J.D.z ⊗ r ) + (A.D.z ⊗ z ) dψ -1 s s s ˆ ˆ ˆ ˆ 2 0 4π r where the “s” stands for symmetric, i.e. A + At As = . (3.15) If the medium is isotropic, Eq. (3.9) can be written in a closed form [Hirth and Lothe u(r ) = − 2 4π C11 r and the strains are ε rr = ∂u r (D.r ).r ˆ ˆ ∂r 2π C11 r J is used here instead of F (notation used by Barnett) to avoid confusion with forces. ε θθ = ε ϕϕ = (D.r ).r ˆ ˆ . (3.18) r 4π C11 r The stresses then are C11 − C12 (D.r ).r ˆ ˆ σ rr = C11 ε rr + C12 ε θθ + C12 ε ϕϕ = 3 2 π C11 r C11 − C12 (D.r ).r ˆ ˆ σ θθ = σ ϕϕ = C12 ε rr + (C11 + C12 ) ε θθ = − 3 . (3.20) 2 2π C11 r C11 − C12 As we assume isotropy, is equal to C44. So (keeping in mind that C44 actually C11 − C12 means “isotropic C44”, i.e. ), we can write C 44 (D.r ).r ˆ ˆ σ rr = (3.21) C11 π r 3 C 44 (D.r ).r ˆ ˆ σ θθ = σ ϕϕ = − . (3.22) C11 2π r 3 The radial force on an area A a distance r from the defect is then A C 44 (D.r ).r ˆ ˆ F = σ rr A = (3.23) π C11 r 3 where A is the surface of the atom on which the force applies. A priori, the dipole may not be enough to represent any point defect and higher order terms, such as a quadrupole, may be necessary. However, results for the vacancy show that the dipole is a good description of this point defect. It is possible that more complicated defects or clusters require a quadrupole term. In any case, the contribution of higher order terms to the stress field should die off faster than the dipole and may be noticeable only close to the defect. 3) Calculating the formation volume a) Change of volume of the simulation cell The most common method used to extract the formation volumes of defects has been the direct measurement of the change in volume of the relaxed supercell upon the introduction of the defect [Zhao et al. 1999A, Zhao et al. 1999B]. This is a rigorously correct method of calculating the formation volume given two assumptions: that the core energy, defined in Eq. (3.2), is not pressure dependent and that the supercell size is sufficiently large such that defect-defect interactions have a negligible effect on the elastic relaxation of the cell. The former is typically a good assumption. The latter may not always be a good assumption for the small supercell sizes typically simulated by ab initio calculation. 
The vacancy-vacancy interaction will be shown to have a negligible effect even for small systems; however this may not be the case for other defects, in particular the anisotropic ones.

b) Obtaining the dipole from positions and forces

Although the above elastic analysis provides a means to calculate the displacement and stress fields around a defect of elastic dipole D, it does not provide a means to extract this dipole value. The dipole value can however be extracted from the forces on the atoms surrounding the defect. In an isotropic medium, the radial force expected on atom n from the dipole is

F'^n = A^n \frac{C_{44}}{\pi C_{11}} \frac{D \cdot r^n}{|r^n|^4}, (3.24)

where r^n is the position of atom n relative to the center of the defect and A^n is the surface area associated with that atom. This provides the forces as a function of the dipole. In fact the dipole is unknown and the forces can be obtained from atomistics. Equation (3.24) must therefore be inverted to give the dipole as a function of the forces. We define the vector \Delta^n as the difference between the actual force on atom n, F^n (obtained from atomistic simulations), and the radial force expected from the dipole in an isotropic medium, F'^n:

\Delta^n = F^n - F'^n = F^n - A^n \frac{C_{44}}{\pi C_{11}} \frac{D \cdot r^n}{|r^n|^4}. (3.25)

We then define the scalar \Delta by

\Delta^2 = \sum_n (\Delta^n)^2 = \sum_n \left| F^n - A^n \frac{C_{44}}{\pi C_{11}} \frac{D \cdot r^n}{|r^n|^4} \right|^2. (3.26)

If the representation of a vacancy (as a center of contraction) in elasticity and its atomistic counterpart were in perfect correspondence, \Delta would be 0. But since D has 6 components, while there are 3n forces and 3n positions, it is not generally possible to find a D that satisfies the condition \Delta = 0. We therefore pick the tensor D which minimizes \Delta^2. To this end, we calculate the derivatives of \Delta^2 with respect to the components of D and set them to zero,

\frac{\partial \Delta^2}{\partial D_{ij}} = \frac{2 C_{44}}{\pi C_{11}} \sum_n A^n \left( -\frac{F_i^n r_j^n}{|r^n|^4} + A^n \frac{C_{44}}{\pi C_{11}} \sum_k \frac{D_{ik} r_k^n r_j^n}{|r^n|^8} \right) = 0. (3.27)

This gives

\sum_n A^n \frac{F_i^n r_j^n}{|r^n|^4} = \frac{C_{44}}{\pi C_{11}} \sum_n \sum_k (A^n)^2 \frac{D_{ik} r_k^n r_j^n}{|r^n|^8}. (3.28)

Defining

X = \sum_n A^n \frac{F^n \otimes r^n}{|r^n|^4}, (3.29)

Y = \frac{C_{44}}{\pi C_{11}} \sum_n (A^n)^2 \frac{r^n \otimes r^n}{|r^n|^8}, (3.30)

Eq. (3.28) can be rewritten as [de Graeve 2002]

D = X \cdot Y^{-1}. (3.31)

Equation (3.31) provides a means to calculate the value of the dipole using the positions of and the forces on the atoms from atomistics. It is a closed-form solution for a generalized defect in an isotropic medium. We will take the sum in Eqs. (3.29) and (3.30) to be over atoms on a cubic shell, as shown in Fig. 3.3.

FIG. 3.3: Cubic shells used to calculate the dipole.

From Eqs. (3.29) and (3.30) the forces on atoms are needed to calculate the dipole. However, at equilibrium the net force on any atom is zero. Figure 3.4 shows, in black, an atom belonging to the shell. If all atoms within the shell (white atoms) were removed, the only force remaining would be the force from the atoms outside the shell (gray atoms). For the black atom to be at equilibrium, the traction due to atoms inside the shell (wide arrow) must cancel out the traction from the atoms outside the shell. Thus the traction across the surface of the shell due to the vacancy is negative the force on this atom from atoms outside the shell.

c) Simulation techniques

So far an expression was obtained for the dipole as a function of positions and forces extracted from atomic simulations. The question of the choice of the technique to use in atomic simulations to obtain forces remains. We now introduce several atomic simulation techniques and compare their strengths and weaknesses. The families of representations of materials are ab initio, tight-binding and empirical potentials.
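The least-squares construction of Eqs. (3.29)-(3.31) is straightforward to implement. The sketch below (with invented elastic constants, atom areas, and a synthetic force field generated from Eq. (3.24) for a known dipole) is only meant to illustrate the linear algebra; it is not the code used in the thesis:

```python
import numpy as np

def dipole_from_forces(positions, forces, areas, C11, C44):
    """Least-squares dipole of Eq. (3.31), D = X . Y^{-1}, with
    X = sum_n A_n outer(F_n, r_n) / |r_n|^4                      (Eq. 3.29)
    Y = (C44 / (pi C11)) sum_n A_n^2 outer(r_n, r_n) / |r_n|^8   (Eq. 3.30).
    Positions r_n are measured from the defect."""
    X, Y = np.zeros((3, 3)), np.zeros((3, 3))
    for r, F, A in zip(positions, forces, areas):
        r4 = np.dot(r, r) ** 2
        X += A * np.outer(F, r) / r4
        Y += (C44 / (np.pi * C11)) * A**2 * np.outer(r, r) / r4**2
    return X @ np.linalg.inv(Y)

# self-consistency check: generate forces from Eq. (3.24) with a known dipole
rng = np.random.default_rng(0)
C11, C44, A = 1.0, 0.35, 5.0                       # arbitrary units
D_true = 8.5 * np.eye(3)
pos = 6.0 * rng.normal(size=(200, 3))
frc = [A * (C44 / (np.pi * C11)) * D_true @ r / np.dot(r, r)**2 for r in pos]
print(dipole_from_forces(pos, frc, [A] * len(pos), C11, C44))   # recovers D_true
```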
FIG. 3.4: Force on an atom belonging to the shell (black atom) from the atoms outside the shell (gray atoms) and “force from the vacancy” (wide arrow).

In ab initio simulations, the Schrödinger equation is solved (under some assumptions) [Kohn and Sham 1965, Kohn 1999]. These calculations are intrinsically quantum mechanical, which makes them very accurate. However they are computationally intensive, which prevents the simulation of large systems. Empirical potentials eliminate the electronic degrees of freedom. The force from one atom on another atom is calculated as a function of their separation distance and the location of surrounding atoms. The expression for this potential is not theoretically derived, but some insight from quantum mechanics may be used in motivating these expressions. Empirical potentials have parameters which are fitted to the established properties of the material of interest (known experimentally or from ab initio calculations). As a result, while the lattice parameters and cohesive energies are outputs of ab initio calculations, they are inputs for empirical potentials. Empirical potentials can be considered to be mostly interpolations between known properties of the material in question (relative energies of crystal structures, elastic properties, etc.). Therefore predictions which rely on aspects of the potential far from the fitting regime are not quantitatively reliable. Tight-binding [Slater and Koster 1954, Goodwin et al. 1989] is another simulation technique; it uses a very simplified quantum mechanical description of the atoms. This makes these simulations simpler to implement and less computationally intensive, but also less accurate than ab initio calculations. Their relative simplicity also allows for larger systems than ab initio. Therefore, both in terms of system size and of accuracy, tight-binding (TB) is intermediate between empirical potentials and ab initio. Stillinger and Weber [Stillinger and Weber 1985] designed an empirical potential to study the melting of silicon. Due to the covalent nature of silicon bonds, a mere two-body term does not suffice, because the energy would then be proportional to the number of bonds, which would drive the system to a close-packed structure. The Stillinger Weber (SW) potential uses both two-body and three-body terms:

\Phi = \sum_{i<j} v_2(r_{ij}) + \sum_{i<j<k} v_3(r_{ij}, r_{jk}, \theta_{jik}). (3.32)

The first summation is over pairs of atoms and the second is over triples. For a given pair of atoms, the two-body term depends only on the distance r between the atoms,

v_2(r) = \varepsilon A \left[ B \left( \frac{r}{\sigma} \right)^{-4} - 1 \right] \exp\left( \frac{1}{r/\sigma - a} \right), (3.33)

where \varepsilon, A, B and \sigma are positive constants. The exponential term drives v_2 to 0 when r/\sigma approaches the constant a from below. \sigma a is therefore a cut-off distance. Equation (3.33) applies when r/\sigma < a, and v_2 is set to 0 when r/\sigma > a. The value of a is chosen such that the cut-off occurs between first and second nearest neighbors; as a consequence there is no two-body interaction between second nearest neighbors. Whereas, physically, atoms further apart contribute to the energy, limiting two-body interactions to first nearest neighbors simplifies the relationship between the model parameters and many properties such as lattice parameters, bond lengths for various crystal structures, and elastic constants. This greatly simplifies the routine to optimize the parameters. FIG.
3.5: The two-body terms only depend on the interatomic distance while the three-body terms also account for the angle between the bonds. The three-body term models aspects of the sp3 bonding that cannot be adequately described by two-body interactions. In this term the energy depends both on distances and angles, as shown in Fig. 3.5, v 3 (rij , rik , rjk , θ ijk ) = ε h (rij , rik , θ jik ) + ε h (rji , rjk , θ ijk ) + ε h (rki , rkj , θ ikj ) (3.34) ⎛ γ γ ⎞⎛ 1⎞ h (r1 , r2 , θ ) = λ exp⎜ ⎜ r / σ − a + r / σ − a ⎟⎜ cosθ + 3 ⎟ ⎟ (3.35) ⎝ 1 2 ⎠⎝ ⎠ and λ and γ are positive constants. Again the exponential plays the role of a cut-off and h is zero when r1 or r2 is greater than σa. Notice that in the case of a perfect diamond cubic crystal, cos θ = -1/3 due to the tetrahedral symmetry and the three-body terms do not contribute to the energy. When an atom is removed to form a vacancy, its first nearest-neighbors can relax (generally inward). Different simulation techniques predict different amounts of relaxation and different formation energy. Table 3.1 shows the range of formation energy of a vacancy and of the radial component of the displacement of the first nearest neighbors from experiment, ab initio, tight binding, Stillinger Weber and other empirical potentials. Space group Td corresponds to a radial displacement of the first nearest-neighbors while in D2d there is a pairing of nearest-neighbors which form two dimers with the distance between the two atoms of a dimer smaller than the distance from atoms of the other dimer. The formation energy obtained by ab initio calculations is not very wide-ranged and is consistent with experimental results. The displacement of the first nearest neighbors, on the other hand, can vary greatly (by a factor of two.) In ab initio simulations for instance it varies between -0.48 Å and -0.22 Å. Pushka and coworkers also found the symmetry to be either D2d or Td depending on the size of their system [Pushka et al. 1998]. This indicates that energy converges faster than geometry and that geometric data, such as formation volumes, cannot be obtained with small systems. According to empirical potentials, the first nearest neighbors may move inwards or outwards. These simulations are the least reliable because the potential are fitted to perfect crystals and therefore poorly model the changes in bonding near a technique references space energy (eV) displacement (Å) experiment Watkins 1964; Dannefaer 3.6 ± 0.2 ab initio Antonelli 1989, 1998; Zhu D2d* 3.3 → 3.65 -0.48 → -0.22 1996; Puska 1998; Zywietz tight Song 1993; Lenosky 1997; D2d 3.68 → 5.24 -0.50 → -0.42 binding Tang 1997; Munro 1999 SW Stillinger and Weber 1985 Td 2.82 - 0.56 other Balamane 1992 Td 2.82 → 3.70 -0.51 → +0.24 Table 3.1: Formation energy of a vacancy and displacement of the first nearest neighbors from experiment, ab initio, tight binding, Stillinger Weber and other empirical potentials. *: there exist a few reports of Td symmetry. Any technique, be it experimental or computational, has limitations. It is therefore not always possible to use only one technique. Simulation techniques can be limited in two ways: accuracy and computational cost. The most accurate techniques being the most computationally intensive, they are limited to small systems. Computationally less intensive techniques on the other hand are not efficient far from equilibrium, in particular where the lattice is distorted (defects, surfaces.) 
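For readers who want to evaluate the Stillinger-Weber terms directly, here is a small sketch in reduced units (energies in \varepsilon, lengths in \sigma). The parameter values are the published silicon ones, and the angular factor is written with the square it carries in the published potential (the exponent appears to have been lost in the rendering of Eq. (3.35) above); none of this is code from the thesis.

```python
import numpy as np

# Stillinger-Weber terms of Eqs. (3.33)-(3.35), published Si parameters
# [Stillinger and Weber 1985], in reduced units.
A, B, a = 7.049556277, 0.6022245584, 1.80
lam, gam = 21.0, 1.20

def v2(r):
    """Two-body term; identically zero beyond the cut-off r = a (units of sigma)."""
    return 0.0 if r >= a else A * (B * r**-4 - 1.0) * np.exp(1.0 / (r - a))

def h(r1, r2, cos_theta):
    """Three-body contribution h(r1, r2, theta); vanishes at the tetrahedral angle."""
    if r1 >= a or r2 >= a:
        return 0.0
    return lam * np.exp(gam / (r1 - a) + gam / (r2 - a)) * (cos_theta + 1.0 / 3.0) ** 2

print(v2(1.12))                 # two-body energy near the Si bond length (~1.12 sigma)
print(h(1.12, 1.12, -1.0 / 3))  # exactly zero for the ideal diamond-lattice angle
```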
Multiscale modeling of materials aims to bring two (or more) different techniques together, each providing its specific strength(s) and compensating for the weakness(es) of the other technique. One possibility is to use several techniques within the same simulation: ab initio is used where accuracy is needed and an empirical potential is used where structural changes are not expected to occur. This provides a means to increase the system size without increasing the computational cost significantly. A slightly different kind of simulation uses atomistics close to a singularity (crack tip, defect, indenter) and continuum mechanics for the rest of the system [Shilkrot et al. 2002]. 4) Results a) Atomistic results The dipole tensor, D, gives the magnitude and anisotropy of the center of contraction and cannot be obtained by elasticity, but must be determined by the microscopic structure of the point defect. A number of different techniques have been used to characterize the relaxation around a point defect. One typical method is to note the relaxation of the nearest neighbor atoms. However this method is not effective for describing the asymptotic elastic relaxation in the vicinity of the defect, which is important for accurately calculating the relaxation volume, i.e. the quantity necessary to predict the thermodynamic response of the defect to stress. We detail here a systematic method for extracting the relaxation around the vacancy. One method to obtain D would be to fit the displacement curve as a whole to the asymptotic elastic solution. While this is feasible it is not an efficient way to proceed and involves fitting the curve in regions close to the defect and close to the periodic boundary where the solution in an infinite medium cannot be expected to apply. Rather we obtain D from Eq. (3.31), i.e. we find the value of the dipole that provides a best fit to the forces obtained form atomistics. b) Isotropy of the vacancy in silicon Equations (3.9) to (3.31) make no assumptions as to the isotropy of the dipole although (3.16) to (3.31) do assume an isotropic elastic medium. Equilibrium only requires that the tensor be symmetric to ensure that there is no net moment. However, conjugate gradient (CG) calculations show that the actual dipole of a vacancy, as may be expected, is nearly isotropic. Figure 3.6(a) shows the ratio of off-diagonal term of D to diagonal terms of D. Far from both the vacancy and the boundaries the dipole is very close to being diagonal. At any shell the non-diagonal terms are never more than a few percent of the diagonal terms. Figure 3.6(b) shows the standard deviation for the diagonal terms of D normalized by the trace of D as a function of the shell where D is calculated. When D is calculated far from the vacancy the standard deviation is less than 0.1 % of the trace and the three diagonal terms are essentially equal. Thus for shells far enough from the vacancy, the dipole is nearly proportional to the identity tensor. An example of such a tensor (in eV) is ⎛ 8.506 3.2x10 -3 − 1.7 x10 -3 ⎞ ⎜ ⎟ D = ⎜ − 9.4x10 -3 8.501 − 5.4x10 -3 ⎟ . (3.36) ⎜ − 0.2x10 -3 − 2.4 x10 -3 8.506 ⎟ ⎝ ⎠ Therefore, we can write the dipole as D=DI (3.37) where D is a scalar and I is the identity tensor. In what follows, when we refer to the dipole, we will be referring to the scalar D. FIG. 3.6: The ratio of off-diagonal terms of D to diagonal terms (a) and the standard deviation for the diagonal terms of D (b) as a function of the shell where D is calculated. 
Plotted for 512, 1 728, 4 096, 8 000, 13 824 and 32 768 atoms. c) Displacement field and formation volume Once extracted the value of the dipole can be used to calculate the formation volume. The simplest case to calculate is an isotropic elastic sphere of radius R, where the radial displacement is given by the expression: D ⎡ C11 − C12 ⎛ r ⎞ ⎤ u r (r ) = ⎢1 + 2 ⎜ ⎟ ⎥. (3.38) 4π r 2 C11 ⎢ ⎣ C11 + 2C12 ⎝ R ⎠ ⎥⎦ While the first term arises directly from the asymptotic elastic field from Eq. (3.16), the second term is imposed by the free boundary at R. From Eq. (3.38) it follows that the measured formation volume is related directly to the displacement at the outer boundary V f = 4π R 2 u r (R ) = . (3.39) C11 + 2C12 Note that Vf is independent of R. For a large system, where continuum elasticity applies, the formation volume is independent of the size of the system. Since a large cube should not be different from a large sphere, Eq. (3.39) is expected to hold for any isotropic system where finite size effects can be neglected, independent of geometry. The dipole values that were obtained from a series of conjugate gradient calculations using the Stillinger-Weber model ranging in size from 512 atoms to 32 768 atoms. The value of D was calculated on concentric shells around the defect. The shell of first nearest neighbors of the vacancy is not used: since there is nothing strictly inside this shell, the external force is 0 at equilibrium. Shells too close to the vacancy show evidence of discreteness effects. This is to be expected since continuum elasticity does not apply down to the atomic scale. Ten samples were used for each system size except the larger ones since they were more computationally-intensive. In some cases the simulations converge to distinct vacancy structures with different formation energies, FIG. 3.7: Dipole values as a function of the cubic shell at which the force data is extracted for 30 samples made of 4 096 atoms. different formation volumes and different dipole values. Figure 3.7 shows the value of the dipole as a function of the shell where it is calculated for 30 samples made of 4 096 atoms. There are two kinds of curves corresponding to two structures of the vacancy. Within each structure there exists some variation of the properties. Only simulations leading to the lowest energy structure were considered to plot the figures (other than Fig. 3.7) in this chapter. In all figures bearing error bars, the error bars are sample-to- sample variations among the samples of the lowest-energy structure. Therefore they do not account for systematic errors due to system size effects. Table 3.2 shows the number of lowest energy samples obtained for each system size. system size 512 1 728 4 096 8 000 13 824 32 768 number of samples 10 10 10 10 6 4 Table 3.2: Number of sample used for each system size. Figure 3.8(a) shows the dipole values as a function of the shell at which it is calculated. Figure 3.8(b) shows the dipole as a function of the shell over shellmax, where shellmax is half the vacancy-vacancy distance. Thus shell/shellmax varies between 0 at the FIG. 3.8: Dipole values as a function of the cubic shell at which the data are extracted (a) and as a function of the ratio of the shell to the largest shell (b). Each shell is numbered by the distance that separates the closest atom in the shell from the vacancy. The error bars are from sample-to-sample standard deviation. Plotted for 512, 1 728, 4 096, 8 000, 13 824 and 32 768 atoms. 
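Reading Eq. (3.39) above as V_f = 3D/(C_{11} + 2C_{12}) (which is what Eq. (3.38) gives when evaluated at r = R and inserted into V_f = 4\pi R^2 u_r(R); the numerator did not survive the rendering), the conversion from a dipole value to an isotropic formation volume is a one-liner. The elastic constants below are placeholder, silicon-like numbers, not the Stillinger-Weber values used in the thesis:

```python
def formation_volume(D, C11, C12):
    """V_f = 3 D / (C11 + 2 C12): Eq. (3.38) evaluated at r = R and inserted into
    V_f = 4 pi R^2 u_r(R).  D in eV, elastic constants in eV/A^3, result in A^3."""
    return 3.0 * D / (C11 + 2.0 * C12)

GPA_TO_EV_A3 = 6.2415e-3                  # 1 GPa expressed in eV per cubic Angstrom
C11 = 160.0 * GPA_TO_EV_A3                # placeholder, silicon-like value
C12 = 80.0 * GPA_TO_EV_A3                 # placeholder, silicon-like value
print(formation_volume(8.5, C11, C12))    # dipole ~8.5 eV (cf. Eq. 3.36) -> V_f in A^3
```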
vacancy and 1 at the boundary of the simulation cell. The fact, shown in Fig. 3.8(b), that the curves for different systems sizes are close together far from the vacancy is an indication that linear elasticity applies there. The error bars correspond to sample-to- sample standard deviation. The shell of the first nearest neighbors was not plotted as indicated above and the shell of second nearest neighbors gives a very high dipole due to finite size effects. The third to fifth shells give a fairly low dipole value, again a finite size effect. The sixth shell and above form a plateau where the dipole is almost constant. For shells further out, the boundary has an increasingly important influence and the dipole decreases. The sixth and seventh shells will be used to extract the dipole because they are the smaller shells without finite size effects. For small systems, 512 and 1 728 atoms, there is no evident plateau since there is no region far enough from both the defect and the boundary. We can now use the dipole extracted from the atomistic simulations to obtain the formation volume from Eq. (3.39) and Stillinger Weber elastic constants. This volume is plotted in Fig. 3.9 along with the direct measurements of the change of volume of the simulation supercell. The calculated formation volume does not match the relaxation of the simulation cell. The reason for this discrepancy is that the calculated formation volume assumes that the system is isotropic. Since there is no closed-form expression for the stress field in the anisotropic case a fully anisotropic calculation would be much more complicated. In the next sections a method will be discussed to correct for anisotropy when calculating the dipole value from isotropic equations. FIG. 3.9: The formation volume versus the system size measured both using Eq. (3.39) (solid line) and from direct measurements of the change of volume of the simulation supercell (dashed line). The error bars correspond to sample-to-sample standard deviation; they do not account for systematic errors. d) Finite element calculations In order to correct for the assumption of isotropy made in Eqs. (3.29) and (3.30) it is necessary to calculate a value of D/Vf appropriate for determining the formation volume in the anisotropic medium given the dipole extracted assuming an isotropic medium. This has been addressed by a series of finite element (FE) calculations in which the stress field around the defect was obtained and related to the volumetric relaxation of the box [Bouville et al. 2004D]. Obtaining the relationship between the extracted dipole and the formation volume required a convergence study of the solution with respect to the refinement of the discretization. The constitutive behavior of the mesh was taken from the anisotropic (cubic) elastic moduli of the Stilinger Weber potential [Balamane et al. 1992]. FIG. 3.10: Relaxation volume as a function of the number of elements. The vacancy was modeled by a cube-shaped hollow region of dimensions 1/48 x 1/48 x 1/48 of the system size located at the centroid of the mesh. The dipole was represented by point forces, directed toward the origin, applied at the centers of the inner faces of this cube. With the points of application of these forces being known, their magnitudes were specified such that the dipole strength was 1 nN.Å (= 0.624 eV). For the dipole to be equivalent to forces applied at the first nearest neighbors of the vacancy, this corresponds to a system of dimensions 12 x 12 x 12 unit cells. 
The outer surfaces of the cube were allowed to relax inward while maintaining the planarity of the surfaces. The extent of this relaxation was varied until corresponding normal force on each outer face vanished. These boundary conditions were easier to implement than periodic ones, and were therefore preferred. They resulted in displacement fields for which the relaxation volume differed by less than 10-2 Å3 from the fields for periodic boundary conditions. Since the dipole is an elastic singularity and the cubic shape introduces further stress concentrations, the finite element solutions were slow to converge with mesh refinement. This necessitated considerably fine meshes. Figure 3.10 shows the relaxation volume as a function of the number of elements. If the number of elements is 6N3, the number of nodes is 6(N+1)3 - 12(N+1)2. The three-dimensional stress tensor obtained at element quadrature points with each mesh was projected to the nodes of the mesh using a least-squares formulation. The radial stress component at each node was then obtained. The slow convergence rate applies to these stresses also. Finite element error analysis predicts that the stress projected to the nodes converges at the rate |σnode - σexact| ≤ C h2, (3.40) where h is the element size and C is a constant [Hughes 2000]. The same is true of the volume. Thus using the results from two mesh sizes the asymptotic value can be ⎛ h1 ⎞ ⎜ ⎟ V2 − V1 ⎜h ⎟ V≈⎝ 2⎠ 2 . (3.41) ⎛ h1 ⎞ ⎜ ⎟ −1 ⎜h ⎟ ⎝ 2⎠ Figure 3.11 shows the volume obtained from Eq. (3.41) where the size of mesh 2 is constant (6x563 elements) and the size of mesh 1 is on the x axis. This shows that, unexpectedly, Eq. (3.41) does not provide an asymptotic value independent of the choice of the meshes. This is because convergence is very slow for finite element calculations with a singularity. The stresses have the same problem. FIG. 3.11: The volume obtained from Eq. (3.41) where the size of mesh 1 is the x axis and the size of mesh 2 is 6x563. Figure 3.12(a) shows the output dipole per unit input dipole as a function of the shell where it is calculated for five finite element meshes. The curves for the finer meshes have similar shapes and they are similar to what was observed atomistically (Fig. 3.8). However the magnitude of the dipole is different for the different meshes due to the lack of convergence. The high values close to the defect are due to finite size effects and the fact that the dipole was implemented as force pairs a finite distance apart. Equation (3.39) provides a relationship between the formation volume and the dipole for a sphere of radius R made of an isotropic material. However the proportionality constant applies only to an isotropic medium. In an anisotropic medium the formation volume is also of the form Vf = K D but in this case the proportionality constant K is unknown. Since both atomistic and finite elements results follow this relationship, f f VFE Vat = . (3.42) D FE D at FIG. 3.12: Ratio of the output dipole to the input dipole (a) and to the relaxation volume in eV/Å3 (b) as a function of the cubic shell at which the data is extracted from finite element calculations. Closed circles: 6x63 elements, open triangles: 6x123, closed diamonds: 6x243 elements, crosses: 6x363 elements and open squares: 6x483 elements. The thick solid line in (b) is an extrapolation. f f f In order to obtain Vat from Dat, only the ratio VFE / D FE is needed. Although VFE and DFE converge slowly their ratio may not. 
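The two-mesh extrapolation of Eq. (3.41) is ordinary Richardson extrapolation for a quantity converging as O(h^2), cf. Eq. (3.40). A minimal sketch, with a fabricated exactly-quadratic sequence as a self-check:

```python
def richardson(V1, h1, V2, h2):
    """Richardson extrapolation assuming the O(h^2) convergence of Eq. (3.40):
    V_i = V + C*h_i**2  =>  V = (V2*(h1/h2)**2 - V1) / ((h1/h2)**2 - 1)."""
    q = (h1 / h2) ** 2
    return (V2 * q - V1) / (q - 1.0)

# check on a fabricated, exactly quadratically convergent pair of meshes
V_exact, C = 15.0, 40.0
h1, h2 = 1.0 / 24, 1.0 / 56
V1, V2 = V_exact + C * h1**2, V_exact + C * h2**2
print(richardson(V1, h1, V2, h2))          # returns 15.0 up to round-off
```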
Figure 3.12(b) shows the ratio of the output dipole to the volume, D FE / VFE , as a function of the shell. Unlike the volume and the dipole taken separately, the ratio is nearly converged. The two curves are closer together than those in Fig. 3.12(a). Figure 3.13 shows DFE as a function of VFE for five different f f mesh sizes. D FE / VFE is almost independent of the mesh although VFE and DFE are not converged yet. The medium used in the FE calculations is anisotropic. The dipole was extracted from the FE calculations using Eqs. (3.29) through (3.31). Thus the error introduced in the results shown in Fig. 3.8 by the use of Eqs. (3.29) through (3.31), which assume an FIG. 3.13: DFE (calculated at the shell situated at 0.45) in eV as a function of VFE in Å3 for five different mesh sizes: 6x63, 6x123, 6x243, 6x323 and 6x483. The dotted line shows that D FE /VFE is almost constant for the larger shells 6x243, 6x323 and 6x483 (filled symbols). isotropic medium, also exists in the results relating the dipole value to the formation volume shown in Fig. 3.10. Since the FE results are used to derive a formation volume from the dipole extracted in this way it is reasonable to expect that the errors cancel out and the formation volume obtained no longer includes a systematic error arising from an assumption of isotropy. Since D FE / VFE is close to convergence, Eq. (3.41) can be applied to it. Figure 3.14 shows the result for D FE / VFE (in eV/Å3) thus obtained. The dotted line is a power law fit to the part of the data far enough from the vacancy for finite size effects to be neglected. Its equation is ⎛ shell ⎞ D FE / V FE = 0.47 − 0.59⎜ ⎜ shell ⎟ ⎟ . (3.43) ⎝ max ⎠ At the defect, D FE / VFE ≈ 0.47 eV/Å3, as opposed to 0.67 eV/Å3 in the isotropic case. FIG. 3.14: D FE / VFE (in eV/Å3) from finite elements as a function of the shell at which the data is extracted. The dotted line is a power law fitted to the data far from the vacancy. e) System size effects Figure 3.15(a) shows as a function of the system size the values of the formation volume obtained from the dipole and of the formation volume calculated by directly measuring the change in volume of the supercell upon the introduction of the defect to a FIG. 3.15: The formation volume as a function of the system size (a) and of 104 over the system size (b). Solid line: volume calculated using the dipole; dotted line: direct extraction from atomistic simulations. The error bars correspond to sample-to-sample standard deviation; they do not account for systematic errors. system held at zero pressure. Figure 3.15(b) shows the formation volume as a function of 10 000 over the system size. If there is convergence, the formation volume for an infinitely large system can be read at the intersection between the curve and the y-axis. The volume obtained through the dipole converges to a value of 15 Å3 while the direct measurement gives 13.8 Å3. 5) Summary Accurate calculation of formation volumes from atomistic models is important for modeling stress-defect interactions during diffusive processes. The Stillinger Weber potential was used because it allows for the simulation of larger systems than quantum mechanical methods. We presented a new method which calculates the formation volume by matching stresses near the defect to the asymptotic elastic prediction. This method has been shown to converge with system size to a value close to that obtained by measuring the change in volume of the simulation cell. 
This validates the new method presented in this chapter. It is now possible to find the elastic field around the defect given Vf. This will enable the simulation of real systems by superposing the stress field surrounding the individual defects. As shown in table 3.1, the Stillinger Weber description of the vacancy is not quantitatively accurate. In order to obtain quantitatively accurate results a better description of silicon is necessary close to the vacancy. However, this improvement in model accuracy should not happen at the cost of a dramatic shrinkage of the system size. Two possibilities are available. One could use tight-binding which is more accurate than empirical potentials but not as computationally intensive as ab initio, the description of the vacancy is fair and the system can be simulated using thousands of atoms. This approach is not efficient because tight-binding is used far from the vacancy where an empirical potential would be good enough since only the correct elastic properties are needed. Another solution is then to use ab initio (or tight-binding) methods close to the vacancy, where the system is far from the equilibrium structure, and an empirical potential further away where computationally-intensive methods are not necessary. A full-scale simulation of diffusion in semiconductors would require data on other point defects, interstitials, substitutionals, vacancy-interstitial pairs and other defects [Goedecker et al. 2002] since all of these defects can exist and interact in the devices. The methodology we developed can be applied to these defects, with some modification. These methods may also be applicable to other kinds of crystal defects, such as dislocations [Shilkrot et al. 2002] or more generally to any material inhomogeneity leading to a singularity in the stress field. We applied this methodology to silicon, but it is general enough to be applied to other materials.
About this Journal Submit a Manuscript Table of Contents ISRN Optics Volume 2013 (2013), Article ID 783865, 51 pages Review Article Universal Dynamical Control of Open Quantum Systems Weizmann Institute of Science, 76100 Rehovot, Israel Received 25 March 2013; Accepted 24 April 2013 Academic Editors: M. D. Hoogerland, D. Kouznetsov, A. Miroshnichenko, and S. R. Restaino Due to increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness. Their consequence is always a deviation of the quantum state evolution (error) with respect to the expected unitary evolution if these effects are absent. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by strategies known as dynamical decoupling, or the more general dynamical control by modulation developed by us. The underlying dynamics must be Zeno-like, yielding suppressed coupling to the bath. There are, however, tasks which cannot be implemented by unitary evolution, in particular those involving a change of the system’s state entropy. Such tasks necessitate efficient coupling to a bath for their implementation. Examples include the use of measurements to cool (purify) a system, to equilibrate it, or to harvest and convert energy from the environment. If the underlying dynamics is anti-Zeno like, enhancement of this coupling to the bath will occur and thereby facilitate the task, as discovered by us. A general task may also require state and energy transfer, or entanglement of noninteracting parties via shared modes of the bath which call for maximizing the shared (two-partite) couplings with the bath, but suppressing the single-partite couplings. For such tasks, a more subtle interplay of Zeno and anti-Zeno dynamics may be optimal. We have therefore constructed a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given “score” that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy by dynamical modification of the system-bath coupling spectrum on demand. 1. Introduction Due to the ongoing trends of device miniaturization, increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness [170]. These may have different physical origins, such as coupling of the system to an external environment (bath), noise in the classical fields controlling the system, or population leakage out of a relevant system subspace. Their consequence is always a deviation of the quantum state evolution (error) with respect to the expected unitary evolution if these effects are absent. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by dynamical control. The underlying dynamics must be Zeno-like yielding suppressed coupling to the bath. 
Environmental effects generally hamper or completely destroy the “quantumness” of any complex device. Particularly fragile against environment effects is quantum entanglement (QE) in multipartite systems. This fragility may disable quantum information processing and other forthcoming quantum technologies: interferometry, metrology, and lithography. Commonly, the fragility of QE rapidly mounts with the number of entangled particles and the temperature of the environment (thermal “bath”). This QE fragility has been the standard resolution of the Schrödinger-cat paradox: the environment has been assumed to preclude macrosystem entanglement. In-depth study of the mechanisms of decoherence and their prevention is therefore an essential prerequisite for applications involving quantum information processing or communications [3]. The present paper aimed at furthering our understanding of these formidable issues. It is based on progress by our group, as well as others, towards a unified approach to the dynamical control of decoherence and disentanglement. This unified approach culminates in universal formulae allowing design of the required control fields. Most theoretical and experimental methods that aimed at assessing and controlling (suppressing) decoherence of qubits (two-level systems that are the quantum mechanical counterparts of classical bits) have focused on one of two particular situations: (a) single qubits decohering independently, or (b) many qubits collectively perturbed by the same environment. Thus, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71, 72]. Entangled qubits that reside at the same site or at equivalent sites of the system, for example, atoms in optical lattices, have likewise been assumed to undergo identical decoherence. By contrast, more general problems of decay of nonlocal mutual entanglement of two or more small systems are less well understood. This decoherence process may occur on a time scale much shorter than the time for either body to undergo local decoherence, but much longer than the time each takes to become disentangled from its environment. The disentanglement of individual particles from their environment is dynamically controlled by interactions on non-Markovian time-scales, as discussed below. Their disentanglement from each other, however, may be purely Markovian [7375], in which case the present non-Markovian approach to dynamical control/prevention is insufficient. 1.1. Dynamical Control of Single-Particle Decay and Decoherence on Non-Markovian Time Scales Quantum-state decay to a continuum or changes in its population via coupling to a thermal bath is known as amplitude noise (AN). It characterizes decoherence processes in many quantum systems, for example, spontaneous emission of photons by excited atoms [76], vibrational and collisional relaxation of trapped ions [1], and the relaxation of current-biased Josephson junctions [77]. Another source of decoherence in the same systems is proper dephasing or phase noise (PN) [78], which does not affect the populations of quantum states but randomizes their energies or phases. For independently decohering qubits, a powerful approach for the suppression of decoherence appears to be the “dynamical decoupling” (DD) of the system from the bath [7992]. 
The standard “bang-bang” DD, that is, -phase flips of the coupling via strong and sufficiently frequent resonant pulses driving the qubit [8284], has been proposed for the suppression of proper dephasing [93]. This approach is based on the assumption that during these strong and short pulses there is no free evolution; that is, the coupling to the bath is intermittent with control fields. These -pulses hence serve as a complete phase reversal, meaning that the evolution after the pulse negates the deleterious effects of dephasing prior to the pulse, similar to spin-echo technique [94]. However, some residual decoherence remains and increases with the interpulse time interval, and thus in order to combat decoherence effectively, the pulses should be very frequent. While standard DD has been developed for combating first-order dephasing, several extensions have been suggested to further optimize DD under proper dephasing, such as multipulse control [89], continuous DD [88], concatenated DD [90], and optimal DD [95, 96]. DD has also been adapted to suppress other types of decoherence couplings such as internal state coupling [91] and heating [84]. Our group has proposed a universal strategy of approximate DD [97103] for both decay and proper dephasing, by either pulsed or continuous wave (CW) modulation of the system-bath coupling. This strategy allows us to optimally tailor the strength and rate of the modulating pulses to the spectrum of the bath (or continuum) by means of a simple universal formula. In many cases, the standard -phase “bang-bang” (BB) is then found to be inadequate or nonoptimal compared to dynamic control based on the optimization of the universal formula [104]. Our group has purported to substantially expand the arsenal of decay and decoherence control. We have presented a universal form of the decay rate of unstable states into any reservoir (continuum), dynamically modified by perturbations with arbitrary time dependence, focusing on non-Markovian time-scales [97, 99, 100, 102, 105]. An analogous form has been obtained by us for the dynamically modified rate of proper dephasing [100, 101, 105]. Our unified, optimized approach reduces to the BB method in the particular case of proper dephasing or decay via coupling to spectrally symmetric (e.g., Lorentzian or Gaussian) noise baths with limited spectral width (see below). The type of phase modulation advocated for the suppression of coupling to phonon or photon baths with frequency cutoff [103] is, however, drastically different from the BB method. Other situations to which our approach applies, but not the BB method, include amplitude modulation of the coupling to the continuum, as in the case of decay from quasibound states of a periodically tilted washboard potential [99]: such modulation has been experimentally shown [106] to give rise to either slowdown of the decay (Zeno-like behavior) or its speedup (anti-Zeno-like behavior), depending on the modulation rate. The theory has been generalized by us to finite temperatures and to qubits driven by an arbitrary time-dependent field, which may cause the failure of the rotating-wave approximation [100]. It has also been extended to the analysis of multilevel systems, where quantum interference between the levels may either inhibit or accelerate the decay [107]. 
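A toy simulation may help illustrate why frequent pi-pulses suppress dephasing in the "bang-bang" limit. The sketch below is an added illustration, not taken from the cited works: it models proper dephasing by an Ornstein-Uhlenbeck frequency noise and by ideal, instantaneous pulses that invert the accumulated phase; the noise strength, correlation time, and trajectory counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(n_pulses, T=1.0, n_traj=2000, n_steps=400, sigma=4.0, tau_c=0.2):
    """Toy model of 'bang-bang' decoupling: a qubit dephases under Gaussian
    (Ornstein-Uhlenbeck) frequency noise of strength sigma and correlation time
    tau_c; n_pulses ideal, instantaneous pi-pulses flip the sign of the
    accumulated phase.  Returns the ensemble-averaged coherence Re<exp(i*phi)>."""
    dt = T / n_steps
    pulses = {int((j + 1) * n_steps / (n_pulses + 1)) for j in range(n_pulses)}
    acc = 0.0
    for _ in range(n_traj):
        omega = sigma * rng.normal()          # start in the stationary distribution
        phi, sign = 0.0, 1.0
        for s in range(n_steps):
            omega += -(omega / tau_c) * dt + sigma * np.sqrt(2 * dt / tau_c) * rng.normal()
            phi += sign * omega * dt
            if s in pulses:
                sign = -sign                  # pi-pulse: subsequent phase is negated
        acc += np.cos(phi)
    return acc / n_traj

for n in (0, 1, 4, 16):
    print(n, coherence(n))    # coherence typically increases with the pulse rate
```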
Our general approach [99] to dynamical control of states coupled to an arbitrary “bath” or continuum has reaffirmed the intuitive anticipation that, in order to suppress their decay, we must modulate the system-bath coupling at a rate exceeding the spectral interval over which the coupling is significant. Yet our analysis can serve as a general recipe for optimized design of the modulation aimed at an effective use of the fields for decay and decoherence suppression or enhancement. 1.2. Control of Symmetry-Breaking Multipartite Decoherence Control of multiqubit or, more generally, multipartite decoherence is of even greater interest, because it can help protect the entanglement of such systems, which is the cornerstone of many quantum information processing applications. However, it is very susceptible to decoherence, decays faster than single-qubit coherence, and can even completely disappear in finite time, an effect dubbed entanglement sudden death (ESD) [73, 74, 108113]. Entanglement is effectively protected in the collective decoherence situation, by singling out decoherence-free subspaces (DFS) [114], wherein symmetrically degenerate many-qubit states, also known as “dark” or “trapping” states [78], are decoupled from the bath [87, 115117]. Symmetry is a powerful means of protecting entangled quantum states against decoherence, since it allows the existence of a decoherence-free subspace or a decoherence-free subsystem [77, 78, 8087, 102, 114120]. In multipartite systems, this requires that all particles be perturbed by the same environment. In keeping with this requirement, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71]. Entangled states of two or more particles, wherein each particle travels along a different channel or is stored at a different site in the system, may present more challenging problems insofar as combating and controlling decoherence effects are concerned: if their channels or sites are differently coupled to the environment, their entanglement is expected to be more fragile and harder to protect. To address these fundamental challenges, we have developed a very general treatment. Our treatment does not assume the perturbations to be stroboscopic, that is, strong or fast enough, but rather to act concurrently with the particle-bath interactions. This treatment extends our earlier single-qubit universal strategy [97, 99, 100, 104, 121, 122] to multiple entangled systems (particles) which are either coupled to partly correlated (or uncorrelated) finite-temperature baths or undergo locally varying random dephasing [107, 123126]. Furthermore, it applies to any difference between the couplings of individual particles to the environment. This difference may range from the large-difference limit of completely independent couplings, which can be treated by the single-particle dynamical control of decoherence via modulation of the system-bath coupling, to the opposite zero-difference limit of completely identical couplings, allowing for multiparticle collective behavior and decoherence-free variables [86, 87, 115117, 127130]. 
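The two limiting cases just mentioned, completely identical couplings versus completely independent ones, can be illustrated with a short Monte Carlo for two qubits prepared in Bell states and subjected to pure dephasing. The Gaussian phase statistics and the rms phase below are assumptions made only for the demonstration; the point is that (|01> ± |10>)/√2 ride out identical (collective) dephasing untouched, i.e., they span a decoherence-free subspace, while every Bell state loses fidelity under independent dephasing.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj = 2000
sigma_phi = 1.0   # assumed rms of the accumulated single-qubit dephasing angle

# Bell states in the basis {|00>, |01>, |10>, |11>}
bell = {
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

def avg_fidelity(state, collective):
    """Average fidelity under pure dephasing: the |1> level of qubit j acquires a random phase phi_j."""
    fid = 0.0
    for _ in range(n_traj):
        phi1 = sigma_phi * rng.standard_normal()
        phi2 = phi1 if collective else sigma_phi * rng.standard_normal()
        phases = np.exp(1j * np.array([0.0, phi2, phi1, phi1 + phi2]))  # diagonal dephasing map
        fid += abs(np.vdot(state, phases * state)) ** 2
    return fid / n_traj

for name, psi in bell.items():
    print(f"{name}: collective bath {avg_fidelity(psi, True):.3f}, "
          f"independent baths {avg_fidelity(psi, False):.3f}")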
The general treatment presented here is valid anywhere between these two limits and allows us to pose and answer the key question: under what conditions, if any, is local control by modulation, addressing each particle individually, preferable to global control, which does not discriminate between the particles? We show that in the realistic scenario, where the particles are differently coupled to the bath, it is advantageous to locally control each particle by individual modulation, even if such modulation is suboptimal for suppressing the decoherence of a single particle. This local modulation allows synchronizing the phase-relation between the different modulations and eliminates the cross coupling between the different systems. As a result, it allows us to preserve the multipartite entanglement and reduces the multipartite decoherence problem to the single particle decoherence problem. We show the advantages of local modulation, over global modulation (i.e., identical modulation for all systems and levels), as regards the preservation of arbitrary initial states, preservation of entanglement, and the intriguing possibility of entanglement increase compared to its initial value. The experimental realization of a universal quantum computer is widely recognized to be difficult due to decoherence effects, particularly dephasing [1, 131133], whose deleterious effects on entanglement of qubits via two-qubit gates [134136] are crucial. To help overcome this problem, we put forth a universal dynamical control approach to the dephasing problem during all the stages of quantum computations [125, 137], namely, (i) storage, wherein the quantum information is preserved in between gate operations, (ii) single-qubit gates, wherein individual qubits are manipulated, without changing their mutual entanglement, and (iii) two-qubit gates, that introduce controlled entanglement. We show that in terms of reducing the effects of dephasing, it is advantageous to concurrently and specifically control all the qubits of the system, whether they undergo quantum gate operations or not. Our approach consists in specifically tailoring each dynamical quantum gate, with the aim of suppressing the dephasing, thereby greatly increasing the gate fidelity. In the course of two-qubit entangling gates, we show that cross dephasing can be completely eliminated by introducing additional control fields. Most significantly, we show that one can increase the gate duration, while simultaneously reducing the effects of dephasing, resulting in a total increase in gate fidelity. This is at odds with the conventional approaches, whereby one tries to either reduce the gate duration, or increase the coherence time. A general task may also require state and energy transfer [138], or entanglement [139] of noninteracting parties via shared modes of the bath [123, 140] which call for maximizing the shared (two-partite) couplings with the bath, but suppressing the single-partite couplings. It is therefore desirable to have a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given “score” that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy by dynamical modification of the system-bath coupling spectrum on demand. The goal of this work is to develop such a framework. 1.3. 
Dynamical Protection from Spontaneous Emission Schemes of quantum information processing that are based on optically manipulated atoms face the challenge of protecting the quantum states of the system from decoherence, or fidelity loss, due to atomic spontaneous emission (SE) [1, 141, 142]. SE becomes the dominant source of decoherence at low temperatures, as nonradiative (phonon) relaxation becomes weak [4, 5]. SE suppression cannot be achieved by frequent modulations or perturbations of the decaying state, because of the extremely broad spectrum of the radiative continuum (“bath”) [76, 97]. A promising means of protection from SE is to embed the atoms in photonic crystals (three-dimensionally periodic dielectrics) that possess spectrally wide, omnidirectional photonic bandgaps (PBGs) [6]: atomic SE would then be blocked at frequencies within the PBG [68]. Thus far, studies of coherent optical processes in a PBG have assumed fixed values of the atomic transition frequency [9]. However, in order to operate quantum logic gates, based on pairwise entanglement of atoms by field-induced dipole-dipole interactions [10, 143, 144], one should be able to switch the interaction on and off, most conveniently by AC Stark-shifts of the transition frequency of one atom relative to the other, thereby changing its detuning from the PBG edge. The question then arises: should such frequency shifts be performed adiabatically, in order to minimize the decoherence and maximize the quantum-gate fidelity? The answer is expected to be affirmative, based on the existing treatments of adiabatic entanglement and protection from decoherence [11, 12, 129] and on the tendency of nonadiabatic evolution to spoil fidelity and promote transitions to the continuum [13]. Surprisingly, our analysis (Section 6) demonstrates that only an appropriately phased sequence of “sudden” (strongly nonadiabatic) changes of the detuning from the PBG edge may yield higher fidelity of qubit and quantum gate operations than their adiabatic counterparts. This unconventional nonadiabatic protection from decoherence is valid for qubits that are strongly coupled to the continuum edge [14, 145], as opposed to the weak coupling approach in Sections 25. 1.4. Outline In this paper we develop, step by step, the framework for universal dynamical control by modulating fields of multilevel systems or qubits, aimed at suppressing or preventing their noise, decoherence, or relaxation in the presence of a thermal bath. Its crux is the general master equation (ME) of a multilevel, multipartite system, weakly coupled to an arbitrary bath and subject to arbitrary temporal driving or modulation. The present ME, derived by the technique [146, 147], is more general than the ones obtained previously in that it does not invoke the rotating wave approximation and therefore applies at arbitrarily short times or for arbitrarily fast modulations. Remarkably, when our general ME is applied to either AN or PN, the resulting dynamically controlled relaxation or decoherence rates obey analogous formulae provided that the corresponding density-matrix (generalized Bloch) equations are written in the appropriate basis. This underscores the universality of our treatment. It allows us to present a PN treatment that does not describe noise phenomenologically, but rather dynamically starting from the ubiquitous spin-boson Hamiltonian. 
In Sections 2 and 3, we present a universal formula for the control of single-qubit zero-temperature relaxation and discuss several limits of this formula. In Sections 4 and 5, we extend this formula to multipartite or multilevel systems. In Section 6 dynamical control in the strong coupling regime is considered. In Section 7, the treatment is extended to the control of finite-temperature relaxation and decoherence and culminates in single-particle Bloch equations with dynamically modified decoherence rates that essentially obey the universal formula of Section 3. We then discuss in Section 7.4 the possible modulation arsenal for either AN or PN control. In Section 8, we discuss the extensions of the universal control formula to entangled multipartite systems. The formalism is applicable in a natural and straightforward manner to such systems [123]. It allows us to focus on the ability of symmetries to overcome multipartite decoherence [87, 114117]. In Section 8, we discuss the implementations of the universal formula to multipartite quantum computation. Section 9 discusses some general aspects of multipartite dynamical control. We develop a general optimization strategy for performing a chosen unitary or nonunitary task on an open quantum system. The goal is to design a controlled time-dependent system Hamiltonian by variationally minimizing or maximizing a chosen function of the system state, which quantifies the task success (score), such as fidelity, purity, or entanglement. If the time dependence of the system Hamiltonian is fast enough to be comparable to or shorter than the response time of the bath, then the resulting non-Markovian dynamics is shown to optimize the chosen task score to second order in the coupling to the bath. This strategy can not only protect a desired unitary system evolution from bath-induced decoherence but also take advantage of the system-bath coupling so as to realize a desired nonunitary effect on the system. Section 10 summarizes our conclusions whereby this universal control can effectively protect complex systems from a variety of decoherence sources. 2. Modulation-Affected Control of Decay into Continua and Zero-Temperature Baths: Weak-Coupling Theory 2.1. Framework Consider the decay of a state via its coupling to a bath, described by the orthonormal basis , which forms either a discrete or a continuous spectrum (or a mixture thereof). The total Hamiltonian is Here is the dynamically modulated Hamiltonian of the system, with being the energy of . The time-dependent frequency can be attributed to the controllable dynamically imposed Stark shift, or to proper dephasing (uncontrolled, random fluctuation). The term is the time-dependent Hamiltonian of the bath, with being the energies of . The time-dependent frequencies , like , may arise from proper dephasing or dynamical Stark shifts. Finally denotes the off-diagonal coupling of with the continuum/bath, with being the dynamical modulation function and the system-bath coupling matrix elements. We write the wave function of the system as with the initial condition being A one-level system which can exchange its population with the bath states represents the case of autoionization or photoionization. However, the above Hamiltonian describes also a qubit, which can undergo transitions between the excited and ground states and , respectively, due to its off-diagonal coupling to the bath. The bath may consist of quantum oscillators (modes) or two-level systems (spins) with different eigenfrequencies. 
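In the notation just introduced, the total Hamiltonian described in words can be written schematically as follows. The explicit symbols (|e> for the decaying state, |j> for the bath states, \varepsilon(t) for the modulation function, \mu_j for the coupling matrix elements) are our shorthand for the quantities defined verbally above, so this should be read as a sketch of the structure rather than a verbatim reproduction of the original equations:

H(t) = H_S(t) + H_B(t) + H_I(t),

H_S(t) = \hbar\,\omega_e(t)\,|e\rangle\langle e|, \qquad H_B(t) = \sum_j \hbar\,\omega_j(t)\,|j\rangle\langle j|,

H_I(t) = \hbar\,\varepsilon(t) \sum_j \mu_j\, |e\rangle\langle j| + \mathrm{h.c.},

with the wave function expanded as

|\Psi(t)\rangle = \alpha(t)\, e^{-i\int_0^t \omega_e(t')\,dt'}\,|e\rangle + \sum_j \beta_j(t)\, e^{-i\int_0^t \omega_j(t')\,dt'}\,|j\rangle, \qquad \alpha(0)=1,\ \ \beta_j(0)=0.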
Typical examples are spontaneous emission into photon or phonon continua. In the rotating-wave approximation (RWA), which is alleviated in Section 7, the present formalism applies to a relaxing qubit, under the substitutions 3. Single-Qubit Zero-Temperature Relaxation To gain insight into the requirements of decoherence control, consider first the simplest case of a qubit with states and energy separation relaxing into a zero-temperature bath via off-diagonal () coupling, Figure 1(a). The Hamiltonian is given by the sum extending over all bath modes, where are the annihilation and creation operators of mode , respectively, with and denoting the bath vacuum and th-mode single excitation, respectively, and being the corresponding transition matrix element and . being Hermitian conjugate. We have also taken the rotating wave approximation (RWA). The general time-dependent state can be written as The Schrödinger equation results in the following coupled equations [83]: One can go to the rotating frame, define , , and get: where is the bath response/correlation function, expressible in terms of a sum over all transition matrix elements squared oscillating at the respective mode frequencies . Figure 1: (a) Schematic drawing of a two-level system with off-diagonal coupling to a continuum or a bath. (b) Schematic drawing of a bath comprised of many harmonic oscillators with different frequencies, whose temporal dephasing after correlation time renders the system-bath interaction practically irreversible. It is the spread of oscillation frequencies that causes the environment response to decohere after a (typically short) correlation time (Figure 1(b)). Hence, the Markovian assumption that the correlation function decays to instantaneously, , is widely used: it is in particular the basis for the venerated Lindblad’s master equation describing decoherence [148]. It leads to exponential decay of at the Golden Rule (GR) rate [76, 78] as We, however, are interested in the extremely non-Markovian time scales, much shorter than , on which all bath modes excitations oscillate in unison and the system-bath exchange is fully reversible. How does one probe, or, better still, maintain the system in a state corresponding to such time scales? To this end, we assume modulations of and , that result in the time-dependent modulation function , which has two components, namely, an amplitude modulation and phase modulation . The modulation function is related to in 3 via with This modulation may pertain to any intervention in the system-bath dynamics: (i) measurements that effectively interrupt and completely dephase the evolution, describable by stochastic [149], (ii) coherent perturbations that describe phase modulations of the system-bath interactions [99, 124]. For any , the exact equation (12) is then rewritten as We now resort to the crucial approximation that varies slower than either or . This approximation is justifiable in the weak-coupling regime (to second order in ), as discussed below. Under this approximation, (18) is transformed into a differential equation describing relaxation at a time-dependent rate as where is the instantaneous time-dependent relaxation rate and is the Lamb shift due to the coupling to the bath. One can separate the spectral representation of into the real and imaginary parts which satisfy the Kramers-Kronig relations with denoting the principal value. 
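The chain of steps just described can be summarized compactly. Writing \alpha(t) for the excited-state amplitude in the rotating frame, \Phi(t) for the bath response (correlation) function and G(\omega) for its spectrum, the equations take the familiar Wigner-Weisskopf-like form below; the notation is again ours, offered as a reconstruction consistent with the verbal definitions above:

\dot{\alpha}(t) = -\,\varepsilon(t)\int_0^t dt'\,\Phi(t-t')\,\varepsilon^*(t')\,\alpha(t'),

\Phi(t) = \sum_k |\mu_k|^2\, e^{i(\omega_a-\omega_k)t} = \int d\omega\; G(\omega)\, e^{i(\omega_a-\omega)t}.

In the Markovian limit, where \Phi(t) is taken to decay instantaneously, one recovers exponential decay at the Golden Rule rate,

R_{\mathrm{GR}} = 2\pi\, G(\omega_a), \qquad |\alpha(t)|^2 \simeq e^{-R_{\mathrm{GR}}\,t} \quad (t \gg t_c),

and under the slow-variation approximation for \alpha the same equation yields the instantaneous relaxation rate R(t) and Lamb shift \delta(t) referred to above, via \dot{\alpha} \simeq -\big[\tfrac{1}{2}R(t) + i\,\delta(t)\big]\,\alpha.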
Henceforth, we shall concentrate on the relaxation rate, as it determines the excited state population, where is the average relaxation rate. It is advantageous to consider the frequency domain, as it gives more insight into the mechanisms of decoherence. For this purpose, we define the finite-time Fourier transform of the modulation function as The average time-dependent relaxation rate can be rewritten, by using the Fourier transforms of and , in the following form: where is the spectral-response function of the bath, and is the finite-time spectral intensity of the (random or coherent) intervention/modulation function, where the factor comes about from the definition of the decoherence rate averaged over the interval. The relaxation rate described by (25)–(27) embodies our universal recipe for dynamically controlled relaxation [99, 124], which has the following merits: (a) it holds for any bath and any type of interventions, that is, coherent modulations and incoherent interruptions/measurements alike; (b) it shows that in order to suppress relaxation we need to minimize the spectral overlap of , given to us by nature, and , which we may design to some extent; (c) most importantly, it shows that in the short-time domain, only broad (coarse-grained) spectral features of and are important. The latter implies that, in contrast to the claim that correlations of the system with each individual bath mode must be accounted for, if we are to preserve coherence in the system, we actually only need to characterize and suppress (by means of ) the broad spectral features of , the bath response function. The universality of (25)–(27) will be elucidated in what follows, by focusing on several limits. 3.1. The Limit of Slow Modulation Rate If corresponds to sufficiently slow rates of interruption/modulation , the spectrum of is much narrower than the interval of change of around , the resonance frequency of the system. Then can be replaced by , so that the spectral width of plays no role in determining , and we may as well replace by a spectrally finite, flat (white-noise) reservoir; that is, we may take the Markovian limit. The result is that (25) coincides with the Golden Rule (GR) rate, (14) (Figure 2(a)) as Namely, slow interventions do not affect the onset and rate of exponential decay. Figure 2: Frequency-domain representation of the dynamically controlled decoherence rate in various limits (Section 7). (a) Golden Rule limit. (b) Quantum Zeno effect (QZE) limit. (c) Anti-Zeno effect (AZE) limit. Here, and are the modulation and bath spectra, respectively, and are the interval of change and width of , respectively, and is the interruption rate. 3.2. The Limit of Frequent Modulation Frequent interruptions, intermittent with free evolution, are represented by a repetition of the free-evolution modulation spectrum where being the time-interval between consecutive interruptions. If describes extremely frequent interruptions or measurements , is much broader than . We may then pull out of the integral, whereupon (25) yields This limit is that of the quantum Zeno effect (QZE), namely, the suppression of relaxation as the interval between interruptions decreases [150152]. In this limit, the system-bath exchange is reversible and the system coherence is fully maintained (Figure 2(b)). Namely, the essence of the QZE is that sufficiently rapid interventions prevent the excitation escape to the continuum, by reversing the exchange with the bath. 3.3. 
Intermediate Modulation Rate In the intermediate time-scale of interventions, where the width of is broader than the width of (so that the Golden Rule is violated) but narrower than the width of (so that the QZE does not hold), the overlap of and grows as the rate of interruptions, or modulations, increases. This brings about the increase of relaxation rates with the rate of interruptions, marking the anti-Zeno effect (AZE) [85, 102, 153] (Figure 2(c)). On such time-scales, more frequent interventions (in particular, interrupting measurements) enhance the departure of the evolution from reversibility. Namely, the essence of the AZE is that if you do not intervene in time to prevent the excitation escape to the continuum, then any intervention only drives the system further from its initial state. We note that the AZE can only come about when the peaks of and do not overlap, that is, the resonant coupling is shifted from the maximum of . If, by contrast, the peaks of and do coincide, any rate of interruptions would result in QZE (Figure 2(b)). This can be understood by viewing as an averaging kernel of around . If is the maximum of the spectrum, any averaging can only be lower than this maximum, which is the Golden Rule decay rate. Hence, any rate of interruptions can only decrease the decay rate with respect to the Golden Rule rate, that is, cause the QZE. 3.4. Quasiperiodic Amplitude and Phase Modulation (APM) The modulation function can be either random or regular (coherent) in time, as detailed below. Consider first the most general coherent amplitude and phase modulation (APM) of the quasiperiodic form, Here () are arbitrary discrete frequencies with the minimum spectral distance . If is periodic with the period , then and become the Fourier components of . For a general quasiperiodic , one obtains Here equals the average of over a period of the order of , , and , whereas is a bell-like function of normalized to 1. For a sufficiently long time, the function becomes narrower than the respective characteristic width of around , and one can set Thus, when where is the effective correlation (memory) time of the reservoir, (25) is reduced to For the validity of (37), it is also necessary that This condition is well satisfied in the regime of interest, that is, weak coupling to essentially any reservoir, unless (for some harmonic ) is extremely close to a sharp feature in , for example, a band edge [145], a case covered by Section 6. Otherwise, the long-time limit of the general decay rate (25) under the APM is a sum of the GR rates, corresponding to the resonant frequencies shifted by , with the weights . Formula (37) provides a simple general recipe for manipulating the decay rate by APM. Its powerful generality allows for the optimized control of decay, not only for a single level but also for a band characterized by a spectral distribution (e.g., inhomogeneous or vibrational spectrum). We can then choose and in (37) so as to minimize the decay convoluted with . In what follows, various limits of (37) will be analyzed. 3.5. Coherent Phase Modulation (PM) 3.5.1. Monochromatic Perturbation Let Then where is a frequency shift, induced by the ac Stark effect (in the case, e.g., of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress relative to . 
It provides the maximal variation of achievable by an external perturbation, since it does not involve any averaging (smoothing) of incurred by the width of : the modified can even vanish, if the shifted frequency is beyond the cut-off frequency of the coupling, where . Conversely, the increase of due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, ac Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used. 3.5.2. Impulsive Phase Modulation Let the phase of the modulation function periodically jump by an amount at times . Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed frequency shifts . Now where is the integer part. One then obtains that The decay, according to (22), has then the form (at ) where is defined by (25). For sufficiently long times For small phase shifts, , the peak dominates, whereas In this case, one can retain only the term in (37) (unless is changing very fast). Then the modulation acts as a constant shift With the increase of , the difference between the and peak heights diminishes, vanishing for . Then that is, for contains two identical peaks symmetrically shifted in opposite directions (the other peaks decrease with as , totaling 0.19). The above features allow one to adjust the modulation parameters for a given scenario to obtain an optimal decrease or increase of . The phase-modulation (PM) scheme with a small is preferable near the continuum edge, since it yields a spectral shift in the required direction (positive or negative). The adverse effect of peaks in then scales as and hence can be significantly reduced by decreasing . On the other hand, if is near a symmetric peak of , is reduced more effectively for , as in [80, 81], since the main peaks of at and then shift stronger with than the peak at for . 3.6. Amplitude Modulation (AM) Amplitude modulation (AM) of the coupling arises, for example, for radiative-decay modulation due to atomic motion through a high- cavity or a photonic crystal [154, 155] or for atomic tunneling in optical lattices with time-varying lattice acceleration [106, 156]. Let the coupling be turned on and off periodically, for the time and , respectively, that is, (). Now [157] so that (see (43)) where is given by (25) and (50). This case is also covered by (37) and (38), where the parameters are now found to be with It is instructive to consider the limit wherein and is much greater than the correlation time of the continuum; that is, does not change significantly over the spectral intervals . In this case, one can approximate the sum (37) by the integral (25) with characterized by the spectral broadening ~1. Then (25) for reduces to that obtained when ideal projective measurements are performed at intervals [97]. Thus the AM scheme can imitate measurement-induced (dephasing) effects on quantum dynamics, if the interruption intervals exceed the correlation time of the continuum. The decay probability , calculated for parameters similar to [106], completely coincides with that obtained for ideal impulsive measurements at intervals [97, 98, 101] and demonstrates either the quantum Zeno effect (QZE) or the anti-Zeno effect (AZE) behavior, depending on the rate of modulation. 
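The statement that measurement-like (on-off) modulation yields either Zeno or anti-Zeno behaviour depending on its rate can be checked directly from the overlap form of the rate, R ≈ 2π ∫ dω G(ω) F(ω). The sketch below assumes a Lorentzian bath spectrum detuned from the qubit frequency and uses the sinc-squared spectrum appropriate to ideal projective interruptions at intervals τ; all numbers are chosen for illustration only.

```python
import numpy as np

# Assumed Lorentzian bath spectrum G(w), peaked at w_max, detuned from the qubit frequency w_a.
w_a, w_max, gamma, g2 = 0.0, 5.0, 1.0, 1.0
w = np.linspace(-200.0, 200.0, 400001)
dw = w[1] - w[0]
G = (g2 / np.pi) * gamma / ((w - w_max) ** 2 + gamma ** 2)

def rate(tau):
    """R = 2*pi * integral of G(w)*F(w); F is the sinc^2 spectrum of projective interruptions at intervals tau."""
    x = (w - w_a) * tau / 2.0
    F = (tau / (2.0 * np.pi)) * np.sinc(x / np.pi) ** 2   # np.sinc(y) = sin(pi*y)/(pi*y)
    return 2.0 * np.pi * np.sum(G * F) * dw

R_GR = 2.0 * np.pi * G[np.argmin(np.abs(w - w_a))]        # Golden Rule reference value
print(f"Golden Rule rate            : {R_GR:.4f}")
for tau in [100.0, 0.3, 0.01]:
    print(f"interruptions at tau = {tau:6.2f}: R = {rate(tau):.4f}")
```

With these assumed parameters, very slow interruptions reproduce the Golden Rule value, an intermediate rate overshoots it (anti-Zeno), and very frequent interruptions suppress the rate towards zero (Zeno). The impulsive phase modulation of the preceding subsection can be handled the same way once its harmonic weights are known. Taking ε(t) = exp[iφ⌊t/τ⌋] as a plausible reading of the verbal description, the weights can be computed numerically; reassuringly, for φ = π this gives two equal dominant peaks of height 4/π² ≈ 0.405 each, with the remaining weight ≈ 0.19, matching the figure quoted above.

```python
import numpy as np

def harmonic_weights(phi, kmax=50, n=20000):
    """Weights |eps_k|^2 of eps(t) = exp(i*phi*floor(t/tau)); harmonic k sits 2*pi*k/tau
    (plus a fixed phase-dependent shift) away from the unmodulated resonance."""
    u = (np.arange(n) + 0.5) / n                      # one period of the fractional part of t/tau
    periodic = np.exp(-1j * phi * u)                  # the periodic factor of eps(t)
    ks = np.arange(-kmax, kmax + 1)
    c = np.array([np.mean(periodic * np.exp(-2j * np.pi * k * u)) for k in ks])
    return ks, np.abs(c) ** 2

for phi in [0.1, np.pi / 2, np.pi]:
    ks, wts = harmonic_weights(phi)
    top = ", ".join(f"k={ks[i]}: {wts[i]:.3f}" for i in np.argsort(wts)[::-1][:3])
    print(f"phi = {phi:.3f}: total weight {wts.sum():.3f}; dominant peaks -> {top}")
```

The long-time rate is then a weighted sum of Golden Rule rates evaluated at the harmonically shifted frequencies, with these |ε_k|² as weights, in the spirit of the quasiperiodic formula quoted earlier.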
Since the Hamiltonian for atoms in accelerated optical lattices is similar to the Legett Hamiltonian for current-biased Josephson junctions [77], the present theory has been extended to describe effects of current modulations on the rate of macroscopic quantum tunneling in Josephson junctions in [100]. Projective measurements at an effective rate , whether impulsive or continuous, usually result in a broadened (to a width ) modulation function , without a shift of its center of gravity [97, 98, 101, 158, 159], This feature was shown in [97] to be responsible for either the standard quantum Zeno effect whereby scales as or the anti-Zeno effect whereby grows with . In contrast, a weak and broadband chaotic field, such that where is the mean intensity, is the bandwidth, and is the effective polarizability (electric or magnetic, depending on the system), would give rise to a Lorentzian dephasing function with a substantial shift This shift would have a much stronger effect on than the QZE or AZE, which are associated with the rate , since 4. Multipartite Decay Control 4.1. Multipartite PN Control by Resonant Modulation One can describe phase noise, or proper dephasing, by a stochastic fluctuation of the excited-state energy, , where is a stochastic variable with zero mean, and is the second moment. For multipartite systems, where each qubit can undergo different proper dephasing, , one has an additional second moment for the cross dephasing, . A general treatment of multipartite systems undergoing this type of proper dephasing is given in [107]. Here we give the main results for the case of two qubits. Let us take two TLS, or qubits, which are initially prepared in a Bell state. We wish to obtain the conditions that will preserve it. In order to do that, we change to the Bell basis, which is given by For an initial Bell-state , where , one can then obtain the fidelity, , as where where is the amplitude of the resonant field applied on qubit , , and the corresponds to and to . Expressions (61)–(67) provide our recipe for minimizing the Bell-state fidelity losses. They hold for any dephasing time-correlations and arbitrary modulation. One can choose between two modulation schemes, depending on our goals. When one wishes to preserve and initial quantum state, one can equate the modified dephasing and cross dephasing rates of all qubits, . This results in complete preservation of the singlet only, that is, , for all , but reduces the fidelity of the triplet state. On the other hand, if one wishes to equate the fidelity for all initial states, one can eliminate the cross dephasing terms, by applying different modulations to each qubit (Figure 3), causing for all . This requirement can be important for quantum communication schemes. Figure 3: Cross decoherence as a function of local modulation. Here two qubits are modulated by continuous resonant fields, with amplitudes . The cross decoherence decays as the two qubits’ modulations become increasingly different. The bath parameters are , where is the correlation time, and . 5. Dynamical Control of Zero-Temperature Decay in Multilevel Systems 5.1. General Formalism Here we discuss in detail a model for dynamical decay modifications in a multilevel system. The system with energies , , is coupled to a zero-temperature bath of harmonic oscillators with frequencies . 
Using the factorized coupling defined in Section 2.1, the corresponding Hamiltonian is found to be as in 1, where where now each level has a different modulation and a different coupling to the bath and denotes a gate operation. The system evolution is divided into two phases, one of storage without gate operations and a gate operation of finite duration The full wave function is given by Similarly to what was said in Section 2.1, one can consider two types of situations. The above equations (68)–(72) were written for an -level system which can exchange its population with the reservoir. In addition, one can consider an -level system, where transitions are possible between any level and a lower level , the reservoir consisting of quantum systems, as described in Section 2.1. The theory in Section 5 holds for both situations, with the minor difference that one should substitute as in (70) and (72) and perform a similar substitution in (76) below. In order to find the solution, one has to diagonalize the system hamiltonian by introducing a matrix that rotates the amplitudes as such that, by defining , one gets where are the eigenvalues of the new rotated system. Thus the transformed wave function becomes Using these rotated state amplitudes, a procedure similar to that used for one level, one finds that they obey the following integrodifferential equations, assuming slowly varying as Here, the and matrices are given by with and being the modulation and reservoir-response matrices, respectively, given by where During the storage phase, one has , and , and during the gate-operation phase, , , and . The solution to (77) is of the form To simplify the analysis, one can define the fluence and the modulation spectral matrices as The relevant imaginary parts of the spectral response of the reservoir can be expressed, analogously to (20) and (21), by the Kramers-Kronig relations Defining we shall now represent in different regimes (phases). (i) As a reference, it is important to consider the decoherence effects with no modulations at all, that is, . In this case, one obtains a diagonal decoherence matrix This means that interference of decaying levels and cancels out in the long time limit, and the decoherence is without cross relaxation. (ii) During the storage phase, (84) results in One can easily see that for the off-diagonal terms, a simple separation into decay rates and energy shifts is inapplicable in this formulation. (iii) During gate operations, (84) assumes the form In a more compact and enlightening form, one can rewrite this equation as , where is given in (86). 6. The Strong-Coupling Regime: Decay Control Near Continuum Edge by Nonadiabatic Interference The analysis expounded thus far has been based on a perturbative treatment of the system-bath coupling. Here, we address the regime of strong system-bath coupling, as in the case of a resonance frequency very near to the continuum edge, a situation that may be encountered in atomic excitation near the ionization energy, vibrational excitation frequency in a solid near the Debye cutoff, or an atomic excitation in a photonic crystal near a photonic bandgap. In the strong-coupling regime, it is advantageous to work in the combined basis of the system (qubit) and field (bath) states that incorporate the system-bath interaction. Dynamical control of the decay can then be analysed by exact solution of the Schrödinger equation in this basis. 
Analytical expressions are obtainable for alternating static evolutions with different parameters (e.g., resonant frequency), the dynamical control resulting from their interference. Specifically, we shall consider optical manipulations of atoms embedded in photonic crystals with atomic transition frequencies near a photonic bandgap (PBG), that is, near the edge of the photonic mode continuum, where the qubit is strongly coupled to the continuum, and spontaneous emission (SE) is only partially blocked, because an initially excited atom then evolves into a superposition of decaying and stable states, the stable state representing photon-atom binding [14, 145]. In what follows we shall demonstrate the ability of appropriately alternating sudden changes of the detuning to augment the interference of the emitted and back-scattered photon amplitudes, thereby increasing the probability amplitude of the stable (photon-atom bound) state. As a result, phase-gate operations affected by dipole-dipole interactions can be performed with higher fidelity than in the case of adiabatic frequency change. 6.1. Hamiltonian and Equations of Motion We consider a two-level atom with excited and ground states and coupled to the field of a discrete (or defect) mode and to the photonic band structure (PBS) in a photonic crystal. The hamiltonian of the system in the rotating-wave approximation assumes the form [145] Here, is the energy of the atomic transition frequency, and are, respectively, the creation and annihilation operators of the field mode at frequency , is the mode density of the PBS, and and are the coupling rates to the atomic dipole of a mode from the continuum and the discrete mode, respectively. Let us first consider the initial state obtained by absorbing a photon from the discrete mode as where is the vacuum state of the field. Then the evolution of the wavefunction has the general form where we have denoted by and the single-photon state of the relevant modes. The Schrödinger equation then leads to the set of coupled differential equations This evolution reflects the interplay between the off-resonant Rabi oscillations of and , at the driving rate , and the partly inhibited oscillatory decay from to via coupling to the continuum . This decay depends on the detuning of from the continuum edge at (the upper cutoff of the PBG). For a spectrally steep edge (see below), we are in the regime of strong coupling to the mode continuum (as in a high-Q cavity [8]) which allows for the existence of an oscillatory, nondecaying, component of , associated with a photon-atom bound state [7, 145]. 6.2. Periodic Sudden Changes of the Detuning Let us now introduce abrupt changes of , that is, of the detuning from the upper cutoff, , of the PBG (by fast AC-Stark modulations as discussed below), at intervals . In the sudden-change approximation for , the amplitudes of the excited state, the discrete mode and the continuum still evolve according to (91), except that from to the atomic transition frequency is , that is, the detuning , while for , we have , that is, . This dynamics leads to the relation Here, and are solutions of (91) with a static (fixed) atomic transition frequency, or . However, the initial condition at the instant of the frequency change from to is no longer the excited state (89) but the superposition In other words, the dynamics is equivalent to two successive static evolutions, the second one starting from initial conditions . 
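The composition of static evolutions described above is easy to experiment with numerically. The toy script below discretizes a mode continuum with a sharp lower edge, strongly couples an initially excited qubit to it, and compares a fixed detuning with periodic sudden switches between two detunings. The discretization, couplings and detunings are all assumptions made for the sake of the sandbox; whether switching helps or hurts depends on those choices and on the ordering of the detunings, which is precisely the interference effect at issue.

```python
import numpy as np
from scipy.linalg import expm

# Toy single-excitation model: one qubit strongly coupled to a mode continuum with a sharp lower edge.
N, edge, W, g_tot = 300, 0.0, 10.0, 1.5
omega_k = edge + W * (np.arange(N) + 0.5) / N      # discretized modes above the edge
g_k = np.full(N, g_tot / np.sqrt(N))               # equal couplings, fixed total strength

def propagator(delta, tau):
    """exp(-i*H*tau) in the basis {|e,vac>, |g,1_k>}, for qubit frequency edge + delta."""
    H = np.zeros((N + 1, N + 1))
    H[0, 0] = edge + delta
    H[0, 1:] = g_k
    H[1:, 0] = g_k
    H[np.arange(1, N + 1), np.arange(1, N + 1)] = omega_k
    return expm(-1j * H * tau)

def excited_population(detunings, tau, n_seg):
    """Evolve the initially excited qubit through n_seg segments of duration tau, cycling the detunings."""
    U = [propagator(d, tau) for d in detunings]
    psi = np.zeros(N + 1, dtype=complex)
    psi[0] = 1.0
    for s in range(n_seg):
        psi = U[s % len(U)] @ psi
    return abs(psi[0]) ** 2

tau, n_seg = 0.5, 40
print("final P_e, static detuning  :", round(excited_population([2.0], tau, n_seg), 3))
print("final P_e, sudden switching :", round(excited_population([2.0, 0.2], tau, n_seg), 3))
```

The analytic treatment of the same piecewise evolution proceeds via Laplace transforms, as described next.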
Using the Laplace transform of the system (91) with the initial condition (93), it is possible to express the dynamic amplitude of the excited state after the sudden change as where we have used the initial conditions and the solution of (91) for the initial condition (89). There is an advantageous feature to the sudden change: since the time dependence of in (92) arises from the static amplitudes , , and at the shifted time , a consequence of the sudden change is to revive the excited-state population oscillations, which tend to disappear at long times in the static case. Hence, by applying several successive sudden changes, we should be able to maintain large-amplitude oscillations of the coherence between and . The scenario leading to the largest amplitude consists in periodic shifts of the energy detuning from to . When the initial detuning is large and we first reduce it to before it increases to , the dynamic population and the coherence, thanks to the revival of oscillations, are periodically larger than the static ones. This remarkable result occurs unexpectedly: it implies that successive abrupt changes can reverse the decay to the continuum, even though they cannot be associated with the Zeno effect: they occur at intervals much longer than the correlation (Zeno) time of the radiative continuum, which is utterly negligible ( s) [97], or even longer than the static-oscillation half period. The fact that this happens only for the rather “counter-intuitive” ordering of detuning values (from large to small then back again) is a manifestation of interference between successive static evolutions: their relative phases determine the beating between the emitted and reabsorbed (back-scattered) photon amplitudes and thereby the oscillation of . Let us now consider the initial superposition and a nonnegligible coupling constant . In this case, the periodic dynamic population of the excited state also strongly exceeds the static one. Most importantly, the instantaneous dynamic fidelity is periodically enhanced as compared to the static one, as demonstrated numerically. In order to use these results for quantum logic gates, let us consider the example of the dipole-dipole induced control-phase gate, which consists in shifting the phase of the target-qubit excited state by via interaction with the control qubit [10, 143, 144]. The phase shift must be accumulated gradually, to preserve the coherence of the system. We have found that ten or twenty sudden shifts of or , respectively, alternating with appropriate detuning changes, can keep the fidelity high, with little decoherence. The system begins to evolve following the “counter-intuitive” detuning sequence discussed above (not to be confused with the adiabatic STIRAP method [11, 12, 129]). As soon as two sudden changes of the detuning have been performed, the conditional phase shift of or takes place and the process is further repeated. The total gate operation is completed within the time interval of maximum fidelity. The fidelity of the system relative to its initial state during the realization of a control phase gate, with alternating detunings, is perhaps our most impressive finding. We find that the fidelity is increased using the “counterintuitive” sequence of detunings (solid line) as compared to the static (fixed) choice of maximal detuning (long-dashed line), or compared to the dynamically enhanced fidelity obtained without gate operations (dot-dashed line). 6.3. 
Comparison with the Weak-Coupling Regime We have compared the results of this method, which allows for possibly strong coupling of with the continuum edge, with those of the universal formula of Section 2 (25), which expresses the decay rate of by the convolution of the modulation spectrum and the PBS coupling spectrum. We find good agreement with this formula only in the regime of weak coupling to the PBG edge, when the dimensionless detuning parameter , as expected from the limitations of the theory in Section 2. 6.4. Experimental Scenario The following experimental scenario may be envisioned for demonstrating the proposed effect: pairs of qubits are realizable by two species of active rare-earth dopants [17, 18] or quantum dots in a photonic crystal. The transition frequency of one species is initially detuned by from the PBG edge with coupling constant and by ~3 MHz from the resonance of the other species. This is abruptly modulated by nonresonant laser pulses which exert ~3 MHz AC Stark shifts. Between successive shifts, the qubits are near resonant with their neighbours and therefore become dipole-dipole coupled, thus affecting the high-fidelity phase-control gate operation [10, 143, 144]. The required pulse rate is , much lower than the pulse rate stipulated under similar conditions by previously proposed strategies [81, 99, 118]. 7. Finite-Temperature Relaxation and Decoherence Control So far we have treated the case of an empty (zero-temperature) bath. In order to account for finite-temperature situations, where the bath state is close to a thermal (Gibbs) state, we resort to a master equation (ME) for any dynamically controlled reduced density matrix of the system [100, 124] that we have derived using the Nakajima-Zwanzig formalism [70, 146, 147, 160]. This ME becomes manageable and transparent under the following assumptions. (i) The weak-coupling limit of the system-bath interaction prevails, corresponding to the neglect of terms. This is equivalent to the Born approximation, whereby the back effect of the system on the bath and their resulting entanglement are ignored. (ii) The system and the bath states are initially factorisable. (iii) The initial mean value of vanishes. We present the general form of the Nakajima-Zwanzig formalism and resort to the aforementioned assumptions only when necessary. Hence, the formalism may seem cumbersome, yet it can be simplified greatly if the assumptions are made from the outset (see [70]). 7.1. Explicit Equations for Factorisable Interaction Hamiltonians We now wish to write the ME explicitly for time-dependent Hamiltonians of the following form [100]: where and are the system and bath Hamiltonians, respectively, and , the interaction Hamiltonian, is the product of operators and which act on the system and bath, respectively. Finally, defining the correlation function for the bath, we obtain the ME for in the Born approximation as We focus on two regimes: a two-level system coupled to either an amplitude- or phase-noise (AN or PN) thermal bath. The bath Hamiltonian (in either regime) will be explicitly taken to consist of harmonic oscillators and be linearly coupled to the system Here are the annihilation and creation operators of mode , respectively, and is the coupling amplitude to mode . 7.1.1. Amplitude-Noise Regime We first consider the AN regime of a two-level system coupled to a thermal bath. We will use off-resonant dynamic modulations, resulting in AC-Stark shifts. 
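Before writing out the AN and PN Hamiltonians, it may help to display the generic second-order (Born) master equation being invoked. This is the standard textbook form, written in our own notation (tildes for interaction-picture operators, \rho_B for the thermal Gibbs state of the bath), and is meant as a sketch of the structure rather than the exact expression of the original derivation:

\dot{\rho}_S(t) = -\frac{i}{\hbar}\,\big[H_S(t),\rho_S(t)\big] \;-\; \frac{1}{\hbar^{2}} \int_0^{t} dt'\;\mathrm{Tr}_B\Big[\tilde H_I(t),\big[\tilde H_I(t'),\,\rho_S(t)\otimes\rho_B\big]\Big],

which, for a factorized interaction H_I(t) = S(t)\otimes B, is governed entirely by the bath autocorrelation function

\Phi_T(t-t') = \mathrm{Tr}_B\big[\tilde B(t)\,\tilde B(t')\,\rho_B\big].

Only \rho_S(t), not \rho_S(t'), appears under the integral, which is the convolutionless, non-Markovian-to-second-order structure emphasized below. Specializing this structure to the amplitude-noise and phase-noise regimes requires the explicit Hamiltonians.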
The Hamiltonians then assume the following form: where is the dynamical AC-Stark shifts, is the time-dependent modulation of the interaction strength, and the Pauli matrix . 7.1.2. Phase-Noise Regime Next, we consider the PN regime of a two-level system coupled to a thermal bath via operator. To combat it, we will use near-resonant fields with time-varying amplitude as our control. The Hamiltonians then assume the following forms: where is the time-dependent resonant field, with real envelope , is the time-dependent modulation of the interaction strength, and . Since we are interested in dephasing, phases due to the (unperturbed) energy difference between the levels are immaterial. 7.2. Universal Master Equation To derive a universal ME for both amplitude- and phase-noise scenarios, we move to the interaction picture and rotate to the appropriate diagonalizing basis, where the appropriate basis for the AN case of (100) is while for the PN case of (102) the basis is In this rotated and tilted frame, where is the phase-modulation due to the time-dependent control in the system Hamiltonian. Allowance for arbitrary time-dependent intervention in the system and interaction dynamics , , respectively, yields the following universal ME for a dynamically controlled decohering system [100, 124]: Here is the modulated interaction operator, where denotes the rotated and tilted frame, and . The modulation function is given by for both AN and PN. It is important to note that is a function of    (not of ): this convolutionless form of the ME is fully non-Markovian to second order in , as proven exactly in [124]. 7.3. Universal Modified Bloch Equations The resulting modified Bloch equations, in the appropriate diagonalizing basis (see (104) for AN and (105) for PN), are given by The time-dependent relaxation rates are real, and the only difference between them is the complex conjugate of the combined modulation function, . They can be very different for a complex correlation function. One can derive the corresponding time-averaged relaxation rates of the upper and lower states as For both AN (see (100)) and PN (see (102)), where is the zero-temperature bath spectrum, and are the frequency-dependent density of bath modes and the transition matrix element, respectively, is the temperature-dependent bath mode population, and is the inverse temperature. Also, is the Heaviside function, that is, the zero-temperature bath spectrum is defined only for positive frequencies . Hence, the first right-hand side of (114) is nonzero for positive frequencies and the second right-hand side is nonzero for negative frequencies. For either AN or PN, we may control the decoherence by either off-resonant or near-resonant modulations, respectively. The modulation spectrum has the same form for both (see Section 7.4) as where the modulation function is given in (107) and (109). The time-dependent modulation phase factor is obtained for AN in the form of an AC-Stark shift, time-integrated over where is the Rabi frequency of the control field and is the detuning. The corresponding phase factor for PN is the integral of the Rabi frequency , that is, the pulse area of the resonant control field, (107) (Figure 4). Figure 4: Schematic drawing of system and bath. (a) Amplitude noise (AN) (red) combatted by AC-Stark shift modulation (green). (b) Phase noise (PN) (red) combatted by resonant-field modulation (green). 
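The finite-temperature bath spectrum and the resulting upward and downward rates described above can be illustrated numerically. The sketch below assumes an ohmic spectral density with an exponential cutoff and the sinc-squared modulation spectrum of ideal interruptions at intervals τ; the parameter values and the specific forms of G_0(ω) and F(ω) are assumptions for the demonstration (ħ = k_B = 1).

```python
import numpy as np

# Assumed ohmic zero-temperature spectrum with exponential cutoff; illustrative values throughout.
kappa, w_c, T, w_a = 0.1, 10.0, 1.0, 2.0

def G0(w):
    return np.where(w > 0, kappa * w * np.exp(-w / w_c), 0.0)

def nbar(w):
    x = np.clip(np.abs(w) / T, 1e-9, None)   # thermal occupation, guarded at w = 0
    return 1.0 / np.expm1(x)

def GT(w):
    # w > 0: spontaneous + stimulated emission part; w < 0: absorption from the thermal bath
    return np.where(w > 0, G0(w) * (nbar(w) + 1.0), G0(-w) * nbar(w))

w = np.linspace(-400.0, 400.0, 800001)
dw = w[1] - w[0]

def rates(tau):
    """Downward (R_e) and upward (R_g) rates for projective interruptions at intervals tau."""
    F = (tau / (2.0 * np.pi)) * np.sinc((w - w_a) * tau / (2.0 * np.pi)) ** 2
    R_e = 2.0 * np.pi * np.sum(GT(w) * F) * dw
    R_g = 2.0 * np.pi * np.sum(GT(-w) * F) * dw
    return R_e, R_g

for tau in [50.0, 0.05]:
    R_e, R_g = rates(tau)
    print(f"tau = {tau:5.2f}: R_e = {R_e:.4f}, R_g = {R_g:.4f}, "
          f"ratio = {R_g / R_e:.3f} (Boltzmann factor exp(-w_a/T) = {np.exp(-w_a / T):.3f})")
```

With these assumed numbers, slow interruption reproduces detailed balance, R_g/R_e ≈ exp(-ω_a/T), while ultrafast interruption makes the upward and downward rates comparable; this is the breakdown of the rotating-wave picture discussed below, in which downward transitions may be accompanied by absorption and upward ones by emission. The same overlap structure applies to either AN or PN once the appropriate modulation spectrum is inserted.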
Hence, upon making the appropriate substitutions, the Bloch equations (110) have the same universal form for either AN or PN. An arbitrary combination of AN and PN requires a more detailed treatment, yet the universal form is maintained. 7.3.1. Dynamically Modified Decay Rates Since we are interested here in dynamical control of relaxation, we shall concentrate on the transition rates rather than the level shifts. The average rate of the transition and its counterpart are given by Here the upper (lower) sign corresponds to the subscript , and can be shown [161] to be nonnegative, with , and vanishes for at : . For the oscillator bath, one finds that where and is the average number of quanta in the oscillator (bath mode) with frequency . We apply (118) to the case of coherent modulation of quasiperiodic form, (see (31)). Without a limitation of the generality, we can assume that . We then find, using (118), that the rates tend to the long-time limits where or Equation (121) shows that is given by the overlap of the modulation spectrum with the bath-CF spectrum . The limits (123) are approached when and . Here is the bath memory (correlation) time, defined as the inverse of , the spectral interval over which changes around the relevant frequencies. Had we used the standard dipolar RWA hamiltonian in the case of an oscillator bath, dropping the antiresonant terms in , we would have arrived at the transition rates wherein the integration is performed from 0 to , rather than from to , as in (121). This means that the RWA transition rates hold for a slow modulation, when at , being peaked near . However, whenever the suppression of requires modulation at a rate comparable to , the RWA is inadequate. For instance, (120) and (124) imply that, at , the rate vanishes identically, irrespective of , in contrast to the true upward-transition rate in (121), which may be comparable to for ultrafast modulation. The difference between the RWA and non-RWA decay rates stems from the fact that the RWA implies that a downward (upward) transition is accompanied by emission (absorption) of a bath quantum, whereas the non-RWA (negative-frequency) contribution to in (121) allows for just the opposite: downward (upward) transitions that are accompanied by absorption (emission). The latter processes are possible since the modulation may cause level to be shifted below . The validity of the (decohering) qubit model in the presence of modulation at a rate is now elucidated: it requires that , being the effective transition rate from level to any other level , and, in particular, . If   are strongly suppressed by the modulation, the TLS model holds for long times. 7.3.2. Dynamically Modified Proper Dephasing We turn now to proper dephasing when it dominates over decay. The random frequency fluctuations are typically characterized by a (single) correlation time , with ensemble mean . When the field is used only for gate operations, we assume that it does not affect proper dephasing. The ensemble average over results in with the dephasing rate The dephasing CF is the counterpart of the bath CF . 
At , the decoherence rate and shift approach their asymptotic values For the validity of (127), it is necessary that We assume the secular approximation, which holds if By analogy with (118), one can obtain that where is given by (117) with As follows from (131), is a symmetric function, The proper dephasing rate associated with is In the presence of a constant [cw ], it is modified into For a sufficiently strong field, the dephasing rate can be suppressed by the factor . This suppression reflects the ability of strong, near-resonant Rabi splitting to shift the system out of the randomly fluctuating bandwidth, or average its effects. Quantum gate operations may be performed by slight modulations of the control field, which can flip the qubit without affecting proper dephasing. By comparison, the “bang-bang” (BB) method involving -periodic -pulses [2, 82, 84] is an analog of the above “parity kicks.” Using the analog of (121), such pulses can be shown to suppress approximately according to (135) with . This BB method requires pulsed fields with Rabi frequencies , that is, much stronger fields than the cw field in (135). Using  s, cw Rabi frequencies exceeding 1 MHz achieve a significant dephasing suppression. 7.4. Modulation Arsenal Any modulation with quasi-discrete, finite spectrum is deemed quasiperiodic, implying that it can be expanded as where are arbitrary discrete frequencies such that where is the minimal spectral interval. One can define the long-time limit of the quasi-periodic modulation, when where is the bath-memory (correlation) time, defined as the inverse of the largest spectral interval over which and change appreciably near the relevant frequencies . In this limit, the average decay rate is given by (Figure 5(a)) as Figure 5: Spectral representation of the bath coupling, , and the modulation, . (a) General quasi-periodic modulation, with peaks at . (b) On-off modulation, with repetition rate for . (c) Impulsive phase modulation, (-pulses), . (d) Monochromatic modulation, or impulsive phase modulation, with small phase shifts, , and repetition rate. 7.4.1. Phase Modulation (PM) of the Coupling Monochromatic Perturbation. Let Then where is a frequency shift, induced by the AC Stark effect (in the case of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress relative to the Golden Rule decay rate, that is, the decay rate without any perturbation as Equation (40) provides the maximal change of achievable by an external perturbation, since it does not involve any averaging (smoothing) of incurred by the width of : the modified can even vanish, if the shifted frequency is beyond the cutoff frequency of the coupling, where (Figure 5(d)). This would accomplish the goal of dynamical decoupling [8187, 118, 162]. Conversely, the increase of due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, AC Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used, resulting in multiple shifts, as per (139). Dynamical Decoupling. Dynamical decoupling (DD) is one of the best known approaches to combat decoherence, especially dephasing [7992, 95, 96]. A full description of this approach is beyond the scope of this work, but we present its most essential aspects and how it can be incorporated into the general framework described above. 7.4.2. 
Standard DD DD is based on the notion that the phase-modulation control fields are short and strong enough such that the free evolution can be neglected during these pulses. Hence, the propagator can be decomposed into the free propagator, followed by the control-field propagator, free propagator, and so forth. The control fields used result in the periodic accumulation of -phases; that is, each pulse has a total area of , whose effects are similar to time-reversal or the spin-echo technique [94]. Thus, the free evolution propagator after the control -pulse negates the effects of the free evolution propagator prior to the control fields, up to first order of the noise in the Magnus expansion. While the formalism of dynamical decoupling is quite different from the formalism presented here, it can be easily incorporated into the general framework of universal dynamical decoherence control by introducing impulsive phase modulation. Let the phase of the modulation function periodically jump by an amount at times . Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed AC Stark shifts of . When , this modulation corresponds to dynamical-decoupling (DD) pulses. For sufficiently long times (see (138)), one can use (139), with For small phase shifts, , the peak dominates, whereas In this case, one can retain only the term in (139), unless is changing very fast with frequency. Then the modulation acts as a constant shift (Figure 5(d)) as As increases, the difference between the and peak heights diminishes, vanishing for . Then
Measurement problem

The measurement problem in quantum mechanics is the problem of how (or whether) wavefunction collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. The wavefunction in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. To express matters differently (to paraphrase Steven Weinberg[1][2]), the Schrödinger wave equation determines the wavefunction at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: how can one establish a correspondence between quantum and classical reality?[3]

Schrödinger's cat

The best known example is the "paradox" of Schrödinger's cat. A mechanism is arranged to kill a cat if a quantum event, such as the decay of a radioactive atom, occurs. Thus the fate of a large-scale object, the cat, is entangled with the fate of a quantum object, the atom. Prior to observation, according to the Schrödinger equation, the cat is apparently evolving into a linear combination of states that can be characterized as an "alive cat" and states that can be characterized as a "dead cat". Each of these possibilities is associated with a specific nonzero probability amplitude; the cat seems to be in some kind of "combination" state called a "quantum superposition". However, a single, particular observation of the cat does not measure the probabilities: it always finds either a living cat or a dead cat. After the measurement the cat is definitively alive or dead. The question is: how are the probabilities converted into an actual, sharply well-defined outcome?

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting there is only one wavefunction, the superposition of the entire universe, and it never collapses, so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities (observer, measuring instrument, electron/positron, etc.), which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, work later extended by Bryce DeWitt. De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wavefunction, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wavefunction is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of orthodox quantum mechanics.
According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wavefunction collapse comes from, even though there is no actual collapse.

Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem.[4] The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where classical intuition is applicable.[5][6] Quantum decoherence was proposed in the context of the many-worlds interpretation, but it has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories.[7][8] Quantum decoherence does not describe the actual process of wavefunction collapse, but it explains the conversion of the quantum probabilities (which exhibit interference effects) into ordinary classical probabilities. See, for example, Zurek,[3] Zeh[5] and Schlosshauer.[9]

The present situation is slowly clarifying, as described in a recent paper by Schlosshauer as follows:[10]

Several decoherence-unrelated proposals have been put forward in the past to elucidate the meaning of probabilities and arrive at the Born rule ... It is fair to say that no decisive conclusion appears to have been reached as to the success of these derivations. ... As it is well known, [many papers by Bohr insist upon] the fundamental role of classical concepts. The experimental evidence for superpositions of macroscopically distinct states on increasingly large length scales counters such a dictum. Superpositions appear to be novel and individually existing states, often without any classical counterparts. Only the physical interactions between systems then determine a particular decomposition into classical states from the view of each particular system. Thus classical concepts are to be understood as locally emergent in a relative-state sense and should no longer claim a fundamental role in the physical theory.

A fourth approach is given by objective collapse models. In such models, the Schrödinger equation is modified and obtains nonlinear terms. These nonlinear modifications are of stochastic nature and lead to a behaviour which, for microscopic quantum objects such as electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wavefunction. Objective collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction, as in the models of Diósi and Penrose. The main difference between objective collapse models and the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.[11]

An interesting solution to the measurement problem is also provided by the hidden-measurements interpretation of quantum mechanics.
The hypothesis at the basis of this approach is that in a typical quantum measurement there is a condition of lack of knowledge about which interaction between the measured entity and the measuring apparatus is actualized at each run of the experiment. One can then show that the Born rule can be derived by considering a uniform average over all these possible measurement-interactions.[12][13]

References and notes

1. Steven Weinberg (1998). The Oxford History of the Twentieth Century (Michael Howard & William Roger Louis, eds.). Oxford University Press. p. 26. ISBN 0-19-820428-0.
2. Steven Weinberg, "Einstein's Mistakes", Physics Today (2005); see subsection "Contra quantum mechanics".
3. Wojciech Hubert Zurek, "Decoherence, einselection, and the quantum origins of the classical", Reviews of Modern Physics 75, July 2003.
4. Joos, E., and H. D. Zeh, "The emergence of classical properties through interaction with the environment" (1985), Z. Phys. B 59, 223.
5. H. D. Zeh, Chapter 2 in E. Joos et al. (2003). Decoherence and the Appearance of a Classical World in Quantum Theory (2nd edition; Erich Joos, H. D. Zeh, C. Kiefer, Domenico Giulini, J. Kupsch, I. O. Stamatescu, eds.). Springer-Verlag. ISBN 3-540-00390-8.
7. V. P. Belavkin (1994). "Nondemolition principle of quantum measurement theory". Foundations of Physics 24 (5): 685–714. arXiv:quant-ph/0512188. doi:10.1007/BF02054669.
8. V. P. Belavkin (2001). "Quantum noise, bits and jumps: uncertainties, decoherence, measurements and filtering". Progress in Quantum Electronics 25 (1): 1–53. arXiv:quant-ph/0512208. doi:10.1016/S0079-6727(00)00011-2.
9. Maximilian Schlosshauer (2005). "Decoherence, the measurement problem, and interpretations of quantum mechanics". Rev. Mod. Phys. 76 (4): 1267–1305. arXiv:quant-ph/0312059. doi:10.1103/RevModPhys.76.1267.
10. Maximilian Schlosshauer (January 2006). "Experimental motivation and empirical consistency in minimal no-collapse quantum mechanics". Annals of Physics 321 (1): 112–149. arXiv:quant-ph/0506199. doi:10.1016/j.aop.2005.10.004.
11. Angelo Bassi, Kinjalk Lochan, Seema Satin, Tejinder P. Singh, Hendrik Ulbricht (2013). "Models of wave-function collapse, underlying theories, and experimental tests". Reviews of Modern Physics 85: 471–527. arXiv:1204.4325. doi:10.1103/RevModPhys.85.471.
12. Aerts, D. (1986). "A possible explanation for the probabilities of quantum mechanics", Journal of Mathematical Physics 27, pp. 202–210.
13. Aerts, D. and Sassoli de Bianchi, M. (2014). "The extended Bloch representation of quantum mechanics and the hidden-measurement solution to the measurement problem". Annals of Physics 351, pp. 975–1025.
Semiclassical theory of helium atom
Gregor Tanner and Klaus Richter (2013), Scholarpedia, 8(4):9818. doi:10.4249/scholarpedia.9818

In memory of our teacher and friend, Dieter Wintgen, who died on the 16th August 1994 at the age of 37 years on the descent from the Weisshorn (4505 m).

Semiclassical theory of helium atom refers to a description of the quantum spectrum of helium in terms of the underlying classical dynamics of the strongly chaotic three-body Coulomb system formed by the nucleus and the two electrons.

Helium and its role for the development of quantum mechanics

Helium: an atomic three-body problem

The semiclassical theory of the helium atom (or other two-electron atoms) follows the idea of computing and understanding the quantum energy levels starting from trajectories of the underlying classical system. In helium, the classical dynamics is given by the pair of interacting electrons moving in the field of the (heavy) nucleus. Two-electron atoms represent a paradigmatic system for the successful application of concepts of quantum chaos theory and in particular the Gutzwiller trace formula.

Figure 1: The helium atom composed of two electrons and a nucleus of charge Z=2 (from Tanner et al. 2000)

Helium, as the prototype of a two-electron atom, is composed of the nucleus with charge Z=2 and two electrons, see Figure 1. The interplay between the attractive Coulomb interaction between the nucleus and the electrons and the Coulomb repulsion between the electrons gives rise to exceedingly complicated spectral features, despite the seemingly simple form of the underlying quantum Hamiltonian. Correspondingly, orbits of the two interacting electrons, when considered as classical particles, are predominantly characterized by chaotic dynamics and cannot be calculated analytically. Hence, helium as a microscopic three-body Coulomb system has much in common with its celestial analogue, the gravitational three-body problem.

The failure of the "old quantum theory"

Modern semiclassical theory of the helium atom has its roots in the early days of quantum theory: the observation that atomic spectra consist of discrete lines called for a then novel theoretical approach, a quantum theory for atoms. Bohr's early attempts were formulated in terms of quantum postulates and successfully reproduced the energy levels of hydrogen by requiring periodic (elliptic) Kepler electron motion with quantized radii, respectively momenta p,
\[\tag{1} \oint p \, dq = n h \]
(where n is an integer and h Planck's constant).

Figure 2: Periodic orbit configurations of the helium electron pair that served as quasi-classical models for the ground state (from Tanner et al. 2000)

It was natural to try this approach also for helium, the simplest atom with more than one electron. By applying Bohr's ad hoc quantization rule (1) to various periodic orbit configurations of the electron pair motion in helium (see Figure 2), a number of leading physicists of that time, including Bohr, Born, Kramers, Landé, Sommerfeld and van Vleck, tried to compute the ground state energy of helium. However, without success: all models gave unsatisfactory results.

Figure 3: Heisenberg's proposal for Kepler-type electron pair motion in helium (from Tanner et al.
2000) Heisenberg, then a student of Sommerfeld, devised a different trajectory configuration with the electrons moving on perturbed Kepler ellipses on different sides of the helium nucleus; in Figure 3 Heisenberg's sketch of this configuration posted in a letter to Sommerfeld in 1922 is shown. Assuming half-integer quantum numbers in his letter, Heisenberg arrived at a helium ionization potential of 24.6 V very close to the observed value of 24.5 V. However, discouraged by Bohr who did not accept such half-integer orbital quantum numbers, Heisenberg never published his results. Though the good agreement must be considered as accidental, the Heisenberg model came closest to an adequate semiclassical description of the helium ground state. Modern semiclassical theory reveals that the association of energy levels with individual periodic orbits in the old quantum theory was too simple-minded. Indeed, for chaotic systems such as the three-body problem helium, it is the entirety of all periodic orbits which conspire to form the energy levels such as beautifully shown in Gutzwiller's trace formula. For a comprehensive account of the developments of the semiclassical theory for helium up to the year 2000, see Tanner et al. 2000. The problems and failure of (most of) the attempts to quantize the electron pair motion in helium marked the end of the "old quantum theory" which was subsequently replaced by the "new quantum theory": quantum (wave) mechanics which has proven very successful to this day. Spectral properties and quantum-mechanical concepts By now considerable parts of the rich energy spectrum of the helium atom have been computed quantum mechanically by numerically solving the Schrödinger equation for the two-electron Hamiltonian of helium. To that end, besides the orbital dynamics, the spin degree of freedom of the two electrons has to be considered. The electron spins can be paired antiparallel or parallel leading to the distinction of singlet states (total spin $S = 0$) and triplet states ($S=1$) often referred to as parahelium and orthohelium, respectively. Figure 4 depicts, as a representative case, the level diagram of parahelium. The helium states and energy levels can be classified as follows: (i) the ground state and bound singly excited states, (ii) doubly excited resonant states, and (iii) unbound continuum states at energies above the two particle fragmentation threshold that are not considered here. States of category (i) are composed of one electron in a hydrogen-type ground state with quantum number $N=1$ and the second electron being excited with energy levels (labeled by $n=1,2,3...$) forming a Rydberg series (see Figure 4) converging to the first ionization threshold at an energy of $-Z^2/2$ (in atomic units). In energy region (ii) the doubly excited states have a finite lifetime; they can decay, owing to the mutual repulsive interaction between the electrons, by autoionization where one electron leaves the system while the second one remains bounded to the nucleus. These doubly excited states are organized in doubly infinite level sequences with quantum numbers N and n. As visible in Figure 4, they apparently form individual Rydberg series labeled by the index $N$, the hydrogen-like principle quantum number of the energetically lower electron. However, closer inspection of the energy region approaching complete fragmentation (i.e. the border to regime (iii)) shows that neighboring Rydberg series perturb each other more and more. 
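As a quick unit check connecting the thresholds quoted above to the 24.6 V figure from the old-quantum-theory story, here is a two-line sketch (my own illustration; the helium ground-state value of about \(-2.9037\) a.u. is the standard nonrelativistic result, supplied here for orientation and not taken from the article):

```python
# Ionization potential = (first ionization threshold) - (ground-state energy), in eV.
hartree_eV = 27.211          # 1 atomic unit of energy in electron volts
E_ground = -2.9037           # helium ground state, a.u. (illustrative input)
E_threshold = -2.0           # He+ threshold, -Z^2/2 a.u. with Z = 2
print((E_threshold - E_ground) * hartree_eV)   # ~24.6 eV
```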
Figure 4: Helium energy level diagram (from Tanner et al. 2000) With further increasing energy, these states eventually form a rather dense set of energy levels with seemingly irregular spacings, and the specification of the two-electron states in terms of the quantum numbers (N,n) looses its meaning at such high excitations: At these energies electron-electron interaction gets increasingly important, and hence the concept of quantum numbers (N,n) labeling independent electron states breaks down. The labels (N,n) can be partly replaced by new, though approximate, quantum numbers representing the collective dynamics of the electron pair. However, due to the non-integrability of the three-body Coulomb problem, a clear-cut classification is no longer possible (see Tanner et al. 2000). The increasing complexity of the energy spectrum close to the helium double ionisation threshold can be experimentally revealed in photo-ionisation measurements. The single photo-ionisation cross section is proportional to the probability of ionising a helium atom by a photon at a given frequency \(\omega\ .\) It can be compared directly to experimental data measuring the electron flux obtained from shining a laser (at sufficiently weak intensity to avoid effects due to multi-photon ionisation) onto a helium target; a typical photoionisation signal for highly doubly excited helium states is shown in Figure 5 (Jiang et al 2008), exhibiting irregular sequences of peaks from overlapping resonances. Figure 5: Total photoionisation cross section of helium; Ix refers to the ionisation threshold for the xth Rydberg series (from Jiang et al 2008). The helium atom - a semiclassical approach The three-body Coulomb system helium is one of the most complex systems which has been treated fully semiclassically using Gutzwiller's trace formula (Wintgen et al. 1992). The challenge is to describe quantum spectra or photoionisation cross sections of this few particle system in terms of classical trajectories of the nucleus and the two electrons alone. It turns out that the structure of the spectrum is closely linked to features of the underlying classical few-body dynamics such as invariant subspaces in phase space, chaotic or nearly integrable behaviour and the influence of collision events. The bound and resonance spectrum as depicted in Figure 4 is linked via Gutzwiller's trace formula to the set of all periodic orbits of the system. Furthermore, it can be shown that photoionisation or absorption spectra in atoms are related to a set of returning trajectories, that is, trajectories which start and end at the origin (Du et al. 1988). Note that these orbits are in general only closed in position space and thus not periodic. Interestingly, in helium these are triple-collision orbits, that is, orbits for which both electrons hit the nucleus simultaneously. A good knowledge of the phase space dynamics is necessary to classify and determine these sets of trajectories. Classical dynamics The classical three body system can be reduced to four degrees of freedom (dof) after eliminating the centre of mass motion and incorporating the conservation of the total angular momentum. As the nucleus is about 1800 times heavier than an electron, one can work in the infinite nucleus mass approximation without loosing any essential features. 
After rescaling and making all quantities dimensionless, one can write the classical Hamiltonian in the form \[\tag{2} H = \frac{{\mathbf p}_1^2}{2} + \frac{{\mathbf p}_2^2}{2} - \frac{Z}{r_{1}} - \frac{Z}{r_{2}} + \frac{1}{r_{12}} = \left\{\begin{array}{rcl} +1 & : & E > 0 \\ 0 & : & E = 0 \\ -1 & : & E < 0 \end{array} \right. \] with nucleus charge \(Z = 2\) for helium (Richter et al. 1993). The phase space in Eq. (2) has 6 dof, the dynamics for fixed angular momentum takes place on 4 dof. The \(H = +1\) regime corresponds to the region of positive energy where double ionisation is possible. There exist no periodic orbits of the electron pair and one does not find quantum resonance states in this energy regime, see Figure 4. It is the classical dynamics for negative energies, that is H= -1, which shows complex behaviour, chaos, unstable periodic orbits and is linked to the bound and resonance spectrum of helium in Figure 4. Only one electron can escape classically in this energy regime and it will do so for most initial conditions. Symmetries and invariant subspaces Figure 6: The collinear eZe configuration The equations of motion derived from the Hamiltonian (2) are invariant under the transformation \(({\mathbf r}_1, {\mathbf r}_2) \rightarrow (-{\mathbf r}_1, -{\mathbf r}_2)\), as well as \(({\mathbf r}_1, {\mathbf r}_2) \rightarrow ({\mathbf r}_2, {\mathbf r}_1)\ .\) The symmetries give rise to invariant subspaces in the full phase space. Trajectories which start in such a subspace will remain there for all times thus reducing the relevant degrees of freedom of the dynamics. Invariant subspaces are thus an extremely useful tool to study classical dynamics in a high dimensional phase space. The subspace most important for a semiclassical treatment is the collinear eZe space where the electrons move along a common axis at different sides of the nucleus, see Figure 6. The dynamics in this space describes the spectrum near the ground state as well as some of the Rydberg series in the energy spectrum, Figure 4. Furthermore, the photoionisation spectrum is dominated by the collinear dynamics. Heisenberg's early success is indeed related to the similarity of his 'periodic orbit' in Figure 3 with the shortest periodic orbit in the eZe space. We will discuss the most important properties of the dynamics in this subspace in more detail below. Other subspaces are, for example, the collinear dynamics of both electrons on the same side of the nucleus giving rise to 'frozen planet states' (Richter et al. 1992) and the so-called Wannier ridge space with \({\mathbf r}_1= {\mathbf r}_2; {\mathbf p}_1 = {\mathbf p}_2 \), which is, however, unstable with respect to perturbations away from the subspace and thus less relevant for the spectrum. It plays an important role as a gate-way for ionisation processes, see Lee et al. 2005, Byun et al. 2007. For a more detailed description of the dynamics in other invariant subspaces, see Tanner et al. 2000 and references therein. Figure 7: a) A typical orbit in the eZe - space; b) trajectory in the Poincaré surface of section \( r_2 = 0 \) (from Tanner et al. 2000) Symbolic dynamics in the eZe collinear space The dynamics in the eZe collinear space turns out to be fully chaotic with a binary symbolic dynamics. The two degrees of freedom are the distances \( r_i, i=1,2 \) of electron \( i \) from the nucleus - a typical trajectory is shown in Figure 7. 
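As an aside, the collinear eZe dynamics just described can be explored directly. The following is a minimal sketch in the scaled units of Eq. (2) (my own illustration, not the authors' code): it integrates Hamilton's equations for the two radial coordinates, starting on the E = -1 shell, and stops before a binary collision, since this naive integration is not Kustaanheimo-Stiefel regularized.

```python
import numpy as np
from scipy.integrate import solve_ivp

Z = 2.0  # nuclear charge

def rhs(t, y):
    # Collinear eZe configuration: electrons on opposite sides of the nucleus,
    # coordinates r1, r2 > 0, inter-electron distance r12 = r1 + r2.
    r1, r2, p1, p2 = y
    return [p1, p2,
            -Z / r1**2 + 1.0 / (r1 + r2)**2,
            -Z / r2**2 + 1.0 / (r1 + r2)**2]

def energy(y):
    r1, r2, p1, p2 = y
    return 0.5 * (p1**2 + p2**2) - Z / r1 - Z / r2 + 1.0 / (r1 + r2)

# Initial condition on the E = -1 shell of the scaled Hamiltonian (2).
r1, r2, p2 = 3.0, 1.0, 0.0
p1 = np.sqrt(2.0 * (-1.0 + Z / r1 + Z / r2 - 1.0 / (r1 + r2)))
y0 = [r1, r2, p1, p2]

# Stop before the inner electron hits the nucleus (binary collision).
collision = lambda t, y: min(y[0], y[1]) - 1e-2
collision.terminal = True

sol = solve_ivp(rhs, (0.0, 20.0), y0, events=collision, rtol=1e-10, atol=1e-12)
print("stopped at t =", sol.t[-1],
      " energy drift =", energy(sol.y[:, -1]) - energy(np.array(y0)))
```

For serious work the binary collisions must be regularized, as discussed below; the sketch only illustrates the qualitative fall of the inner electron toward the nucleus between collisions.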
Note that the axis \(r_i = 0\) corresponds to binary collisions, that is, the electron "i" collides with the nucleus - see also the next section for a discussion of collision events. One electron can escape (ionise) to infinity leaving the other electron in a regular Kepler ellipse around the nucleus. Interestingly, escape can only occur after both electron come close to the nucleus simultaneously to allow for momentum transfer between the light particles. The triple collision (discussed below) serves thus as the gateway to electron ionisation. The dynamics is nearly regular having a small, but positive Lyapunov exponent, if the electrons are far apart (that is, \(r_1 \gg r_2\) or vice versa), see the Poincaré surface of section in Figure 7b). The symbolic dynamics for the chaotic eZe - configuration maps each trajectory one-to-one onto a binary symbol string. The symbols are defined through binary collisions, that is, • 1 if a trajectory crosses the line \(r_1=r_2\) between two collisions with the nucleus, (i.e. \(r_1 = 0\) or \(r_2 = 0\)); • 0 otherwise. Figure 8: Representative periodic orbits of the helium electron pair in the eZe - space (from Wintgen et al. 1992) Note that the symbolic dynamics is closely related to the triple collision, that is, the boundaries of the partition are given by trajectories starting in or ending at the singular point \(r_1 = r_2 = 0\) (triple collision manifolds). The symbolic dynamics fully describes the topological properties of the phase space; periodic orbits, for example, can be characterised by a periodic symbol string \(\overline{a} = \ldots aaaa\ldots\) where \(a\) is a finite binary symbol string. There are infinitely many periodic orbits and they are all unstable with respect to the dynamics "in" the collinear plane. Some examples are shown in Figure 8. The number of periodic orbits increases exponentially with the code length and thus with the period of the orbits. The 'asymmetric stretch' orbit \(\overline{1}\) is the shortest orbit in this subspace. The asymptotic periodic orbit \(r_1 \equiv\infty\ ,\) \(p_1\equiv 0\) corresponds to the notation \(\overline{0}\) in the binary code. Collisions, regularisation and the triple collision Collisions are an important feature in few-body dynamics as described above. There is in particular a fundamental difference between two-body (or binary) collisions and many-body collisions where more than two particles collide simultaneously. Binary collisions can be regularised, that is, the dynamics can be continued through the singularity after a suitable transformation of the time and space variables. A popular regularisation scheme is the Kustaanheimo-Stiefel transformation which preserves the Hamiltonian structure of the equations. Binary collisions do not add instability to the classical dynamics. This is in contrast to triple collisions where both electrons hit the nucleus simultaneously. The triple collision is a non-regularisable singularity, that is, there is no unique way to determine the fate of a trajectory after it has encountered a triple collision. The manifold of all orbits coming out of or going into a triple collisions - the so-called triple collision manifold (Waldvogel 2002) - plays an important role in tessellating the full phase space and provides the symbolic dynamics in the eZe space. Triple collision orbits always move along the so-called Wannier orbit \( r_1 = r_2 \) when encountering the singularity. 
The triple collision singularity thus acts as an infinitely unstable fixed point; a closer analysis shows that the singularity itself has a non-trivial structure and topology which can be illuminated using McGehee transformation techniques. For a discussion of the Kustaanheimo-Stiefel and McGehee transformations in the context of three body Coulomb problems, see Richter et al. 1993 and Lee et al. 2005, respectively. Semiclassical periodic orbit quantisation Figure 9: The Fourier transformed part of the spectrum associated with the eZe space (here denoted \( K_{max} \)) - the binary code (+,-) refers to the code (0,1) introduced above (from Qiu et al. 1996) The Gutzwiller trace formula marked a milestone in the development of semiclassical theories. It relates the spectrum of a quantum system to the set of all periodic orbits of the corresponding classical system in terms of a Fourier-type relation where the eigenenergies and the actions of the classical periodic orbits act as Fourier-pairs. The classical dynamics of the eZe collinear configuration can be used for a quantisation of an important part of the helium spectrum due to a 'lucky' coincidence: It turns out that the electron motion in the vicinity of the collinear space is stable in all degrees of freedom perpendicular to the eZe space. The electrons carry out a regular bending-type vibration while performing chaotic motion in the collinear degrees of freedom. This makes it possible to use the periodic orbits of the eZe configuration for a semiclassical description of parts of the spectrum for angular momentum L=0 including the ground state. The existence of this connection can be shown by Fourier methods. By inverting the Gutzwiller trace formula using Fourier transformation, one obtains an action spectrum related to the full quantum energy spectrum as shown in Figure 9 (Qui et al. 1996). The energy scaling relation for the classical actions \[\tag{3} S_{po} = \frac{1}{\sqrt{|E|}} \tilde{S}_{po}, \] has been used here, where \( \tilde{S}_{po} \) is the action of a periodic orbit (po) at fixed energy \( E=-1\), see (2). The quantum spectrum used in Figure 9 has been obtained from full 3D numerical calculations (Bürgers et al. 1995) and semi-empirical formulas based on approximate quantum numbers. (For more details on approximate quantum numbers, see Tanner et al. 2000). Figure 10: Quantum eigenvalues obtained from cycle expansion techniques using periodic orbits up to length j; the exact quantum results are given in the last column (in atomic units), from Wintgen et al. (1992). Each of the peaks in Figure 9 can be identified with a periodic orbit of the classical two-electron dynamics; furthermore all these periodic orbits lie in the eZe space confirming the statement that large parts of the quantum spectrum are determined by this invariant lower dimensional subspace - a truly amazing result. At last the periodic orbits Niels Bohr was looking for have been found and they are quite close to the solution proposed by Heisenberg which he himself did not dare to publish! For a full blown semiclassical quantisation, one needs information of as many periodic orbits as possible - these can be obtained systematically using the symbolic dynamics in the eZe space. The most extensive semiclassical calculations so far made use of all periodic orbits up to length 16 (\( 2^{16} = 65536\) orbits ) together with cycle expansion techniques to obtain energies as listed in Tab 9 (Wintgen et al. 1992). 
Pushing the semiclassical calculation to even higher energies is hampered by the exponential increase of the number of periodic orbits with increasing (symbol) length in chaotic systems - a general obstacle for semiclassical quantisation techniques. Photoionisation cross sections Information about atomic spectra is often experimentally obtained through measurements of the photo excitation or ionisation, see Figure 5 for helium. An expression for the photo-ionisation cross section can be written in terms of the retarded Green function G(E) of the full three particle problem, that is, \[\tag{4} \sigma(E) = -\frac{4 \pi}{c} \, \omega \, \Im \langle D \phi_{0}| G(E) |D \phi_{0}\rangle \] where c is the speed of light, \(\phi_{0} \) is the initial state wave function and \( D = {\mathbf \Pi} \cdot ({\mathbf r}_1 + {\mathbf r}_2)\) is the dipole operator with \(\mathbf \Pi\), the polarization of the incoming photon. Using again Gutzwiller's expression for the Green function in terms of classical trajectories, one can relate the cross section to classical trajectories of the three-body dynamics. Semiclassical methods are particularly useful when considering the cross section in the limit \( E \to 0\ ,\) that is, at the double ionisation threshold. Especially the regime just below the threshold with \( E<0 \) is not accessible both to experiments and to fully numerical calculations due to the large density of resonances. Using a semiclassical closed orbit theory together with a semiclassical treatment of triple collision orbits, one can make detailed predictions here; in particular, the cross section can be written in the form (Byun et al 2007, Lee et al 2010) \[\tag{5} \sigma(E) \approx \sigma_0 + \frac{8 \pi^2 \omega}{c}\; |E|^\mu \; \Re \left[2 \pi i\sum_{{\rm CTCO}_\gamma} a_\gamma e^{i \tilde{S}_\gamma/\sqrt{E} - i \pi \nu_\gamma/2}\right] \, , \] where \( \sigma_0 \) gives a smooth background contribution and the sum is taken over all closed triple collision orbits (CTCO), that is, trajectories which start and end in the triple collision. It can be shown that CTCOs are part of the eZe sub-space. Furthermore, \(\tilde S\) is the classical action at energy \(E = -1\) as given in (3) and \(a_\gamma\) is an energy independent coefficient related to the stability of a given CTCO away from the triple collision. Most remarkably is the energy scaling due to the exponent \( \mu \), (for details see Lee et al. 2010), \[\tag{6} \mu= \mu_{eZe} + 2 \mu_{wr} = \frac{1}{4}\left[\sqrt{\frac{100 Z-9}{4Z-1}} + 2\sqrt{\frac{4 Z - 9}{4Z -1}}\right], \] Figure 11: Fourier transform of cross section data; the peaks can be related to the CTCOs depicted in the insets (from Byun et al. 2007). which can be obtained through a stability analysis of the triple collision itself. Here, \(wr\) relates to a contribution from the so-called Wannier Ridge dynamics, an invariant subspace of the full dynamics where the two electrons are always at the same distance from the nucleus. The exponents are related to Siegel exponents (see Waldvogel 2002) or Wannier exponents (Wannier 1953). The energy scaling describes the decay of the fluctuations in the photoionisation cross section towards the threshold as can be seen in Figure 5. The CTCOs can in fact be seen in cross section data using a Fourier transformation of Eqn. (5). The data shown in Figure 11 are obtained from a 1D eZe cross section calculations (Byun et al. 2007) and show a nice one-to-one correspondence between peaks and triple collision trajectories. 
Experimental and numerical studies confirm that the dominant contribution to the cross section signal is given by the collinear eZe dynamics (Jiang et al. 2008) as predicted by the semiclassical analysis. Recent developments and open questions Exploring the full phase space - approximate symmetries and global structures Helium has provided a prime example where experimental and numerical results of the quantum 3-body problem give clear hints about interesting structures in the phase space of the classical dynamics. However, the story is not finished yet - at the time of writing (2013), large areas of the full 7 dimensional classical phase space are unexplored and the connection between approximate quantum numbers (Herrick's quantum numbers - see Lee et al. 2005, Sano 2010) is still unclear. This also opens up interesting links to celestial mechanics and triple collision encounters in three-body gravitational problems as discussed at the workshops on Few Body Dynamics in Atoms, Molecules and Planetary Systems in Dresden in 2010 and Celestial, Molecular, and Atomic Dynamics (CEMAD) in Victoria in 2013. Highly doubly excited states - recent advances The world record of experimentally accessing and numerically calculating highly doubly excited states in helium is currently held (in 2013) by Jiang et al. 2008 for total cross sections reaching helium resonances up to the ionisation thresholds N=17 and Czasch et al. 2005 for partial cross sections reaching N=13. Going even higher in the spectrum or considering helium under electromagnetic driving (Madronero et al. (2008)) is a formidable challenge asking for new numerical techniques to deal with the large basis sets necessary and experimental techniques to reach the resolutions required. Unusually for atomic physicists, the rewards may lie in looking at the Fourier transforms of their data. Double ionisation of helium for strong laser fields and ultra-short pulses - probing correlated electron-electron dynamics Studying double ionisation (DI) of helium by looking at the classical dynamics of the two electrons as they escape form the nucleus has a long history: Already in 1953, Wannier predicted an unexpected energy scaling of the DI cross section near the threshold governed by exponents similar to those found in Eq.(6). Interesting recent effects being considered are electron-electron correlation effects in strong laser fields and in attosecond pulses. In the strong field case, rescattering can lead to a large contribution to the DI cross section from ionisation events where both electrons escape from the nucleus along the same direction (Prauzner-Bechcicki et al. 2007). Two-photon DI in ultra-short pulses, on the other hand, shows a preference for back-to-back electron escape due to electron-electron repulsion (Feist et al. 2009). These and many other scenarios can be studied using classical electron dynamics. Semiclassics for many-body problems While helium represents a prime example for the success of semiclassics for an interacting few body system, generalizations to other many-body problems remain as a future challenge. • A Bürgers, D Wintgen, and J-M Rost, Highly doubly excited S states of the helium atom, J Phys B 28:3163 (1995). • C W Byun, N N Choi, M-H Lee, and G Tanner, Scaling Laws for the Photoionization Cross Section of Two-Electron Atoms, Phys Rev Lett 98:113001 (2007). 
• A Czasch et al, Partial Photoionization Cross Sections and Angular Distributions for Double Excitation of Helium up to the N=13 Threshold, Phys Rev Lett 95:243003 (2005).
• M L Du and J B Delos, Effect of closed classical orbits on quantum spectra: Ionization of atoms in a magnetic field. I. Physical picture and calculations, Phys Rev A 38:1896 (1988).
• J Feist et al., Probing Electron Correlation via Attosecond xuv Pulses in the Two-Photon Double Ionization of Helium, Phys Rev Lett 103:063002 (2009).
• Y H Jiang, R Püttner, D Delande, and G Kaindl, Explicit analysis of chaotic behavior in radial and angular motion in doubly excited helium, Phys Rev A 78:021401(R) (2008).
• M-H Lee, G Tanner, and N N Choi, Classical dynamics in two-electron atoms near the triple collision, Phys Rev E 71:056208 (2005).
• M-H Lee, N N Choi, and G Tanner, Classical dynamics of two-electron atoms at zero energy, Phys Rev E 72:066215 (2005).
• M-H Lee, C W Byun, N N Choi, and G Tanner, Photoionization of two-electron atoms via highly doubly excited states: Numerical and semiclassical results, Phys Rev A 81:043419 (2010).
• J Madronero and A Buchleitner, Ab initio quantum approach to planar helium under periodic driving, Phys Rev A 77:053402 (2008).
• J S Prauzner-Bechcicki, K Sacha, B Eckhardt and J Zakrzewski, Time-Resolved Quantum Dynamics of Double Ionization in Strong Laser Fields, Phys Rev Lett 98:203002 (2007).
• Y Qiu, J Müller, and J Burgdörfer, Periodic-orbit spectra of hydrogen and helium, Phys Rev A 54:1922 (1996).
• K Richter, J S Briggs, D Wintgen, and E A Solov'ev, J Phys B 25:3929 (1992).
• K Richter, G Tanner, and D Wintgen, Classical mechanics of two electron atoms, Phys Rev A 48:4182 (1993).
• M M Sano, Semiclassical Interpretation of Electron Correlation in Helium, J Phys Soc Japan 79:034003 (2010).
• J Waldvogel, Triple Collisions and Close Triple Encounters, in Singularities in Gravitational Systems, Lecture Notes in Physics 590:81 (2002).
• G H Wannier, The Threshold Law for Single Ionization of Atoms or Ions by Electrons, Phys Rev 90:817 (1953).
• D Wintgen, K Richter, and G Tanner, The semiclassical helium atom, CHAOS 2:19 (1992).

Recommended reading

• G Tanner, K Richter, and J-M Rost, The theory of two electron atoms: Between ground state and complete fragmentation, Rev Mod Phys 72:497 (2000).
• P Cvitanović, R Artuso, R Mainieri, G Tanner, G Vattay, N Whelan and A Wirzba, Chaos: Classical and Quantum.
• M C Gutzwiller, Chaos in Classical and Quantum Mechanics, Springer-Verlag, New York (1990).
More Scattering: the Partial Wave Expansion
Michael Fowler, UVa

Plane Waves and Partial Waves

We are considering the solution to Schrödinger's equation for scattering of an incoming plane wave in the z-direction by a potential localized in a region near the origin, so that the total wave function beyond the range of the potential has the form
\[ \psi(r,\theta,\varphi) = e^{ikr\cos\theta} + f(\theta,\varphi)\,\frac{e^{ikr}}{r}. \]
The overall normalization is of no concern; we are only interested in the fraction of the ingoing wave that is scattered. Clearly the outgoing current generated by scattering into a solid angle \(d\Omega\) at angle \(\theta,\varphi\) is \(|f(\theta,\varphi)|^2\,d\Omega\) multiplied by a velocity factor that also appears in the incoming wave.

Many potentials in nature are spherically symmetric, or nearly so, and from a theorist's point of view it would be nice if the experimentalists could exploit this symmetry by arranging to send in spherical waves corresponding to different angular momenta rather than breaking the symmetry by choosing a particular direction. Unfortunately, this is difficult to arrange, and we must be satisfied with the remaining azimuthal symmetry of rotations about the ingoing beam direction. In fact, though, a full analysis of the outgoing scattered waves from an ingoing plane wave yields the same information as would spherical wave scattering. This is because a plane wave can actually be written as a sum over spherical waves:
\[ e^{i\vec k\cdot\vec r} = e^{ikr\cos\theta} = \sum_l i^l (2l+1)\, j_l(kr)\, P_l(\cos\theta). \]
Visualizing this plane wave flowing past the origin, it is clear that in spherical terms the plane wave contains both incoming and outgoing spherical waves. As we shall discuss in more detail in the next few pages, the real function \(j_l(kr)\) is a standing wave, made up of incoming and outgoing waves of equal amplitude.

We are, obviously, interested only in the outgoing spherical waves that originate by scattering from the potential, so we must be careful not to confuse the pre-existing outgoing wave components of the plane wave with the new outgoing waves generated by the potential.

The radial functions \(j_l(kr)\) appearing in the above expansion of a plane wave in its spherical components are the spherical Bessel functions, discussed below. The azimuthal rotational symmetry of plane wave plus spherical potential about the direction of the ingoing wave ensures that the angular dependence of the wave function is just \(P_l(\cos\theta)\), not \(Y_{lm}(\theta,\varphi)\). The coefficient \(i^l(2l+1)\) is derived in Landau and Lifshitz, §34, by comparing the coefficient of \((kr\cos\theta)^n\) on the two sides of the equation: as we shall see below, \((kr)^n\) does not appear in \(j_l(kr)\) for \(l\) greater than \(n\), and \((\cos\theta)^n\) does not appear in \(P_l(\cos\theta)\) for \(l\) less than \(n\), so the combination \((kr\cos\theta)^n\) only occurs once, in the \(n\)th term, and the coefficients on both sides of the equation can be matched. (To get the coefficient right, we must of course specify the normalizations for the Bessel function, see below, and the Legendre polynomial.)

Mathematical Interval: The Spherical Bessel and Neumann Functions

The plane wave \(e^{i\vec k\cdot\vec r}\) is a trivial solution of Schrödinger's equation with zero potential, and therefore, since the \(P_l(\cos\theta)\) form a linearly independent set, each term \(j_l(kr)P_l(\cos\theta)\) in the plane wave series must itself be a solution to the zero-potential Schrödinger equation.
It follows that \(j_l(kr)\) satisfies the zero-potential radial Schrödinger equation:
\[ \frac{d^2}{dr^2}R_l(r) + \frac{2}{r}\frac{d}{dr}R_l(r) + \left(k^2 - \frac{l(l+1)}{r^2}\right)R_l(r) = 0. \]
The standard substitution \(R_l(r) = u_l(r)/r\) yields
\[ \frac{d^2 u_l(r)}{dr^2} + \left(k^2 - \frac{l(l+1)}{r^2}\right)u_l(r) = 0. \]
For the simple case \(l=0\) the two solutions are \(u_0(r) = \sin kr,\ \cos kr\). The corresponding radial functions \(R_0(r)\) are (apart from overall constants) the zeroth-order Bessel and Neumann functions respectively. The standard normalization for the zeroth-order Bessel function is
\[ j_0(kr) = \frac{\sin kr}{kr}, \]
and the zeroth-order Neumann function is
\[ n_0(kr) = -\frac{\cos kr}{kr}. \]
Note that the Bessel function is the one well-behaved at the origin: it could be generated by integrating out from the origin with initial boundary conditions of value one, slope zero.

[Figure: plot of \(j_0(kr)\) and \(n_0(kr)\) from \(kr = 0.1\) to 20.]

For nonzero \(l\), near the origin \(R_l(r) \sim r^l\) or \(r^{-(l+1)}\). The well-behaved \(r^l\) solution is the Bessel function, the singular function the Neumann function. The standard normalizations of these functions are given below.

[Figure: plots of \(j_5(kr)\) and \(j_{50}(kr)\).]

Detailed Derivation of Bessel and Neumann Functions

This subsection is just here for completeness. We use the dimensionless variable \(\rho = kr\).

To find the higher-\(l\) solutions, we follow a clever trick given in Landau and Lifshitz (§33). Factor out the \(\rho^l\) behavior near the origin by writing
\[ R_l = \rho^l \chi_l(\rho). \]
The function \(\chi_l(\rho)\) satisfies
\[ \frac{d^2}{d\rho^2}\chi_l(\rho) + \frac{2(l+1)}{\rho}\frac{d}{d\rho}\chi_l(\rho) + \chi_l(\rho) = 0. \]
The trick is to differentiate this equation with respect to \(\rho\):
\[ \frac{d^3}{d\rho^3}\chi_l(\rho) + \frac{2(l+1)}{\rho}\frac{d^2}{d\rho^2}\chi_l(\rho) + \left(1 - \frac{2(l+1)}{\rho^2}\right)\frac{d}{d\rho}\chi_l(\rho) = 0. \]
Writing purely formally \(\frac{d}{d\rho}\chi_l(\rho) = -\rho\,\chi_{l+1}(\rho)\), the equation becomes
\[ \frac{d^2}{d\rho^2}\chi_{l+1}(\rho) + \frac{2(l+2)}{\rho}\frac{d}{d\rho}\chi_{l+1}(\rho) + \chi_{l+1}(\rho) = 0. \]
But this is just the equation that \(\chi_{l+1}(\rho)\) must obey! So we have a recursion formula for generating all the \(j_l(\rho)\) from the zeroth one:
\[ \chi_{l+1}(\rho) = -\frac{1}{\rho}\frac{d}{d\rho}\chi_l(\rho), \qquad j_l(\rho) = \rho^l\,\chi_l(\rho), \]
up to a normalization constant fixed by convention. In fact, the standard normalization is
\[ j_l(\rho) = (-\rho)^l \left(\frac{1}{\rho}\frac{d}{d\rho}\right)^l \left(\frac{\sin\rho}{\rho}\right). \]
Now \(\sin\rho/\rho = \sum_{n=0}^{\infty} (-1)^n \rho^{2n}/(2n+1)!\) is a sum of only even powers of \(\rho\). It is easily checked that operating on this series with \(\left(\frac{1}{\rho}\frac{d}{d\rho}\right)^l\) can never generate any negative powers of \(\rho\). It follows that \(j_l(\rho)\), written as a power series in \(\rho\), has leading term proportional to \(\rho^l\). The coefficient of this leading term can be found by applying the differential operator to the series for \(\sin\rho/\rho\):
\[ j_l(\rho) \to \frac{\rho^l}{(2l+1)!!} \quad\text{as}\quad \rho \to 0. \]
This \(r^l\) behavior near the origin is the usual well-behaved solution to Schrödinger's equation in the region where the centrifugal term dominates. Note that the small-\(\rho\) behavior is not immediately evident from the usual presentation of the \(j_l(\rho)\)'s, written as a mix of powers and trigonometric functions, for example
\[ j_1(\rho) = \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}, \qquad j_2(\rho) = \left(\frac{3}{\rho^3} - \frac{1}{\rho}\right)\sin\rho - \frac{3\cos\rho}{\rho^2}, \ \text{etc.} \]
Turning now to the behavior of the \(j_l(\rho)\)'s for large \(\rho\), it is evident that the dominant term in the large-\(\rho\) regime (the one of order \(1/\rho\)) is generated by differentiating only the trigonometric function at each step. Each such differentiation can be seen to be equivalent to multiplying by \((-1)\) and subtracting \(\pi/2\) from the argument, so
\[ j_l(\rho) \to \frac{1}{\rho}\sin\!\left(\rho - \frac{l\pi}{2}\right) \quad\text{as}\quad \rho \to \infty. \]
These \(j_l(\rho)\), then, are the physical partial-wave solutions to the Schrödinger equation with zero potential.
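A quick check of the Rayleigh formula above (a sketch, assuming sympy and scipy are available; scipy's spherical_jn supplies the reference values):

```python
import numpy as np
import sympy as sp
from scipy.special import spherical_jn

rho = sp.symbols('rho', positive=True)

def j_rayleigh(l):
    # j_l(rho) = (-rho)^l (1/rho d/drho)^l (sin rho / rho)
    f = sp.sin(rho) / rho
    for _ in range(l):
        f = sp.diff(f, rho) / rho
    return sp.simplify((-rho)**l * f)

for l in range(4):
    expr = j_rayleigh(l)
    func = sp.lambdify(rho, expr, 'numpy')
    x = np.linspace(0.5, 20.0, 7)
    print(l, np.allclose(func(x), spherical_jn(l, x)), expr)
```

For l = 1 and l = 2 the symbolic output reproduces the explicit expressions quoted above, and the numerical comparison returns True for each l.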
When a potential is turned on, the wave function near the origin is still \(\sim\rho^l\) (assuming, as we always do, that the potential is negligible compared with the \(l(l+1)/\rho^2\) term sufficiently close to the origin). The wave function beyond the range of the potential can be found numerically in principle by integrating out from the origin, and in fact will be like \(j_l(\rho)\) above except that there will be an extra phase factor, called the "phase shift" and denoted by \(\delta\), in the sine. The significance of this is that in the far region the wave function is a linear combination of the Bessel function and the Neumann function (the solution to the zero-potential Schrödinger equation singular at the origin). It is therefore necessary to review the Neumann functions as well.

As stated above, the \(l=0\) Neumann function is
\[ n_0(\rho) = -\frac{\cos\rho}{\rho}, \]
the minus sign being the standard convention.

An argument parallel to the one above for the Bessel functions establishes that the higher-order Neumann functions are given by
\[ n_l(\rho) = -(-\rho)^l \left(\frac{1}{\rho}\frac{d}{d\rho}\right)^l \left(\frac{\cos\rho}{\rho}\right). \]
Near the origin
\[ n_l(\rho) \to -\frac{(2l-1)!!}{\rho^{l+1}} \quad\text{as}\quad \rho \to 0, \]
and for large \(\rho\)
\[ n_l(\rho) \to -\frac{1}{\rho}\cos\!\left(\rho - \frac{l\pi}{2}\right) \quad\text{as}\quad \rho \to \infty, \]
so a function of the form \(\frac{1}{\rho}\sin\!\left(\rho - \frac{l\pi}{2} + \delta\right)\) asymptotically can be written as a linear combination of Bessel and Neumann functions in that region.

Finally, the spherical Hankel functions are just the combinations of Bessel and Neumann functions that look like outgoing or incoming waves in the asymptotic region:
\[ h_l(\rho) = j_l(\rho) + i\,n_l(\rho), \qquad h_l^*(\rho) = j_l(\rho) - i\,n_l(\rho), \]
so for large \(\rho\)
\[ h_l(\rho) \to \frac{e^{i(\rho - l\pi/2)}}{i\rho}, \qquad h_l^*(\rho) \to -\frac{e^{-i(\rho - l\pi/2)}}{i\rho}. \]

The Partial Wave Scattering Matrix

Let us imagine for a moment that we could just send in a (time-independent) spherical wave, with \(\theta\) variation given by \(P_l(\cos\theta)\). For this \(l\)th partial wave (dropping overall normalization constants as usual) the radial function far from the origin for zero potential is
\[ j_l(kr) \to \frac{1}{kr}\sin\!\left(kr - \frac{l\pi}{2}\right) = \frac{i}{2k}\left(\frac{e^{-i(kr - l\pi/2)}}{r} - \frac{e^{+i(kr - l\pi/2)}}{r}\right). \]
If now the (spherically symmetric) potential is turned on, the only possible change to this standing wave solution in the faraway region (where the potential is zero) is a phase shift \(\delta\):
\[ \sin\!\left(kr - \frac{l\pi}{2}\right) \to \sin\!\left(kr - \frac{l\pi}{2} + \delta_l(k)\right). \]
This is what we would find on integrating the Schrödinger equation out from nonsingular behavior at the origin.

But in practice the ingoing wave is given, and its phase cannot be affected by switching on the potential. Yet we must still have a solution to the same Schrödinger equation, so to match with the result above we multiply the whole partial wave function by the phase factor \(e^{i\delta_l(k)}\). The result is to put twice the phase change onto the outgoing wave, so that when the potential is switched on the change in the asymptotic wave function must be
\[ \frac{i}{2k}\left(\frac{e^{-i(kr - l\pi/2)}}{r} - \frac{e^{+i(kr - l\pi/2)}}{r}\right) \to \frac{i}{2k}\left(\frac{e^{-i(kr - l\pi/2)}}{r} - S_l(k)\,\frac{e^{+i(kr - l\pi/2)}}{r}\right). \]
This equation introduces the scattering matrix
\[ S_l(k) = e^{2i\delta_l(k)}, \]
which must lie on the unit circle, \(|S| = 1\), to conserve probability: the outgoing current must equal the ingoing current. If there is no scattering, that is, zero phase shift, the scattering matrix is unity.

It should be noted that when the radial Schrödinger equation is solved for a nonzero potential by integrating out from the origin, with \(\psi = 0\) and \(\psi' = 1\) initially, the real function thus generated differs from the wave function given above by an overall phase factor \(e^{i\delta_l(k)}\).
Scattering of a Plane Wave

We're now ready to take the ingoing plane wave, break it into its partial wave components corresponding to different angular momenta, have the partial waves individually phase shifted by \(l\)-dependent phases, and add it all back together to get the original plane wave plus the scattered wave. We are only interested here in the wave function far away from the potential. In this region, the original plane wave is
\[ e^{i\vec k\cdot\vec r} = e^{ikr\cos\theta} = \sum_l i^l (2l+1)\, j_l(kr)\, P_l(\cos\theta) = \sum_l i^l (2l+1)\,\frac{i}{2k}\left(\frac{e^{-i(kr - l\pi/2)}}{r} - \frac{e^{+i(kr - l\pi/2)}}{r}\right) P_l(\cos\theta). \]
Switching on the potential phase shifts the outgoing wave in each partial wave:
\[ \frac{e^{+i(kr - l\pi/2)}}{r} \to S_l(k)\,\frac{e^{+i(kr - l\pi/2)}}{r}. \]
The actual scattering by the potential is the difference between these two terms. The complete wave function in the far region (including the incoming plane wave) is therefore
\[ \psi(r,\theta,\varphi) = e^{ikr\cos\theta} + \left(\sum_l (2l+1)\,\frac{S_l(k)-1}{2ik}\, P_l(\cos\theta)\right)\frac{e^{ikr}}{r}. \]
The \(i^l\) factor cancelled the \(e^{-il\pi/2}\). The \(-1\) in \((S_l(k)-1)\) is there because zero scattering means \(S = 1\). Alternatively, it could be regarded as subtracting off the outgoing waves already present in the plane wave, as discussed above. There is no \(\varphi\)-dependence since, with the potential being spherically symmetric, the whole problem is azimuthally symmetric about the direction of the incoming wave.

It is perhaps worth mentioning that for scattering in just one partial wave, the outgoing current is equal to the ingoing current, whether there is a phase shift or not. So, if switching on the potential does not affect the total current scattered in any partial wave, how can it cause any scattering? The point is that for an ingoing plane wave with zero potential, the ingoing and outgoing components have the right relative phase to add to a component of a plane wave (a tautology, perhaps). But if an extra phase is introduced into the outgoing wave only, the ingoing plus outgoing will no longer give a plane wave: there will be an extra outgoing part proportional to \((S_l(k)-1)\).

Recall that the scattering amplitude \(f(\theta,\varphi)\) was defined in terms of the solution to Schrödinger's equation having an ingoing plane wave by
\[ \psi(r,\theta,\varphi) = e^{ikr\cos\theta} + f(\theta,\varphi)\,\frac{e^{ikr}}{r}. \]
We're now ready to express the scattering amplitude in terms of the partial wave phase shifts (for a spherically symmetric potential, of course):
\[ f(\theta,\varphi) = f(\theta) = \sum_l (2l+1)\,\frac{S_l(k)-1}{2ik}\, P_l(\cos\theta) = \sum_l (2l+1)\, f_l(k)\, P_l(\cos\theta), \]
where
\[ f_l(k) = \frac{1}{k}\, e^{i\delta_l(k)}\sin\delta_l(k) \]
is called the partial wave scattering amplitude, or just the partial wave amplitude.

So the total scattering amplitude is the sum of these partial wave amplitudes:
\[ f(\theta) = \frac{1}{k}\sum_l (2l+1)\, e^{i\delta_l(k)}\sin\delta_l(k)\, P_l(\cos\theta). \]
The total scattering cross-section is
\[ \sigma = \int |f(\theta)|^2\, d\Omega = 2\pi\int_0^\pi |f(\theta)|^2 \sin\theta\, d\theta = 2\pi\int_0^\pi \left|\frac{1}{k}\sum_l (2l+1)\, e^{i\delta_l(k)}\sin\delta_l(k)\, P_l(\cos\theta)\right|^2 \sin\theta\, d\theta, \]
\[ \sigma = 4\pi\sum_{l=0}^{\infty} (2l+1)\,|f_l(k)|^2 = \frac{4\pi}{k^2}\sum_{l=0}^{\infty} (2l+1)\sin^2\delta_l. \]
So the total cross-section is the sum of the cross-sections for each \(l\) value. This does not mean, though, that the differential cross-section for scattering into a given solid angle is a sum over separate \(l\) values: the different components interfere. It is only when all angles are integrated over that the orthogonality of the Legendre polynomials guarantees that the cross-terms vanish.

Notice that the maximum possible scattering cross-section for particles in angular momentum state \(l\) is \((4\pi/k^2)(2l+1)\), which is four times the classical cross-section for that partial wave impinging on, say, a hard sphere!
(Imagine semiclassically particles in an annular area: angular momentum \(L = rp\), say, but \(L = l\hbar\) and \(p = \hbar k\), so \(l = rk\). Therefore the annular area corresponding to angular momentum "between" \(l\) and \(l+1\) has inner and outer radii \(l/k\) and \((l+1)/k\), and therefore area \(\pi(2l+1)/k^2\).) The quantum result is essentially a diffractive effect; we'll discuss it more later.

It's easy to prove the optical theorem for a spherically symmetric potential: just take the imaginary part of each side of the equation for \(f(\theta)\) at \(\theta = 0\), using \(P_l(1) = 1\):
\[ \operatorname{Im} f(\theta = 0) = \frac{1}{k}\sum_l (2l+1)\sin^2\delta_l(k), \]
from which the optical theorem \(\operatorname{Im} f(0) = k\sigma/4\pi\) follows immediately.

It's also worth noting what the unitarity of the \(l\)th partial wave scattering matrix, \(S_l^* S_l = 1\), implies for the partial wave amplitude \(f_l(k) = \frac{1}{k}e^{i\delta_l(k)}\sin\delta_l(k)\). Since \(S_l(k) = e^{2i\delta_l(k)}\), it follows that
\[ S_l(k) = 1 + 2ik\, f_l(k). \]
From this, \(S_l^* S_l = 1\) gives
\[ \operatorname{Im} f_l(k) = k\,|f_l(k)|^2. \]
This can be put more simply:
\[ \operatorname{Im}\frac{1}{f_l(k)} = -k. \]
In fact,
\[ f_l(k) = \frac{1}{k\left(\cot\delta_l(k) - i\right)}. \]

Phase Shifts and Potentials: Some Examples

We assume in this section that the potential can be taken to be zero beyond some boundary radius \(b\). This is an adequate approximation for all potentials found in practice except the Coulomb potential, which will be discussed separately later.

Asymptotically, then,
\[ \psi_l(r) = \frac{i}{2k}\left(\frac{e^{-i(kr - l\pi/2)}}{r} - e^{2i\delta_l(k)}\,\frac{e^{+i(kr - l\pi/2)}}{r}\right) = \frac{e^{i\delta_l(k)}}{kr}\sin\!\left(kr + \delta_l(k) - \frac{l\pi}{2}\right) = \frac{e^{i\delta_l(k)}}{kr}\left(\sin\!\left(kr - \frac{l\pi}{2}\right)\cos\delta_l(k) + \cos\!\left(kr - \frac{l\pi}{2}\right)\sin\delta_l(k)\right). \]
This expression is only exact in the limit \(r \to \infty\), but since the potential can be taken zero beyond \(r = b\), the wave function must have the form
\[ \psi_l(r) = e^{i\delta_l(k)}\left(\cos\delta_l(k)\, j_l(kr) - \sin\delta_l(k)\, n_l(kr)\right) \quad\text{for } r > b. \]
(The minus sign comes from the standard convention for Bessel and Neumann functions; see earlier.)

The Hard Sphere

The simplest example of a scattering potential:
\[ V(r) = \infty \ \text{ for } r < R, \qquad V(r) = 0 \ \text{ for } r \geq R. \]
The wave function must equal zero at \(r = R\), so from the above form of \(\psi_l(r)\),
\[ \tan\delta_l(k) = \frac{j_l(kR)}{n_l(kR)}. \]
For \(l = 0\),
\[ \tan\delta_0(k) = \frac{(\sin kR)/kR}{-(\cos kR)/kR} = -\tan kR, \]
so \(\delta_0(k) = -kR\). This amounts to the wave function being effectively moved over to begin at \(R\) instead of at the origin:
\[ \frac{\sin kr}{kr} \to \frac{\sin(kr + \delta_0)}{kr} = \frac{\sin k(r - R)}{kr} \]
for \(r > R\); of course \(\psi = 0\) for \(r < R\).

For higher angular momentum states at low energies (\(kR \ll 1\)),
\[ \tan\delta_l(k) = \frac{j_l(kR)}{n_l(kR)} \approx \frac{(kR)^l/(2l+1)!!}{-(2l-1)!!/(kR)^{l+1}} = -\frac{(kR)^{2l+1}}{(2l+1)\left((2l-1)!!\right)^2}. \]
Therefore at low enough energy only \(l = 0\) scattering is important. This is obvious, since an incoming particle with momentum \(p = \hbar k\) and angular momentum \(l\hbar\) is most likely at a distance \(l/k\) from the center of the potential at closest approach, so if this is much greater than \(R\), the phase shift will be small.

The Born Approximation for Partial Waves

From the definition of \(f(\theta,\varphi)\),
\[ \psi_{\vec k}(\vec r) = e^{i\vec k\cdot\vec r} + f(\theta,\varphi)\,\frac{e^{ikr}}{r}, \]
and
\[ \psi_{\vec k}(\vec r) = e^{i\vec k\cdot\vec r} - \frac{m}{2\pi\hbar^2}\,\frac{e^{ikr}}{r}\int d^3 r'\, e^{-i\vec k_f\cdot\vec r\,'}\, V(\vec r\,')\,\psi_{\vec k}(\vec r\,'), \]
recall that the Born approximation amounts to replacing the wave function \(\psi_{\vec k}(\vec r\,')\) in the integral on the right by the incoming plane wave, therefore ignoring rescattering.

To translate this into a partial wave approximation, we first take the incoming \(\vec k\) to be in the \(z\)-direction, so in the integrand we replace \(\psi_{\vec k}(\vec r\,')\) by
\[ e^{ikr'\cos\theta'} = \sum_l i^l (2l+1)\, j_l(kr')\, P_l(\cos\theta'). \]
Labeling the angle between \(\vec k_f\) and \(\vec r\,'\) by \(\gamma\),
\[ e^{-i\vec k_f\cdot\vec r\,'} = \sum_l (-i)^l (2l+1)\, j_l(kr')\, P_l(\cos\gamma). \]
Now \(\vec k_f\) is in the direction \((\theta,\varphi)\) and \(\vec r\,'\) in the direction \((\theta',\varphi')\), and \(\gamma\) is the angle between them. For this situation there is an addition theorem for spherical harmonics:
\[ P_l(\cos\gamma) = \frac{4\pi}{2l+1}\sum_{m=-l}^{l} Y_{lm}^*(\theta',\varphi')\, Y_{lm}(\theta,\varphi). \]
On inserting this expression and integrating over \(\theta',\varphi'\), the nonzero \(m\) terms give zero; in fact the only nonzero term is that with the same \(l\) as the term in the \(\psi_{\vec k}(\vec r\,')\) expansion, giving
\[ f(\theta) = -\frac{2m}{\hbar^2}\sum_{l=0}^{\infty} (2l+1)\, P_l(\cos\theta)\int_0^{\infty} r^2\, dr\, V(r)\,\bigl(j_l(kr)\bigr)^2, \]
and remembering \(f(\theta) = \frac{1}{k}\sum_l (2l+1)\, e^{i\delta_l(k)}\sin\delta_l(k)\, P_l(\cos\theta)\), it follows that for small phase shifts (the only place it's valid) the partial-wave Born approximation reads
\[ \delta_l(k) \approx -\frac{2mk}{\hbar^2}\int_0^{\infty} r^2\, dr\, V(r)\,\bigl(j_l(kr)\bigr)^2. \]

Low Energy Scattering: the Scattering Length

The \(l = 0\) cross-section is
\[ \sigma_{l=0} = \frac{4\pi}{k^2}\,\frac{1}{\left|\cot\delta_0(k) - i\right|^2}. \]
At energy \(E \to 0\), the radial Schrödinger equation for \(u = r\psi\) away from the potential becomes \(d^2u/dr^2 = 0\), with a straight-line solution \(u(r) = C(r - a)\). This must be the \(k \to 0\) limit of \(u(r) = C'\sin(kr + \delta_0(k))\), which can only be correct if \(\delta_0\) is itself linear in \(k\) for sufficiently small \(k\), and then it must be \(\delta_0(k) = -ka\), \(a\) being the point at which the extrapolated external wavefunction intersects the axis (maybe at negative \(r\)!). So, as \(k\) goes to zero, the cot term dominates in the denominator and
\[ \sigma_{l=0}(k \to 0) = 4\pi a^2. \]
The quantity \(a\) is called the scattering length.

Integrating the zero-energy radial Schrödinger equation out from \(u(r) = 0\) at the origin for a weak (spherical) square well potential, it is easy to check that \(a\) is positive for a repulsive potential, negative for an attractive potential.

[Figure: zero-energy wave function for a repulsive potential; it is a straight line outside of the well.]

[Figure: zero-energy wave function for an attractive potential.]

On increasing the strength of the repulsive potential, still solving for the zero-energy wave function, \(a\) tends to the potential wall.

[Figure: the zero-energy wavefunction for a barrier of height 6.]

For an infinitely high barrier, the wave function is pushed out of the barrier completely, and the hard sphere result is recovered: scattering length \(a\), cross-section \(4\pi a^2\).

On increasing the strength of the attractive well, if there is a phase change greater than \(\pi/2\) within the well, \(a\) will become positive. In fact, right at \(\pi/2\), \(a\) is infinite! And a little more depth to the well gives a positive scattering length.

[Figure: zero-energy wave function for a slightly deeper well, giving a positive scattering length.]

In fact, a well deep enough to have a positive scattering length will also have a bound state. This becomes evident when one considers that the depth at which the scattering length becomes infinite can be thought of as formally having a zero-energy bound state, in that although the wave function outside is not normalizable, it is equivalent to an exponentially decaying function with infinite decay length. If one now deepens the well a little, the zero-energy wave function inside the well curves a little more rapidly, so the slope of the wave function at the edge of the well becomes negative, as in the picture above. With this slightly deeper well, we can now lower the energy slightly to negative values. This will have little effect on the wave function inside the well, but make possible a fit at the well edge to an exponential decay outside: a genuine bound state, with wave function \(e^{-\kappa r}\) outside the well.
If the binding energy of the state is really low, the zero-energy scattering wave function inside the well is almost identical to that of this very low energy bound state, and in particular the logarithmic derivative at the wall will be very close, so κ ≈ 1/a, taking a to be much larger than the radius of the well. This connects the large scattering length to the energy of the weakly bound state, B.E. = ħ²κ²/2m ≈ ħ²/2ma². (Sakurai, p. 414.) Wigner was the first to use this to estimate the binding energy of the deuteron from the observed cross section for low energy neutron-proton scattering.
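Two of the results above are easy to check numerically. First, the hard-sphere phase shifts: the sketch below (Python, with an illustrative choice R = 1) sums the partial-wave cross sections σ = (4π/k²) Σ (2l+1) sin²δl using tan δl = jl(kR)/nl(kR), and shows both the low-energy limit 4πR² and the approach to twice the geometric area at high energy, the diffractive effect referred to earlier.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def hard_sphere_sigma(k, R, lmax=30):
    """Total hard-sphere cross section at wavenumber k, summed over partial waves."""
    sigma = 0.0
    for l in range(lmax + 1):
        jl = spherical_jn(l, k * R)
        nl = spherical_yn(l, k * R)            # the irregular solution n_l
        sin2_delta = jl**2 / (jl**2 + nl**2)   # sin^2(delta_l) from tan(delta_l) = j_l/n_l
        sigma += (4 * np.pi / k**2) * (2 * l + 1) * sin2_delta
    return sigma

R = 1.0
for k in (0.01, 0.1, 1.0, 10.0):
    ratio = hard_sphere_sigma(k, R) / (np.pi * R**2)
    print(f"kR = {k * R:5.2f}   sigma / (pi R^2) = {ratio:6.3f}")
# At kR << 1 the s-wave dominates and sigma -> 4*pi*R^2 (the scattering-length
# limit with a = R); as kR grows, sigma falls toward 2*pi*R^2 -- the geometric
# area plus an equal diffractive contribution.
```

Second, the link between the scattering length, the appearance of a bound state, and the estimate B.E. ≈ ħ²/2ma² can be seen in an assumed attractive square well (illustrative units ħ = 2m = 1, well radius R = 1). The zero-energy matching gives a = R − tan(KR)/K with K = √V0, which diverges as KR passes π/2; just beyond that depth a shallow bound state appears, with κ obtained from the matching condition k_in cot(k_in R) = −κ, and κ² comes out close to 1/a², as the discussion above suggests.

```python
import numpy as np
from scipy.optimize import brentq

R = 1.0

def scattering_length(V0):
    K = np.sqrt(V0)
    return R - np.tan(K * R) / K           # from matching u = sin(Kr) onto C(r - a)

def bound_state_kappa(V0):
    # matching condition k_in * cot(k_in * R) = -kappa, with k_in = sqrt(V0 - kappa^2)
    f = lambda kap: np.sqrt(V0 - kap**2) / np.tan(np.sqrt(V0 - kap**2) * R) + kap
    return brentq(f, 1e-9, np.sqrt(V0) - 1e-9)

for V0 in (1.0, 2.0, 2.4, 2.5, 3.0):
    a = scattering_length(V0)
    line = f"V0 = {V0:3.1f}   a = {a:8.2f}"
    if a > 0:                               # deep enough to bind
        kappa = bound_state_kappa(V0)
        line += f"   B.E. = {kappa**2:.5f}   1/a^2 = {1 / a**2:.5f}"
    print(line)
# a is small and negative for a weak well, blows up as K*R passes pi/2
# (V0 about 2.47 here), and for a slightly deeper well the shallow bound
# state satisfies B.E. ~ 1/a^2, more closely the nearer a is to divergence.
```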
previous  home  next   PDF Classical Wave Equations Michael Fowler, University of Virginia The aim of this section is to give a fairly brief review of waves in various shaped elastic media—beginning with a taut string, then going on to an elastic sheet, a drumhead, first of rectangular shape then circular, and finally considering elastic waves on a spherical surface, like a balloon. The reason we look at this material here is that these are “real waves”, hopefully not too difficult to think about, and yet mathematically they are the solutions of the same wave equation the Schrödinger wave function obeys in various contexts, so should be helpful in visualizing solutions to that equation, in particular for the hydrogen atom. We begin with the stretched string, then go on to the rectangular and circular drumheads.  We derive the wave equation from F = ma for a little bit of string or sheet.  The equation corresponds exactly to the Schrödinger equation for a free particle with the given boundary conditions. The most important section here is the one on waves on a sphere.  We find the first few standing wave solutions.  These waves correspond to Schrödinger’s wave function for a free particle on the surface of a sphere.  This is what we need to analyze to understand the hydrogen atom, because using separation of variables we split the electron’s motion into radial motion and motion on the surface of a sphere.  The potential only affects the radial motion, so the motion on the sphere is free particle motion, described by the same waves we find for vibrations of a balloon.  (There is the generalization to complex non-standing waves, parallel to the one-dimensional extension from sinkx and coskx to eikx and e-ikx, but this does not affect the structure of the equations.) Waves on a String Let’s begin by reminding ourselves of the wave equation for waves on a taut string, stretched between  x = 0 and  x = L, tension T newtons, density ρ kg/meter.  Assuming the string’s equilibrium position is a straight horizontal line (and, therefore, ignoring gravity), and assuming it oscillates in a vertical plane, we use f(x,t) to denote its shape at instant t, so f(x,t) is the instantaneous upward displacement of the string at position x.  We assume the amplitude of oscillation remains small enough that the string tension can be taken constant throughout. The wave equation is derived by applying F = ma to an infinitesimal length dx of string (see the diagram below).  We picture our little length of string as bobbing up and down in simple harmonic motion, which we can verify by finding the net force on it as follows.  At the left hand end of the string fragment, point x, say, the tension T is at a small angle df(x)/dx to the horizontal, since the tension acts necessarily along the line of the string.  Since it is pulling to the left, there is a downward force component Tdf(x)/dx.  At the right hand end of the string fragment there is an upward force Tdf(x + dx)/dx.   Putting f(x + dx) = f(x) + (df/dx)dx, and adding the almost canceling upwards and downwards forces together, we find a net force T(d2f/dx2)dx on the bit of string.  The string mass is ρ dx, so F = ma becomes giving the standard wave equation with wave velocity given by  c2 = T/ρ.  (A more detailed discussion is given in my Physics 152 Course,  plus an animation here.) 
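As a quick illustration before moving on to drumheads, the standing-wave frequencies of the string equation just derived, ∂²f/∂t² = c²∂²f/∂x² with f = 0 at both fixed ends, can be checked numerically. The following is a minimal sketch (the grid size and parameters are arbitrary illustrative choices): diagonalizing the finite-difference second-derivative matrix recovers the familiar mode frequencies ωn = nπc/L.

```python
import numpy as np

L_str, c, N = 1.0, 1.0, 400        # string length, wave speed, interior grid points
dx = L_str / (N + 1)
# finite-difference d^2/dx^2 with f = 0 imposed at both ends
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
omega = np.sqrt(np.linalg.eigvalsh(-c**2 * D2))[:4]    # lowest mode frequencies
exact = np.array([n * np.pi * c / L_str for n in range(1, 5)])
print(np.round(omega, 4))    # very close to n*pi*c/L for the lowest modes
print(np.round(exact, 4))
```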
This equation can of course be solved by separation of variables, f(x,t) = f(x)g(t), and the equation for f(x) is identical to the time independent Schrödinger equation for a particle confined to (0, L) by infinitely high walls at the two ends.  This is why the eigenfunctions (states of definite energy) for a Schrödinger particle confined to (0, L) are identical to the modes of vibration of a string held between those points.  (However, it should be realized that the time dependence of the string wave equation and the Schrödinger time-dependent equation are quite different, so a nonstationary state, one corresponding to a sum of waves of different energies, will develop differently in the two systems.) Waves on a Rectangular Drumhead Let us now move up to two dimensions, and consider the analogue to the taut string problem, which is waves in a taut horizontal elastic sheet, like, say, a drumhead.  Let us assume a rectangular drumhead to begin with.  Then, parallel to the argument above, we would apply F = ma to a small square of elastic with sides parallel to the x and y axes.  The tension from the rest of the sheet tugs along all four sides of the little square, and we realize that tension in a sheet of this kind must be defined in newtons per meter, so the force on one side of the little square is given by multiplying this “tension per unit length” by the length of the side.  Following the string analysis, we take the vertical displacement of the sheet at instant t to be given by f(x, y, t).  We assume this displacement is quite small, so the tension itself doesn’t vary, and that each bit of the sheet oscillates up and down (the sheet is not tugged to one side).  Suppose the bottom left-hand corner (so to speak) of the square is (x, y), the top right-hand corner (x + dx, y + dy). Then the left and right edges of the square have lengths dy.  Now, what is the total force on the left edge?  The force is Tdy, in the local plane of the sheet, perpendicular to the edge dy.  Factoring in the slope of the sheet in the direction of the force, the vertically downward component of the force must be Tdyf(x,y,t)/∂x.  By the same argument, the force on the right hand edge has to have an upward component Tdyf(x+dx, y, t)/∂x Thus the net upward force on the little square from the sheet tension tugging on its left and right sides is The net vertical force from the sheet tension on the other two sides is the same with x and y interchanged.  The mass of the little square of elastic sheet is r dxdy, and its upward acceleration is ∂2f/∂t2.  Thus F = ma becomes: with c2 = T/ρ This equation can be solved by separation of variables, and the time independent part is identical to the Schrödinger time independent equation for a free particle confined to a rectangular box. Waves on a Circular Drumhead A similar argument gives the wave equation for a circular drumhead, this time in (r, φ) coordinates (we use φ rather than θ here because of its parallel role in the spherical case, to be discussed shortly).  This time, instead of a tiny square of elastic, we take the small area rdrdφ bounded by the circles of radius r and r + dr and lines through the origin at angles φ and φ + .  Now, the downward force from the tension T in the sheet on the inward curved edge, which has length rdφ, is Trdφ∂f(r, φ, t)/∂r.  
On putting this together with the upward force from the other curved edge, it is important to realize that the r in Trdφ  varies as well as ∂f/∂r on going from r to r + dr, so the sum of the two terms is Tdφ∂/∂r(rf/∂r)dr.  To find the vertical elastic forces from the straight sides, we need to find how the sheet slopes in the direction perpendicular to those sides. The measure of length in that direction is not φ, but , so the slope is 1/r.∂f/∂φ, and the net upward elastic force contribution from those sides (which have length dr) is Tdrdφ∂/∂φ (1/r.∂f/∂φ). Writing F = ma for this small area of elastic sheet, of mass ρrdrdφ, gives then which can be written This is the wave equation in polar coordinates.  Separation of variables gives a radial equation called Bessel’s equation, the solutions are called Bessel functions.  The corresponding electron standing waves have actually been observed for an electron captured in a circular corral on a surface.  Waves on a Spherical Balloon Finally, let us consider elastic waves on the surface of a sphere, such as an inflated spherical balloon. The natural coordinate system here is spherical polar coordinates, with θ measuring latitude, but counting the north pole as zero, the south pole as π.  The angle φ measures longitude from some agreed origin. We take a small elastic element bounded by longitude lines φ and φ + and latitude θ and θ + .  For a sphere of radius r, the sides of the element have lengths rsinθ dφ, rdθ etc.  Beginning with one of the longitude sides, length rdθ, tension T, the only slightly tricky point is figuring its deviation from the local horizontal, which is 1/rsinθ.(∂f/∂φ), since increasing φ by means moving an actual distance rsinθ dφ on the surface, just analogous with the circular case above.  Hence, by the usual method, the actual vertical force from tension on the two longitude sides is Trdθ dφ. (∂/∂φ)1/rsinθ.(∂f/∂φ).  To find the force on the latitude sides, taking the top one first, the slope is given by 1/r.∂f/∂θ, so the force is just Trsinθ dφ.1/r.∂f/∂θ.  On putting this together with the opposite side, it is necessary to recall that sinθ as well as f varies with θ, so the sum is given by:  Trdφdθ∂/∂θ sinθ.1/r.∂f/∂θ.  We are now ready to write down F = ma once more, the mass of the element is ρr2sinθ dθ dφ.  Canceling out elements common to both sides of the equation, we find: Again, this wave equation is solved by separation of variables.  The time-independent solutions are called the Legendre functions.  They are the basis for analyzing the vibrations of any object with spherical symmetry, for example a planet struck by an asteroid, or vibrations in the sun generated by large solar flares. Simple Solutions to the Spherical Wave Equation Recall that for the two dimensional circular case, after separation of variables the angular dependence was all in the solution to ∂2f/∂φ2 = −λf, and the  physical solutions must fit smoothly around the circle (no kinks, or it would not satisfy the wave equation at the kink), leading to solutions sin and cos (or ei) with m an integer, and λ = m2 (this is why we took λ with a minus sign in the first equation). For the spherical case, the equation containing all the angular dependence is The standard approach here is, again, separation of variables. 
Taking the first term on the left hand side over to the right, and multiplying throughout by sin2θ isolates the φ term: Writing now in the above equation, and dividing throughout by f, we find as usual that the left hand side depends only on φ, the right hand side only on θ, so both sides must be constants.  Taking the constant as –m2, the φ solution is e±i, and one can insert that in the θ equation to give What about possible solutions that don’t depend on φ?  The equation would be the simpler Obviously, f = constant is a solution (for m = 0) with eigenvalue λ = 0. Try f = cosθ.  It is easy to check that this is a solution with λ = 2. Try f = sinθ.  This is not a solution. In fact, we should have realized it cannot be a solution to the wave equation by visualizing the shape of the elastic sheet near the north pole.  If f = sinθf = 0 at the pole, but rises linearly (for small θ ) going away from the pole. Thus the pole is at the bottom of a conical valley. But this conical valley amounts to a kink in the elastic sheet—the slope of the sheet has a discontinuity if one moves along a line passing through the pole, so the shape of the sheet cannot satisfy the wave equation at that point. This is somewhat obscured by working in spherical coordinated centered there, but locally the north pole is no different from any other point on the sphere, we could just switch to local (x,y) coordinates, and the cone configuration would clearly not satisfy the wave equation.  However, f = sinθ sinφ is a solution to the equation.  It is a worthwhile exercise to see how the φ term gets rid of the conical point at the north pole by considering the value of f as the north pole is approached for various values of φ: φ = 0, π/2, π, 3π/2 say.  The sheet is now smooth at the pole!  We find f = sinθ cosφ, sinθ sinφ (and so sinθ eiφ) are solutions with λ = 2. It is straightforward to verify that f = cos2θ – 1/3 is a solution with λ = 6. Finally, we mention that other λ = 6 solutions are sinθ cosθ sinφ and sin2θ sin2φ. We do not attempt to find the general case here, but we have done enough to see the beginnings of the pattern.  We have found the series of eigenvalues 0, 2, 6, … .  It turns out that the complete series is given by λ = l(l + 1), with l = 0, 1, 2, … .  This integer l is the analogue of the integer m in the wave on a circle case.  Recall that for the wave on the circle, if we chose real wave functions (cos, sin,  not ei) then 2m gave the number of nodes the wave had (that is, m complete wavelengths fitted around the circle).  It turns out that on the sphere l gives the number of nodal lines (or circles) on the surface.  This assumes that we again choose the φ-component of the wave function to be real, so that there will be m nodal circles passing through the two poles corresponding to the zeros of the cos term.  We find that there are lm nodal latitude circles corresponding to zeros of the function of θ. 
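A quick symbolic check of these solutions can be done with a computer algebra system. The sketch below (SymPy; the helper function name is just an illustrative choice) applies the angular operator (1/sinθ)∂/∂θ(sinθ ∂f/∂θ) + (1/sin²θ)∂²f/∂φ² to a few trial functions and reports λ = −(result)/f whenever that ratio is a constant: cosθ and sinθ sinφ come out with λ = 2, cos²θ − 1/3 and sin²θ sin2φ with λ = 6, while sinθ on its own is rejected, in line with the conical-kink argument above.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def balloon_eigenvalue(f):
    """Return lambda if f solves the spherical (balloon) wave equation, else None."""
    lhs = (sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
           + sp.diff(f, phi, 2) / sp.sin(theta)**2)
    lam = sp.simplify(-lhs / f)
    return lam if lam.free_symbols == set() else None

trials = (sp.cos(theta), sp.sin(theta), sp.sin(theta) * sp.sin(phi),
          sp.cos(theta)**2 - sp.Rational(1, 3), sp.sin(theta)**2 * sp.sin(2 * phi))
for f in trials:
    print(f, '->', balloon_eigenvalue(f))
# cos(theta) -> 2,  sin(theta) -> None (not a solution: the kink at the pole),
# sin(theta)*sin(phi) -> 2,  cos(theta)**2 - 1/3 -> 6,  sin(theta)**2*sin(2*phi) -> 6
```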
Summary: First Few Standing Waves on the Balloon                                     λ                      l                       m         form of solution (unnormalized)                                     0                      0                      0          constant                                         2                      1                      0          cosθ                                     2                      1                      1          sinθ e                                     2                      1                      -1         sinθ e-iφ                                     6                      2                      0          cos2θ – 1/3                                     6                      2                      ±1        cosθ sinθ e±                                     6                      2                      ±2        sin2θ e±2 The Schrödinger Equation for the Hydrogen Atom: How Do We Separate the Variables? In three dimensions, the Schrödinger equation for an electron in a potential can be written: This is the obvious generalization of our previous two-dimensional discussion, and we will later be using the equation in the above form to discuss electron wave functions in metals, where the standard approach is to work with standing waves in a rectangular box. Recall that in our original “derivation” of the Schrödinger equation, by analogy with the Maxwell wave equation for light waves, we argued that the differential wave operators arose from the energy-momentum relationship for the particle, that is so that the time-independent Schrödinger wave equation is nothing but the statement that E = K.E. + P.E. with the kinetic energy expressed as the equivalent operator. To make further progress in solving the equation, the only trick we know is separation of variables.  Unfortunately, this won’t work with the equation as given above in (x, y, z) coordinates, because the potential energy term is a function of x, y and z in a nonseparable form.   The solution is, however, fairly obvious:  the potential is a function of radial distance from the origin, independent of direction. Therefore, we need to take as our coordinates the radial distance r and two parameters fixing direction, θ and φ. We should then be able to separate the variables, because the potential only affects radial motion.  No potential term will appear in the equations for θ, φ motion, that will be free particle motion on the surface of a sphere.  Momentum and Angular Momentum with Spherical Coordinates It is worth thinking about what are the natural momentum components for describing motion in spherical polar coordinates (r, θ, φ).  The radial component of momentum, pr, points along the radius, of course. The θ-component pθ points along a line of longitude, away from the north pole if positive (remember θ itself measures latitude, counting the north pole as zero).  The φ-momentum component, pφ, points along a line of latitude.  It will be important in understanding the hydrogen atom to connect these momentum components (pr, pθ, pφ) with the angular momentum components of the atom.  Evidently, momentum in the r-direction, which passes directly through the center of the atom, contributes nothing to the angular momentum.  Consider now a particle for which pr = pθ = 0, only pφ being nonzero.  
Classically, such a particle is circling the north pole at constant latitude θ, say, so it is moving in space in a circle or radius rsinθ  in a plane perpendicular to the north-south axis of the sphere.  Therefore, it has an angular momentum about that axis (The standard transformation from (x, y, z) coordinates to (r, θ, φ) coordinates is to take the north pole of the θ, φ sphere to be on the z-axis.) As we shall see in detail below, the wave equation describing the φ motion is a simple one, with solutions of the form ei with integer m, just as in the two-dimensional circular well. This just means that the component of angular momentum along the z-axis is quantized, Lz = mħ, with m an integer. Total Angular Momentum and Waves on a Balloon The total angular momentum is , where  is the component of the particle’s momentum perpendicular to the radius, so Thus the square of the total angular momentum is (apart from a constant factor) the kinetic energy of a particle moving freely on the surface of a sphere.  The equivalent Schrödinger equation for such a particle is the wave equation given in the last section for waves on a balloon. (This can be established by the standard change of variables routine on the differential operators).  Therefore, the solutions we found for elastic waves on a sphere actually describe the angular momentum wave function of the hydrogen atom.  We conclude that the total angular momentum is quantized, L2 = l(l + 1)ħ2,  with l an integer. Angular Momentum and the Uncertainly Principle The conclusions of our above waves on a sphere analysis of the angular momentum of a quantum mechanical particle are a little strange.  We found that the component of angular momentum in the z-direction must be a whole number of ħ units, yet the square of the total angular momentum L2 = l(l + 1)ħ2 is not a perfect square!  One might wonder if the component of angular momentum in the x-direction isn’t also a whole number of ħ units as well, and if not, why not?   The key is that in questions of this type we are forgetting the essentially wavelike nature of the particle’s motion, or, equivalently, the uncertainty principle.  Recall first that the z-component of angular momentum, that is, the angular momentum about the z-axis, is the product of the particle’s momentum in the xy-plane and the distance of the line of that motion from the origin.  There is no contradiction in specifying that momentum and that position simultaneously, because they are in perpendicular directions.  However, we cannot at the same time specify either of the other components of the angular momentum, because that would involve measuring some component of momentum in a direction in which we have just specified a position measurement.  We can measure the total angular momentum, that involves additionally only the component pθ  of momentum perpendicular to the pφ needed for the z-component. Thus the uncertainty principle limits us to measuring at the same time only the total angular momentum and the component in one direction.  Note also that if we knew the z-component of angular momentum to be , and the total angular momentum were L2 = l2ħ2  with  l = m, then we would also know that the x and y components of the angular momentum were exactly zero. Thus we would know all three components, in contradiction to our uncertainly principle arguments. This is the essential reason why the square of the total angular momentum is greater than the maximum square of any one component. 
It is as if there were a “zero point motion” fuzzing out the direction. Another point related to the uncertainty principle concerns measuring just where in its circular (say) orbit the electron is at any given moment. How well can that be pinned down?  There is an obvious resemblance here to measuring the position and momentum of a particle at the same time, where we know the fuzziness of the two measurements is related by ΔpΔx ~ h.  Naïvely, for a circular orbit of radius r in the xy-plane, pr = Lz and distance measured around the circle is , so ΔpΔx ~ h suggests ΔLzΔθ ~ h.  That is to say, precise knowledge of Lz implies no knowledge of where on the circle the particle is. This is not surprising, because we have found that for Lz =  the wave has the form ei, and so |ψ|2, the relative probability of finding the particle, is the same anywhere in the circle.  On the other hand, if we have a time-dependent wave function describing a particle orbiting the nucleus, so that the probability of finding the particle at a particular place varies with time, the particle cannot be in a definite angular momentum state.  This is just the same as saying that a particle described by a wave packet cannot have a definite momentum. The Schrödinger Equation in (r, θ, φ) Coordinates It is worth writing first the energy equation for a classical particle in the Coulomb potential: This makes it possible to see, term by term, what the various parts of the Schrödinger equation signify.  In spherical polar coordinates, Schrödinger’s equation is: Separating the Variables: the Messy Details We look for separable solutions of the form We now follow the standard technique.  That is to say, we substitute RΘΦ for ψ in each term in the above equation.  We then observe that the differential operators only actually operate on one of the factors in any given part of the expression, so we put the other two factors to the left of these operators. We then divide the entire equation by RΘΦ, to get Separating Out and Solving the Φ(φ) Equation The above equation can be rearranged to give: Further rearrangement leads to: At this point, we have achieved the separation of variables!  The left hand side of this equation is a function only of φ, the right hand side is a function only of r and θ.  The only way this can make sense is if both sides of the equation are in fact constant (and of course equal to each other).  Taking the left hand side to be equal to a constant we denote for later convenience by  ‑m2, We write the constant –m2 because we know that as a factor in a wave function Φ(φ)  must be single valued as φ increases through 2π, so an integer number of oscillations must fit around the circle, meaning Φ is sin, cos or ei with m an integer. These are the solutions of the above equation. Of course, this is very similar to the particle in the circle in two dimensions, m signifies units of angular momentum about the z-axis. Separating Out the Θ(θ) Equation Backing up now to the equation in the form we can replace the  term by –m2, and move the r term over to the right, to give We have again managed to separate the variables—the left hand side is a function only of θ, the right hand side a function of r.  Therefore both must be equal to the same constant, which we set equal to -λ This gives the Θ(θ) equation: This is exactly the wave equation we discussed above for the elastic sphere, and the allowed eigenvalues λ are l(l+1), where l = 0, 1, 2, .. 
with l ≥ |m|.
The R(r) Equation
Replacing the θ, φ operator with the value l(l + 1) found just above in the original Schrödinger equation gives the equation for the radial wave function:
−(ħ²/2m)(1/r²)d/dr(r²dR/dr) + [ħ²l(l + 1)/2mr²]R − (e²/r)R = ER.
The first term in this radial equation is the usual radial kinetic energy term, equivalent to pr²/2m in the classical picture. The third term is the Coulomb potential energy.  The second term is an effective potential representing the centrifugal force.  This is clarified by reconsidering the energy equation for the classical case,
E = pr²/2m + p⊥²/2m − e²/r.
The angular momentum squared is L² = r²p⊥².  Thus for fixed angular momentum, we can write the above “classical” equation as
E = pr²/2m + L²/2mr² − e²/r.
The parallel to the radial Schrödinger equation is then clear. We must find the solutions of the radial Schrödinger equation that decay for large r.  These will be the bound states of the hydrogen atom.  In natural units, measuring lengths in units of the first Bohr radius a₀ = ħ²/me² and energies in Rydberg units (1 Ry = me⁴/2ħ² = ħ²/2ma₀²), the radial equation reads
−(1/r²)d/dr(r²dR/dr) + [l(l + 1)/r² − 2/r]R = ER.
Finally, taking u(r) = rR(r), the radial equation becomes
d²u/dr² + (E + 2/r − l(l + 1)/r²)u = 0.
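As a check on this analysis, the radial equation in these natural units can be diagonalized numerically. Below is a minimal sketch (the grid size and cutoff radius are arbitrary illustrative choices): a finite-difference version of −u″ + [l(l + 1)/r² − 2/r]u = Eu should reproduce the Bohr levels En = −1/n² Rydbergs.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def hydrogen_radial_levels(l, rmax=80.0, N=4000, nlevels=3):
    """Lowest eigenvalues of -u'' + [l(l+1)/r^2 - 2/r] u = E u (Rydberg units)."""
    r = np.linspace(rmax / N, rmax, N)       # grid starts just off r = 0, u(0) = 0 implied
    h = r[1] - r[0]
    V = l * (l + 1) / r**2 - 2.0 / r         # centrifugal barrier plus Coulomb term
    diag = 2.0 / h**2 + V                    # from -u'' by central differences
    offdiag = -np.ones(N - 1) / h**2
    E = eigh_tridiagonal(diag, offdiag, eigvals_only=True)
    return E[:nlevels]

print("l = 0:", np.round(hydrogen_radial_levels(0), 4))   # about -1, -1/4, -1/9
print("l = 1:", np.round(hydrogen_radial_levels(1), 4))   # about -1/4, -1/9, -1/16
```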
Gap solitons in periodic discrete nonlinear Schrödinger equations. (English) Zbl 1220.35163
Summary: It is shown that the periodic discrete nonlinear Schrödinger equation, with cubic nonlinearity, possesses gap solutions, i.e. standing waves, with the frequency in a spectral gap, that are exponentially localized in the spatial variable. The proof is based on the linking theorem in combination with periodic approximations.
MSC: 35Q55 NLS-like (nonlinear Schrödinger) equations; 35Q51 Soliton-like equations; 39A12 Discrete version of topics in analysis; 39A70 Difference operators; 78A40 Waves and radiation (optics)
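To make the phrase "frequency in a spectral gap" concrete, here is a small illustrative sketch with an assumed period-2 on-site potential: the linear part of a periodic discrete Schrödinger operator has two spectral bands separated by a gap, and the gap solitons of the summary are standing waves of the nonlinear equation whose frequency sits inside such a gap.

```python
import numpy as np

N, V = 400, 1.0                               # lattice sites, potential strength (assumed)
eps = V * (-1.0) ** np.arange(N)              # period-2 on-site potential ..., +V, -V, ...
L = np.diag(eps) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
L[0, -1] = L[-1, 0] = -1.0                    # periodic boundary conditions
w = np.linalg.eigvalsh(L)
lower, upper = w[w < 0], w[w > 0]             # the two bands (V > 0 keeps them apart)
print("lower band: [%.3f, %.3f]" % (lower.min(), lower.max()))
print("upper band: [%.3f, %.3f]" % (upper.min(), upper.max()))
# Expect roughly [-sqrt(V^2+4), -V] and [+V, +sqrt(V^2+4)]: frequencies in the
# open interval (-V, V) lie in the gap, where exponentially localized standing
# waves of the nonlinear problem can live.
```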
About this Journal Submit a Manuscript Table of Contents Abstract and Applied Analysis Volume 2013 (2013), Article ID 256324, 13 pages Research Article Existence of Nontrivial Solutions and High Energy Solutions for a Class of Quasilinear Schrödinger Equations via the Dual-Perturbation Method 1Department of Mathematics, Honghe University, Mengzi, Yunnan 661100, China 2Department of Mathematics, Yunnan Normal University, Kunming, Yunnan 650092, China Received 23 June 2013; Accepted 12 September 2013 Academic Editor: Mihai Mihǎilescu Copyright © 2013 Yu Chen and Xian Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We study the quasilinear Schrödinger equation of the form , . Under appropriate assumptions on and , existence results of nontrivial solutions and high energy solutions are obtained by the dual-perturbation method. 1. Introduction and Preliminaries In this paper we consider the quasilinear Schrödinger equation of the form where and . Solutions of (1) are standing waves of the following quasilinear Schrödinger equation: where is a given potential, is a real constant, and and are real functions. The quasilinear Schrödinger equations (2) are derived as models of several physical phenomena; for example, see [15]. Several methods can be used to solve (1). For instance, the existence of a positive ground state solution has been proved in [6, 7] by using a constrained minimization argument; the problem is transformed to a semilinear one in [811] by a change of variables (dual approach); Nehari method is used to get the existence results of ground state solutions in [12, 13]. Recently, some new methods have been applied to these equations. In [14], the authors prove that the critical points are functions by the Moser’s iteration; then the existence of multibump type solutions is constructed for this class of quasilinear Schrödinger equations. In [15], by analysing the behavior of the solutions for subcritical case, the authors pass to the limit as the exponent approaches to the critical exponent in order to establish the existence of both one-sign and nodal ground state solutions. Another new method which works for these equations is perturbations. In [16] 4-Laplacian perturbations are led into these equations; then high energy solutions are obtained on bounded smooth domain. In this paper, the perturbation, combined with dual approach, is applied to search the existence of nontrivial solution and sequence of high energy solutions of (1) on the whole space . For simplicity we call this method the dual-perturbation method. We need the following several notations. Let be the collection of smooth functions with compact support. Let with the inner product and the norm Let the following assumption hold: satisfies and . Set with the inner product and the norm Then both and are Hilbert spaces. By the continuity of the for we know that, for each , there exists constant such that where denotes the -norm. In the following, we use or to denote various positive constants. Moreover, we need the following assumptions: there if and if such that uniformly in , there exist and such that for all and , where . By Lemma 3.4 in [17] we know that, under the assumption , the embedding is compact for each . Equation (1) is the Euler-Lagrange equation of the energy functional where . Due to the presence of the term , is not well defined in . 
To overcome this difficulty, a dual approach is used in [9, 10]. Following the idea from these papers, let be defined by on , and on . Then has the following properties: is uniquely defined function and invertible; for all ; for all ;;, ; for all and for all ; for all ; the function is strictly convex; there exists a positive such that there exist positive constants and such that for all ; for all ; for each , there exists such that . The properties have been proved in [811]. It suffices to prove . Indeed, by , , and , there exist and such that, for , and for , Since there exists a such that (see [10]), we can assume that . For , we have , and hence for , one has , and hence and for , there exist and such that and . Then we have Hence , where . After the change of variable, can be reduced to From [8, 9, 11] we know that if is a critical point of , that is, for all , then is a weak solution of (1). Particularly, if is a critical point of , then is a classical solution of (1). A sequence is called a Cerami sequence of if is bounded and in . We say that satisfies the Cerami condition if every Cerami sequence possesses a convergent subsequence. 2. Some Lemmas Consider the following perturbation functional defined by where . We have the following lemmas. Lemma 1. If assumptions , , and hold, then the functional is well defined on and . Proof. By conditions and , the properties , , , and imply that there exists such that Hence for all . By (26) and the continuity of the embedding (), Hence is well defined in . Now, we prove that . It suffices to prove that For any and , by the mean value theorem, (25) and -, we have The Hölder inequality implies that Hence, by the Lebesgue theorem, we have for all . Now, we show that , , are continuous. Indeed, if in , then in for all . On the space , we define the norm Then Moreover, on the space , we define the norm By (25), we have where and . Then Theorem A.4 in [18] implies as . If with and , one has Hence and hence as . Therefore, . Define with the norm . On the space , we define the norm On the space , we define the norm From in , one has and as . Since , by the following Lemma 2, we have If with and , one has Hence and hence as . Therefore, . This completes the proof. Lemma 2. Assume that , and Then, for every , , and the operator is continuous. Proof. Let be a smooth cut-off function such that for and for . Define We can assume that . Hence for all . Assume in . Then in and in . As in the proof of Lemma A.1 in [18], there exists a subsequence of and such that and for a.e. . Hence, from (51), one has a.e. on . It follows from the Lebesgue theorem that in . Consequently, in . Similarly, we can prove in . Since it follows that in . This completes the proof. Lemma 3. Let , , and hold. Then every bounded sequence with possesses a convergent subsequence. Proof. Since is bounded, then, by the compactness of the embedding (), passing to a subsequence, one has in , in for all , and for a.e. . By (25) as . Similarly, as . Hence, by the property of , we have where as . This shows that as . This completes the proof. The following Lemma 4 has been proved in [10] (see Proposition 2.1(3) in [10]). Lemma 4. If a.e. in and , then as . 3. Main Results Theorem 5. Assume conditions , hold. Let be such that . Let be a critical point of with for some constant independent of . Then, up to subsequence, one has in , and is a critical point of . Proof. 
By , for , there exists such that By , for ( is the constant appearing in condition ), we have where is the constant appearing in condition . Hence Since , there exists such that for all . Hence Since is a critical point of , for all . Consequently, taking , by and we have and hence for some constant independent of . By the boundedness of , there exists such that for all . Hence, by the Sobolev embedding theorem, one has Next, we prove that and , where the positive constant is independent of . Setting , , define , where is a smooth function satisfying for , ; for , and is decreasing in . This means that , for ; , for ; , for , where . Let ; then . By (61) . Hence where For , . By the properties of and , the mean value theorem implies Hence Consequently, Combining (67) and (68), we have For any , by and , there exists such that Combining (66), (72), and (73), one has By the Hölder inequality and (65), Moreover, HenceSince , . Set . Then Take such that . Since , . Hence, from (65), we have Since as , taking in (78) with , we have Set . Then Inductively, we have where   , and is convergent as . Let . Then as . Hence Let ; by (65), we have Hence, by and (85), we have By (63) we know that is bounded, and hence is bounded in . Up to subsequence, one has in , in for , and a.e. . Now, we show that is a critical point of . For any with , by (85), we know that . Take as the test function in (61); we have By , one has
Saturday, December 31, 2016 Necessary Existence Josh Rasmussen's and my Necessary Existence book is now complete. We just sent the final manuscript to Oxford. We're both quite happy with the book. Freezing a hard drive I had a hard drive that's around 15 years old fail to start a couple of months ago--I tried many times with no luck. Most but not all of the stuff was backed up, but not all (though what wasn't backed up wasn't very important). So yesterday I stuck the drive in a freezer, in two freezer bags without much air. Today I plugged it into an IDE-USB adapter. It didn't start up at first, but after a few minutes of warming it up, it started and I got all the data off without any difficulty.  This is the second time in my life that I've rescued data from an old hard drive using a freezer. (Of course, there is always the chance that this time it would have worked without the freezer. I didn't actually check yesterday if the drive was still not working.) Friday, December 30, 2016 Use 3D printer as a plotter/cutter My 3D printer is fun, but I like to extend functionality, so I designed some additional snap-on parts that lets me also use it as a pen plotter and cutter. For instance, I had it draw a butterfly coloring sheet on a blank T-shirt for our four-year-old to color with fabric markers. Here are instructions. Tuesday, December 27, 2016 Some weird languages Platonism would allow one to reduce the number of predicates to a single multigrade predicate Instantiates(x1, ..., xn, p), by introducing a name p for every property. The resulting language could have one fundamental quantifier ∃, one fundamental predicate Instantiates(x1, ..., xn, p), and lots of names. One could then introduce a “for a, which exists” existential quantifier ∃a in place of every name a, and get a language with one fundamental multigrade predicate, Instantiates(x1, ..., xn, p), and lots of fundamental quantifiers. In this language, we could say that Jim is tall as follows: ∃Jimx Instantiates(x, tallness). On the other hand, once we allow for a large plurality of quantifiers we could reduce the number of predicates to one in a different way by introducing a new n-ary existential quantifier ∃F(x1, …, xn) (with the corresponding ∀P defined by De Morgan duality) in place of each n-ary predicate F other than identity. The remaining fundamental predicate is identity. Then instead of saying F(a), one would say ∃Fx(x = a). One could then remove names from the language by introducing quantifiers for them as before. The resulting language would have many fundamental quantifiers, but only only one fundamental binary predicate, identity. In this language we would say that Jim is tall as follows: ∃JimxTally(x = y). We have two languages, in each of which there is one fundamental predicate and many quantifiers. In the Platonic language, the fundamental predicate is multigrade but the quantifiers are all unary. In the identity language, the fundamental predicate is binary but the quantifiers have many arities. And of course we have standard First Order Logic: one fundamental quantifier (say, ∃), many predicates and many names. We can then get rid of names by introducing an IsX(x) unary predicate for each name X. The resulting language has one quantifier and many predicates. So in our search for fundamental parsimony in our language we have a choice: • one quantifier and many predicates • one predicate and many quantifiers. Are these more parsimonious than many quantifiers and many predicates? 
I think so: for if there is only one quantifier or only one predicate, then we can collapse levels—to be a (fundamental) quantifier just is to be ∃ and to be a (fundamental) predicate just is to be Instantiates or identity. I wonder what metaphysical case one could make for some of these weird fundamental language proposals. Life science and physical science I've been thinking that in a nutshell one could put much of the distinctiveness of Aristotelian philosophy as follows: life science is at least as fundamental as physical science. Friday, December 23, 2016 Double Effect in daily life Wednesday, December 21, 2016 What is this? Consider the black item to the right here on your screen. Is it a token of the Latin alphabet letter pee, the Greek letter rho or the Cyrillic letter er? The question cannot be settled by asking which font, and where in the font, the glyph is taken from, because I drew the drawing in Inkscape rather than using any font, precisely to block such an answer. Nor will my intentions answer the question, since I drew the thing precisely to pose such a philosophical question rather than to express any one of the three options. There are two interesting questions here. The first is an ontological one. Is a token on screen something different from the pattern of light? If it's the same as the pattern of light, then there is at most one token, there being at most one relevant pattern of light (perhaps none, if our ontology doesn't include patterns of light), though this token is a token of pee, and a token of rho and a token of er. If a token is not identical with a pattern of light, then we might as well keep on multiplying entities, and say that there is a pattern of light and three tokens, of pee, rho and er, respectively, with the first entity constituting the latter three. The second one is a philosophy of language one. What determines whether or not the pattern of light is or constitutes a token of, say, rho? Is it my intentions? If so, then indeed we have tokens of pee, rho and er, as making these was my intention, but we do not have a token of the Coptic letter ro or a token of the letter qof in 15th century Italian Hebrew cursive, since I didn't think of these when I was doing the drawing. Is it the linguistic context? But then it's not a token of any letter, since a displayed png file in an analytic philosophy post is not a the kind of linguistic context that determines a token. Or is it that the pattern of light is or constitutes tokens of all the letters it geometrically matches, whether or not it was intended as such? If so, then we also have a letter dee (just turn your screen). But now suppose a new alphabet is created, and it contains a letter that looks just like the drawing. It would be odd to say that if a new language were created on another planet this instantly would multiply the entities on earth (at the speed of light? faster?). So it seems that on this view, we should say that the pattern of light is or constitutes tokens of all the letters in all the alphabets that will ever exist. But future actions shouldn't affect how many things there now are. So on this view, we should be even more pluralistic: the pattern of light is or constitutes tokens of all the letters in all possible alphabets. We thus have two questions: one about ontology and one about what is being tokened. Both questions have parsimonious and profligate answers. The parsimonious answer to the ontology question is that there is one thing, which can be a token of multiple things. 
The profligate one is that we have many tokens. The parsimonious answers to the language question are that intentions and/or context determines what's been tokened. The profligate answer has an infinite amount of tokening. We probably shouldn't combine the two profligate answers. For then on your screen there are infinitely many physical things, all co-located (and some perhaps even with the same modal profile). That's too much. That still leaves three combinations. I think there is reason to reject the combination of ontological profligacy with parsimony on the philosophy of language side. The reason is that tokens get repurposed. Consider a Russian who has a Scrabble set and loses an er tile. She then buys a replacement pee tile, as it looks pretty much the same (I looked at online pictures--both have value 1 and look the same). Then it seems that a new entity, a token of er, comes into existence if we have ontological profligacy and linguistic parsimony. Does a mere intention to use the tile for an er what magically creates a new physical object, a token? That seems not very plausible. That leaves two combinations: • ontological and linguistic parsimony • ontological parsimony and linguistic profligacy. Tuesday, December 20, 2016 Bestowing harms and benefits A virtuous person happily confers justified benefits and unhappily bestows even justified harms. Moreover, it is not just that the virtuous person is happy about someone being benefitted and unhappy about someone being harmed, though she does have those attitudes. Rather, the virtuous person is happy to be the conferrer of justified benefits and unhappy to be the bestower even of justified harms. These attitudes on the part of the virtuous person are evidence that it is non-instrumentally good for one to confer justified benefits and non-instrumentally bad for one to bestow even justified harms. Of course, the bestowal of justified harms can be virtuous, and virtuous action is non-instrumentally good for one. But an action can be good for one qua virtuous and bad for one in another way—cases of self-sacrifice are like that. Virtuously bestowing justified harms is a case of self-sacrifice on the part of the virtuous agent. When multiple agents are necessary and voluntary causes of a single harm, the total bad of being a bestower of harm is not significantly diluted between the agents. Each agent non-instrumentally suffers from the total bad of bestowing harm, though the contingent psychological effects may—but need not—be diluted. (A thought experiment: One person hits a criminal in an instance of morally justified and legally sentenced corporal punishment while the other holds down the punishee. Both agents are equally responsible. It makes no difference to the badness of being the imposer of corporal punishment if instead of the other holding down the punishee, the punishee is simply tied down. Interestingly, one may have a different intuition on the other side—it might seem worse to hold down the punishee to be hit by a robot than by a person. But that’s a mistake.) If this is right, then we have a non-instrumental reason to reduce the number of people involved in the justified imposition of a harm, though in particular cases there may also be reasons, instrumental and otherwise, to increase the number of people involved (e.g., a larger number of people involved in punishing may better convey societal disapprovat). 
This in turn gives a non-instrumental reason to develop autonomous fighting robots for the military, since the use of such robots decreases the number of people who are non-instrumentally (as well as psychologically) harmed by killing. Of course, there are obvious serious practical problems there. Monday, December 19, 2016 Intending material conditionals and dispositions, with an excursus on lethally-armed robots If Bob’s intention is (1), then I think he’s no different from Alice. But Bob’s intention could simply be (2), whereas Alice’s intention couldn’t simply be to dissuade the thief, since if that were simply her intention, she wouldn’t have fired. (Note: the promise to shoot to kill is not morally binding.) Rather, when offering the threat, Alice intended to dissuade and shoot to kill as a backup, and then when she shot in fulfillment of the threat, she intended to kill. If Bob’s intention is simply (2), then Bob may be guilty of some variety of endangerment, but he’s not a murderer. I am inclined to think this can be true even if Bob trained the crocodiles to be man-eaters (in which case it becomes much clearer that he’s guilty of a variety of endangerment). But let’s think a bit more about (2). The means to dissuading thieves is to put the shed in a place where there are crocodiles with a disposition to eat intruders. So Bob is also intending something like this: 1. There be a dispositional state of affairs where any thieves (and maybe other intruders) tend to die. However, in intending this dispositional state of affairs, Bob need not be intending the disposition’s actuation. He can simply intend the dispositional state of affairs to function not by actuation but by dissuasion. Moreover, if the thief dies, that’s not an accomplishment of Bob’s. On the other hand, if Bob intended the universal conditional 1. All thieves die or even: 1. Most thieves die then he would be accomplishing the deaths of thieves if any were eaten. Thus there is a difference between the logically complex intention that (4) or (5) be true, and the intention that there be a dispositional state of affairs to the effect of (4) or (5). This would seem to be the case even if the dispositional state of affairs entailed (4) or (5). Here’s why there is such a difference. If many thieves come and none die, then that constitutes or grounds the falsity of (4) and (5). But it does not constitute or ground the falsity of (3), and that would be true even if it entailed the falsity of (3). This line of thought, though, has a curious consequence. Automated lethally-armed guard robots are in principle preferable to human lethally-armed guards. For the human guard either has a policy of killing if the threat doesn’t stop the intruder or has a policy of deceiving the intruder that she has such a policy. Deception is morally problematic and a policy of intending to kill is morally problematic. On the other hand, with the robotic lethally-armed guards, nobody needs to deceive and nobody needs to have a policy of killing under any circumstances. All that’s needed is the intending of a dispositional state of affairs. This seems preferable even in circumstances—say, wartime—where intentional killing is permissible, since it is surely better to avoid intentional killing. But isn’t it paradoxical to think there is a moral difference between setting up a human guard and a robotic guard? Yet a lethally-armed robotic guard doesn’t seem significantly different from locating the guarded location on a deadly crocodile farm. 
So if we think there is no moral difference here, then we have to say that there is no difference between Alice’s policy of shooting intruders dead and Bob’s setup. I think the moral difference between the human guard and the robotic guard can be defended. Think about it this way. In the case of the robotic guard, we can say that the death of the intruder is simply up to the intruder, whereas the human guard would still have to make a decision to go with the lethal policy in response to the intruder’s decision not to comply with the threat. The human guard could say “It’s on the intruder’s head” or “I had no choice—I had a policy”, but these are simply false: both she and the intruder had a choice. None of this should be construed as a defence in practice of autonomous lethal robots. There are obvious practical worries about false positives, malfunctions, misuse and lowering the bar to a country’s initiating lethal hostilities. Friday, December 16, 2016 The sharpness of the Platonic realm I feel an intellectual pull to a view that also repels me. The view is that all contingent vague truths are grounded in contingent definite truths and necessary vague truths. For instance, that Jim is bald might be grounded in a contingent definite truth about the areal density of hair on his scalp and a necessary vague truth that anyone with that areal density of hair is bald. On this view, any vague differences between possible worlds are grounded in definite differences between possible worlds. But the view also repels me. I have the Platonic intuition that the realm of necessary truth should be clean, unchanging, sharp and definite. Plato would be very surprised to think that fuzziness in the physical world is grounded in fuzziness in the Platonic realm. Epistemicism, of course, nicely reconciles the Platonic intuition about necessary truths with the intellectual pull of the grounding claim. For it is no surprise that there be things in the Platonic realm that are not accessible to us. If vagueness is merely epistemic, then there is no difficulty about vagueness in the Platonic realm. Wednesday, December 14, 2016 Knowledge of vague truths Suppose that we know in lottery cases—i.e., if a lottery has enough tickets and one winner, then we know ahead of time that we won’t win. I know it’s fashionable to deny such knowledge, but such denial leads either to scepticism or to having to say things like “I agree that I have better evidence for p than for q, but I know q and I don’t know p” (after all, if a lottery has enough tickets, I can have better evidence that I won’t win than that I have two hands). Suppose also that classical logic holds even in vagueness cases. This is now a mainstream assumption in the vagueness literature, I understand. Finally, suppose that once the number of tickets in a lottery reaches about a thousand, I know I won’t win. (The example can be modified if a larger number is needed.) Now for each positive natural number n, let Tn be the proposition that a person whose height is n microns is tall but a person whose height is n−1 is not tall. At most one of the Tn propositions is true, since anybody taller than a tall person is tall, and anybody shorter than a non-tall person is short. Moreover, since there is a non-tall person and there is a tall person, classical logic requires that at least one of the Tn is true. Hence, exactly one of the Tn is true. Now, some of the Tn are definitely false. 
For instance, T1000000 is definitely false (since someone a meter tall is definitely not tall) and T2000000 is definitely false (since someone a micron short of two meters tall is definitely tall). But if anything is vague, it will be vague where exactly the cut-off between non-tall and tall lies. And if that is vague, then in the vague area between non-tall and tall, it will be vague whether Tn is true. That vague area is at least a millimeter long (in fact, it’s probably at least five centimeters long), and since there are a thousand microns to the millimeter, there will be at least a thousand values n such that Tn is vague. Moreover, these thousand Tn are pretty much epistemically on par. Let n be any number within that vague range, and suppose that in fact Tn is false. Then this is a lottery case with at least a thousand tickets. So, if in the lottery case I know I didn’t win, in this case I know that Tn is false. Hence, some vague truths can be known—assuming that we know in lottery cases and that classical logic holds. Of course, as usual, some philosophers will want to reverse the argument, and take this to be another argument that we don’t know in lottery cases, or that classical logic doesn’t hold, or that there is no vagueness. Doing things fast I was thinking about deadlines--papers and exams to grade--and realized that doing things fast is a similar kind of challenge to making things small. Companies try to fit phone electronics into as spatially thin a region of spacetime as possible, while runners try to fit a run of a particular distance into as temporally thin a region of spacetime as possible. (And while sometimes small spatial and temporal size has "utilitarian value", as in the case of getting my grades in, in the phone and running cases, the reasons are mainly of the aesthetic variety.) Tuesday, December 13, 2016 Vague propositions Suppose Jim says, in English, “2+2=4”. Then: 1. What Jim said is such that it is contigent that it is true, because it is contingent that “4” means four rather than five 1. What Jim said is a necessary truth, because it cannot but be true that 2+2=4. Here the apparent contradiction is resolved by disambiguating “what Jim said” between the uttered sounds and the expressed meaning. But when talking about vagueness, this straightfoward point can be a bit less clear. Suppose that it’s vaguely true that “4” in Jim’s dialect means four, rather than five, and Jim says “2+2=4” (and suppose that all the other relevant stuff is definite). Then: 1. What Jim said is vaguely true, because it’s vaguely true that “4” is four. 2. What Jim said is not vaguely true, because what Jim said is definitely true or definitely false, depending on what “4” means. Again, make the same move as in (1)-(2): in (3), “what Jim says” is the uttered sounds or words and in (4) it’s the proposition. This line of thought suggests one of two possibilities. Either, propositions are never vague, or there are two interestingly different kinds of vagueness. If propositions are never vague, then in the proposition sense of “what was said” it is never correct to say that what was said is vague. That’s a bit counterintuitive, but some counterintuitive things are true. But if some propositions are vague, then it seems that we have two interestingly different kinds of vagueness an utterance could suffer from. 
It could be vague which non-vague proposition an utterance expresses or it could be definite which vague proposition an utterance expresses—or one could have combinations, as when it’s vague which vague proposition is expressed. In the case above, I claimed that it was vaguely the case that Jim expressed the non-vague proposition that 2+2=4. But presumably if there are vague propositions, there will be one that has the kind of vagueness that makes the non-vague propositions that 2+2=4 and that 2+2=5 be its admissible precifications. So now we would have this interesting question: What determines whether Jim’s case was a case of vaguely expressing a non-vague proposition or non-vaguely expressing a vague proposition or some combination? Maybe there is a good answer to this question, but I have some doubts. In light of these doubts, I think that the proponent of vague and non-vague propositions should say is something like this. There are at least three senses of “what was said”: the sounds or words (and that makes for two, but I won’t be interested in this distinction in this post), the non-vague proposition and the vague proposition. What Jim said is vaguely true in the first and third sense, but not in the second. This is sufficiently complicated that one might prefer to go back to the less intuitive option, that in the proposition sense “what was said” is never vague. I am dreadfully confused. Monday, December 12, 2016 Actions that are gravely wrong for qualitative reasons Some types of wrongdoing vary in degree of seriousness from minor to grave. Stealing a dollar from a billionaire is trivially wrong while stealing a thousand dollars from someone poor is gravely wrong. A poke in the back with a finger and breaking someone’s leg with a carefully executed kick can both be instances of battery, but the former is likely to be a minor wrong while the latter is apt to be grave. On the other hand, there are types of wrongdoing that are always grave. An uninteresting (for my purposes) case is where the gravity is guaranteed because the description of wrongdoing includes a grave-making quantitative feature as in the case of “grand theft” or “grevious bodily harm”. The more interesting case is where for qualitative reasons the wrongdoing is always grave. For instance, murder and rape. There are no trivial murders or minor rapes. Of course, even if a type of act is always seriously wrong, the degree of culpability might be slight, say due to lack of freedom or invincible ignorance. Think of someone brainwashed into murder, but who still has a slight sense of moral discomfort—although her action is gravely wrong, she may be only slightly culpable. My interest right now, however, is in the degree of wrongness rather than of culpability. We can now distinguish types of wrongdoing that are always grave for qualitative reasons from those that are always grave merely for quantitative reasons. Here is a fairly precise characterization: if W is a type of wrongdoing that is always grave for qualitative reasons, then there is no sequence of acts, starting with a case of W, and with merely quantitative differences between the acts, such that the sequence ends with an act that isn’t grave. Grand theft and grevious bodily harm are examples of types of wrongdoings that are always grave merely for quantitative reasons. On the other hand, it is intuitively plausible that murder and rape are not gravely wrong for merely quantitative reasons. 
If this intuition is correct, then we get some very interesting substantive consequences. In the case of rape, I’ve explored some relevant issues in a past post, so I want to focus on murder here. The first consequence of taking murder to be always gravely wrong for qualitative reasons is that there is no continuous scale of mental abilities (whether of first or second potentiality) that takes us from people to lower animals. An unjustified killing of a lower animal is only a minor wrong (take this to constrain what “lower” means). If there were a continuous scale of mental abilities from people to lower animals, then murder would be gravely wrong only for quantitative reasons: because the victim’s mental abilities lie on such-and-such a position on the scale. So once we admit that murder is gravely wrong for qualitative reasons, we have to suppose a qualitative gap in the spectrum of mental abilities. This probably requires the rejection of naturalism. A second consequence is that if killing a consenting adult in normal health is murder—which it is—then euthanasia is gravely wrong. For variation in health and comfort is merely quantitative, and one cannot go from a case of murder to something that isn’t gravely wrong by merely quantitative variation, since murder is always gravely wrong for qualitative reasons. I suspect there are a number of other very interesting consequences of taking murder to be gravely wrong for qualitative reasons. I think these consequences will motivate some people to give up on the claim that murder is gravely wrong for qualitative reasons. But I think we should hold on to that claim and accept the consequences. Friday, December 9, 2016 Love and reasons Humans are fundamentally loving beings. This is more fundamental than their being rational, because the nature of reasons, and hence of rationality, is to be accounted for in terms of the nature of love. A sketchy approximation to a love-based account of external reasons is this: • A fact F is an external reason for ϕing if and only if F partially grounds ϕing being in some respect loving towards something or someone or not ϕing being in some respect unloving towards something or someone. • A plurality of facts is a conclusive external reason for ϕing if and only if the plurality grounds its being unloving not to ϕ. If I am right that love has the three fundamental aspects of benevolence, appreciation and union, these probably also provide the three basic kinds of reasons. There are reasons to do good and to prevent bad: these come from the benevolence aspect. There are reasons to, e.g., admire and be grateful that come from appreciation. Interestingly, I think appreciation also provides reasons for things like criticism and punishment. In criticism and punishment we appreciate someone or something qua someone or something that ought to do better: we appreciate nature over actual activity. And finally there is union, which needs to be appropriate to the love (I develop this at greater length in One Body). Internal reasons are occurrent beliefs that are in some sense about what there is external reason to do and that enter into the right way into choice. These beliefs come in a broad variety, and are not always explicitly about reasons as such. Tuesday, December 6, 2016 3D-printable cookie cutters with Inkscape and OpenSCAD We thought that our 4-year-old would enjoy a Pikachu cookie cutter for Christmas, but I didn't like the existing designs on Thingiverse. 
So I wrote a Python script, eventually packaged into an Inkscape extension, that generates a 3D-printable OpenSCAD file from a color-coded SVG path file. Instructions are here. Monday, December 5, 2016 A Trinitarian structure in love In One Body, I identified three crucial aspects in every form of love: benevolence, appreciation and unity. But I did not have an argument that there are no further equally central aspects. I still don’t. But I now have some suggestive evidence: There is a Trinitarian structure to these three aspects. The Father eternally conferring his divine nature—the nature of being the Good Itself—on the Son and, through the Son, on the Holy Spirit. The Son in turn eternally and gratefully contemplates the Father. And the Holy Spirit joins Father with Son. This makes for benevolence, appreciation and unity, respectively, all perichoretically interconnected. That there are only three Persons in the most blessed Trinity is thus evidence that these three aspects are what love is at base. Wednesday, November 30, 2016 No-collapse interpretations without a dynamically evolving wavefunction in reality Bohm’s interpretation of quantum mechanics has two ontological components: It has the guiding wave—the wavefunction—which dynamically evolves according to the Schrödinger equation, and it has the corpuscles whose movements are guided by that wavefunction. Brown and Wallace criticize Bohm for this duality, on the grounds that there is no reason to take our macroscopic reality to be connected with the corpuscles rather than the wavefunction. I want to explore a variant of Bohm on which there is no evolving wavefunction, and then generalize the point to a number of other no-collapse interpretations. So, on Bohm’s quantum mechanics, reality at a time t is represented by two things: (a) a wavefunction vector |ψ(t)⟩ in the Hilbert space, and (b) an assignment of values to hidden variables (e.g., corpuscle positions). The first item evolves according to the Schrödinger equation. Given an initial vector |ψ(0)⟩, the vector at time t can be mathematically given as |ψ(t)⟩ = Ut|ψ(0)⟩ where Ut is a mathematical time-evolution operator (dependent on the Hamiltonian). And then by a law of nature, the hidden variables evolve according to a differential equation—the guiding equation—that involves |ψ(t)⟩. But now suppose we change the ontology. We keep the assignment of values to hidden variables at times. But instead of supposing that reality has something corresponding to the wavefunction vector at every time, we merely suppose that reality has something corresponding to an initial wavefunction vector |ψ0⟩. There is nothing in physical reality corresponding to the wavefunction at t if t > 0. But nonetheless it makes mathematical sense to talk of the vector Ut|ψ0⟩, and then the guiding equation governing the evolution of the hidden variables can be formulated in terms of Ut|ψ0⟩ instead of |ψ(t)⟩. If we want an ontology to go with this, we could say that the reality corresponding to the initial vector |ψ0⟩ affects the evolution of the hidden variables for all subsequent times. We now have only one aspect of reality—the hidden variables of the corpuscles—evolving dynamically instead of two. We don’t have Schrödinger’s equation in the laws of nature except as a useful mathematical property of the Ut operator described by the initial vector). 
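To make the purely mathematical role of Ut|ψ0⟩ concrete, here is a minimal numerical sketch of the contrast between the two pictures. It is my own toy example, not anything drawn from Bohm or from the interpretations under discussion: the grid, the harmonic potential and all parameters are made up for illustration. The point is only that whatever a guiding equation needs at time t can be computed on demand from the initial vector and the operator Ut, with no stored, dynamically evolving wavefunction.

```python
import numpy as np
from scipy.linalg import expm

# Toy 1D system on a grid (all parameters made up): H = p^2/2 + V(x), with hbar = m = 1.
n = 200
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)              # harmonic well, purely for illustration

psi0 = np.exp(-(x - 2.0) ** 2).astype(complex)    # the initial vector |psi_0>
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

t, steps = 1.3, 1000

# Picture A: a stored wavefunction that "dynamically evolves", updated step by step
# with the short-time propagator (the Schrodinger equation as a law of evolution).
U_dt = expm(-1j * H * (t / steps))
psi_t = psi0.copy()
for _ in range(steps):
    psi_t = U_dt @ psi_t

# Picture B: no evolving wavefunction in the ontology -- only the initial vector and
# the mathematical operator U_t = exp(-iHt), applied on demand at whatever time is needed.
psi_t_from_initial = expm(-1j * H * t) @ psi0

print(np.max(np.abs(psi_t - psi_t_from_initial)))  # the two pictures agree to machine precision
```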
We can talk of the wavefunction Ut|ψ0⟩ at a time t, but that’s just a mathematical artifact, just as m1m2 is a part of the equation expressing Newton’s law of gravitation rather than a direct representation of physical reality. Of course, just as m1m2 is determined by physical things—the two masses—so too the wavefunction Ut|ψ0⟩ is determined by physical reality (the initial vector, the time, and the Hamiltonian).

This seems to me to weaken the force of the Brown and Wallace point, since there no longer is a reality corresponding to the wavefunction at non-initial times, except highly derivatively.

Interestingly, the exact same move can be made for a number of other no-collapse interpretations, such as Bell’s indeterministic variant of Bohm, other modal interpretations, the many-minds interpretation, the traveling minds interpretation and the Aristotelian traveling forms interpretation. There need be no time-evolving wavefunction in reality, but just an initial vector which transtemporally affects the evolution of the other aspects of reality (such as where the minds go). Or one could suppose a static background vector.

It’s interesting to ask what happens if one plugs this into the Everett interpretation. There I think we get something rather implausible: for then all time-evolution will disappear, since all reality will be reduced to the physical correlate of the initial vector. So my move above is only plausible for those no-collapse interpretations on which there is something more beyond the wavefunction.

There is also a connection between this approach and the Heisenberg picture. How close the connection is is not yet clear to me.

Material conditionals and quantifiers

From:

1. Every G is H

it seems we should be able to infer, for any x:

2. If x is G, then x is H.

This pretty much forces one to read “If p, then q” as a material conditional, i.e., as q or not p. For the objection to reading the indicative conditional as a material conditional is that this leads to the paradoxes of material implication, such as that if it’s not snowing in Fairbanks, Alaska today, then it’s correct to say:

3. If it’s snowing in Fairbanks today, then it’s snowing in Mexico City today

even if it’s not snowing in Mexico City, which just sounds wrong.

But if we grant the inference from (1) to (2), we can pretty much recover the paradoxes of material implication. For instance, suppose it’s snowing neither in Fairbanks nor in Mexico City today. Then:

4. Every truth value of the proposition that it’s snowing in Fairbanks today is a truth value of the proposition that it’s snowing in Mexico City today.

So, by the (1)→(2) inference:

5. If a truth value of the proposition that it’s snowing today in Fairbanks is true, then a truth value of the proposition that it’s snowing today in Mexico City is true.

Or, a little more smoothly:

6. If it’s true that it’s snowing in Fairbanks today, then it’s true that it’s snowing in Mexico City today.

It would be very hard to accept (6) without accepting (3).

With a bit of work, we can tell similar stories about the other standard paradoxes. The above truth-value-quantification technique works equally well for both the true⊃true and the false⊃false paradoxes. The remaining family of paradoxes are the false⊃true ones. For instance, it’s paradoxical to say:

7. If it’s warm in the Antarctic today, it’s a cool day in Waco today

even though the antecedent is false and the consequent is true, so the corresponding material conditional is true. But now:
8. Every day that’s other than today or on which it’s warm in the Antarctic is a day that’s other than today or on which it’s cool in Waco.

So by (1)→(2):

9. If today is other than today or it’s warm in the Antarctic today, then today is other than today or today it’s cool in Waco.

And it would be hard to accept (9) without accepting (7). (I made the example a bit more complicated than it might technically need to be in order not to have a case of (1) where there are no Gs. One might think for Aristotelian logic reasons that that case stands apart.)

This suggests that if we object to the “material conditional” reading of “If… then…”, we should object to the “material quantification” reading of “Every G is H”. But many object to the first who do not object to the second.

Monday, November 28, 2016

Are we all seriously impaired?

When I taught calculus, the average grade on the final exam was around 55%. One could make the case that this means that our grading system is off: that everybody’s grades should be way higher. But I suspect that’s mistaken. The average grasp of calculus in my students probably really wasn’t good enough for one to be able to say with a straight face that they “knew calculus”. Now, I think I was a pretty rotten calculus teacher. But such grades are not at all unusual in calculus classes. And if one didn’t have the pre-selection that colleges have, but simply taught calculus to everybody, the grades would be even lower. Yet much of calculus is pretty straightforward. Differential calculus is just a matter of ploughing through and following simple rules. Integral calculus is definitely harder, and excelling at it requires real creativity, but one can presumably do decently just by internalizing a number of heuristics and using trial and error.

I find myself with the feeling that a normal adult human being should be able to do calculus, understand basic Newtonian physics, write a well-argued essay, deal well with emotions, avoid basic formal and informal fallacies, sing decently, have a good marriage, etc. But I doubt that the average adult human being can learn all these things even with excellent teachers. Certainly the time investment would be prohibitive.

There are two things one can say about this feeling. The first is that the feeling is simply mistaken. We’re all apes. A 55% grade in calculus from an ape is incredible. The kind of logical reasoning that an average person can demonstrate in an essay is super-impressive for an ape. There is little wrong with average people intellectually. Maybe the average human can’t practically learn calculus, but if so that’s no more problematic than the facts that the average human can’t practically learn to climb a 5.14 or run a four-minute mile. These things are benchmarks of human excellence rather than of human normalcy.

That may in fact be the right thing to say. But I want to explore another possibility: the possibility that the feeling is right. If it is right, then all of us fall seriously short of what normal human beings should be able to do. We are all seriously impaired.

How could that be? We are, after all, descendants of apes, and the average human being is, as far as we can tell, an order of magnitude intellectually ahead of the best non-human apes we know. Should the standards be another order of magnitude ahead of that? I don’t think there is a plausible naturalistic story that would do justice to the feeling that the average human falls that far short of where humans should be at.
But the Christian doctrine of the Fall allows for a story to be told here. Perhaps God miraculously intervened just before the first humans were conceived, and ensured that these creatures would be significantly genetically different from their non-human parents: they would have capacities enabling them to do calculus, understand Newtonian physics, write a well-argued essay, deal well with emotions, avoid fallacies, sing decently, have a good marriage, etc. (At least once calculus, physics and writing are invented.) But then the first humans misused their new genetic gifts, and many of them were taken away, so that now only statistically exceptional humans have many of these capacities, and none have them all. And so we have more geneticaly in common with our ape forebears than would have been the case if the first humans acted better. However, in addition to genetics, on this story, there is the human nature, which is a metaphysical component of human beings defining what is and what is not normal for humans. And this human nature specifies that the capacities in question are in fact a part of human normalcy, so that we are all objectively seriously impaired. Of course, this isn’t the only way to read the Fall. Another way—which one can connect in the text of Genesis with the Tree of Life—is that the first humans had special gifts, but these gifts were due to miracles beyond human nature. This may in fact be the better reading of the story of the Fall, but I want to continue exploring the first reading. If this is right, then we have an interesting choice-point for philosophy of disability. One option will be to hold that everyone is disabled. If we take this option then for policy reasons (e.g., disability accommodation) we will need a more gerrymandered concept than disability, say disability*, such that only a minority (or at least not an overwhelming majority) is disabled*. This concept will no doubt have a lot of social construction going into it, and objective impairment will be at best a necessary condition for disability*. The second option is to say only a minority (or not an overwhelming majority) is disabled, which requires disability to differ significantly from impairment. Again, I suspect that the concept will have a lot of social construction in it. So, either way, if we accept the story that we are all seriously impaired, for policy reasons we will need a disability-related concept with a lot more social construction in it. Should we accept the story that we are all seriously impaired? I think there really is an intuition that we should do many things that we can’t, and that intuition is evidence for the story. But far from conclusive. Still, maybe we are all seriously impaired, in multiple intellectual dimensions. We may even be all physically impaired. Monday, November 21, 2016 The identity of countries and persons Suppose Canada is dissolved, and a country is created, with the same people, in the same place, with the same name, symbols, and political system. Moreover, the new country isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then very little has been lost, even if it turns out that on the correct metaphysics of countries the new country is a mere replica of Canada. On the other hand, suppose Jean Vanier is dissolved, and a new person is created, with the same matter and shape, in the same place, with the same name, apparent memories and character. 
Moreover, the new person isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then if on the correct metaphysics of persons the new person is a mere replica of Jean Vanier, much has been lost, even if Vanier’s loving contributions continue through the new person. This suggests an interesting asymmetry between social entities and persons. For social entities, the causal connections and qualitative and material similarities across time matter much more than identity itself. For persons, the identity itself matters at least as much as these connections and similarities. Perhaps the explanation of this fact is that for social entities there is nothing more to the entity than the persons and relationships caught up in them, while for persons there is something more than temporal parts and their relationships. Friday, November 18, 2016 An Aristotelian picture of set theory There are some sets we need just because of the fundamental axioms of set theory, whatever these are (ZF? ZFC?). Probably, we could satisfy the fundamental axioms of set theory with a collection of sets that in some sense is countable. But then we need to add some sets because the world is arranged thus and so. For instance, we may need to add a real number representing the exact distance between my thumbs in Planck units. (If the world is describable as a vector in a separable Hilbert space, all we need to add can be encoded as a single real number.) This is a very Aristotelian paper: the sets are an abstraction from the concrete reality of the world. On this Aristotelian picture, what sets exist might well have been different had I wiggled my thumb. Perhaps, then, some of the non-fundamental axioms of set theory are contingent. Thursday, November 17, 2016 Against isotropy We think of Euclidean space as isotropic: any two points in space are exactly alike both intrinsically and relationally, and if we rotated or translated space, the only changes would be to the bare numerical identities to the points—qualitatively everything would stay the same, both at the level of individual points and of larger structures. But our standard mathematical models of Euclidean space are not like that. For instance, we model Euclidean space on the set of triples (x, y, z) of real numbers. But that model is far from isotropy. For instance, some points, like (2, 2, 2) have the property that all three of their coordinates are the same, while others like (2, 3, 2) have the property that they have exactly two coordinates that are the same, and yet others like (3, 1, 2) have the property that their coordinates are all different. Even in one-dimension, say that of time, when we represent the dimension by real numbers we do not have isotropy. For instance, if we start with the standard set-theoretic construction of the natural numbers as 0 = ⌀, 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ... and ensure that the natural numbers are a subset of the reals, then 0 will be qualitatively very different from, say, 3. For instance, 0 has no members, while 3 has three members. (Perhaps, though, we do not embed the set-theoretic natural numbers into the reals, but make all reals—including those that are natural—into Dedekind cuts. But we will still have qualitative differences, just buried more deeply.) The way we handle this in practice is that we ignore the mathematical structure that is incompatible with isotropy. 
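For what it is worth, the von Neumann construction just mentioned is easy to exhibit in executable form. The following is only a toy sketch (Python frozensets standing in for pure sets; nothing here is specific to the isotropy issue), and it displays the qualitative asymmetry noted above: 0 has no members, while 3 has three.

```python
# Von Neumann naturals: 0 = {}, n+1 = n U {n}, built here with frozensets.
def ordinal(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset({s})   # successor step: add the set built so far as a member
    return s

zero, three = ordinal(0), ordinal(3)
print(len(zero), len(three))                      # 0 3  -- qualitatively different
print(ordinal(1) in three, ordinal(2) in three)   # True True -- "3" literally contains 0, 1 and 2
```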
We treat the Cartesian coordinate structure of Euclidean space as a mere aid to computation, while the set-theoretic construction of the natural numbers is ignored completely. Imagine the look of incomprehension we’d get from a scientist if one said something like: “At a time t2, the system behaved thus-and-so, because at a time t1 that is a proper subset of t2, it was arranged thus-and-so.” Times, even when represented mathematically as real numbers, just don’t seem the sort of thing to stand in subset relations. But on the Dedekind-cut construction of real numbers, an earlier time is indeed a proper subset of a later time. But perhaps there is something to learn from the fact that our best mathematical models of isotropic space and time themselves lack true isotropy. Perhaps true isotropy cannot be achieved. And if so, that might be relevant to solving some problems. First, probabilities. If a particle is on a line, and I have no further information about it except that the line is truly isotropic, so should my probabilities for the particle’s position be. But that cannot be coherently modeled in classical (countably additive and normalized) probabilities. This is just one of many, many puzzles involving isotropy. Well, perhaps there is no isotropy. Perhaps points differ qualitatively. These differences may not be important to the laws of nature, but they may be important to the initial conditions. Perhaps, for instance, nature prefers the particles to start out at coordinates that are natural numbers. Second, the Principle of Sufficient Reason. Leibniz argued against the substantiality of space on the grounds that there could be no explanation of why things are where they are rather than being shifted or rotated by some distance. But that assumed real isotropy. But if there is deep anisotropy, there could well be reasons for why things are where they are. Perhaps, for instance, there is a God who likes to put particles at coordinates whose binary digits encode his favorite poems. Of course, one can get out of Leibniz’s own problem by supposing with him that space is relational. But if the relation that constitutes space is metric, then the problem of shifts and rotations can be replaced by a problem of dilation—why aren’t objects all 2.7 times as far apart as they are? Again, that problem assumes that there isn’t a deep qualitative structure underneath numbers. Wednesday, November 16, 2016 Universal countable numerosity: A hypothesis worth taking seriously? Here’s a curious tale about sets and possible worlds: What sets there are varies between metaphysically possible worlds and for any possible world w1, the sets at w1 satisfy the full ZFC axioms and there is also a possible world w2 at which there exists a set S such that: 1. At w2, there is a bijection of S onto the natural numbers (i.e., a function that is one-to-one and whose range is all of the natural numbers). 2. The members of S are precisely the sets that exist at w1. Suppose that this tale is true. Then assume S5 and this further principle: 1. If two sets A and B are such that possibly there is a bijection between them, then they have the same numerosity. (Here I distinguish between “numerosity” and “cardinality”: to have the same cardinality, they need to actually have a bijection.) Then: 1. Necessarily, all infinite sets have the same numerosity, and in particular necessarily all infinite sets have the same numerosity as the set of natural numbers. 
For if A and B are infinite sets in w1, then at w2 they are subsets of the countable-at-w2 set S, and hence at w2 they have a bijection with the naturals, and so by (3) they have the same numerosity. Given the tale, there is then an intuitive sense in which all infinite sets are the same size. But it gets more fun than that. Add this principle: 1. If two pluralities are such that possibly there is a bijection between them, then the two pluralities have the same numerosity. (Here, a bijection between the xs and the ys is a binary relation R such that each of the xs stands in R to a unique one of the ys, and vice versa.) Then: 1. Necessarily, the plurality of sets has the same numerosity as the plurality of natural numbers. For if the xs are the plurality of sets of w1, then there will be a world w2 and a countable-at-w2 set S such that the xs are all and only the members of S. Hence, there will be a bijection between the xs and the natural numbers at w2, and hence at w1 they will have the same numerosity by (5). So if my curious tale is true, not only does each infinite set have the same numerosity, but the plurality of sets has the same numerosity as each of these infinite sets. We can now say that a set or plurality has countable numerosity provided that it is either finite or has the same numerosity as the naturals. Then the conclusion of the tale is that each set (finite and infinite), as well as the plurality of sets, has countable numerosity. I.e., universal countable numerosity. But hasn’t Cantor proved this is all false? Not at all. Cantor proved that this is false if we put “cardinality” in place of “numerosity”, where cardinality is defined in terms of actual bijections while numerosity is defined in terms of possible bijections. And I think that possible bijections are a better way to get at the intuitive concept of the count of members. Still, is my curious tale mathematically consistent? I think nobody knows. Will Brian, a colleague in the Mathematics Department, sent me a nice proof which, assuming my interpretation of its claims is correct, shows that if ZFC + “there is an inaccessible cardinal” is consistent, then so is my tale. And we have no reason to doubt that ZFC + “there is an inaccessible cardinal” is consistent. So we have no reason to doubt the consistency of the tale. As for its truth, that's a different matter. One philosophically deep question is whether there could in fact be so much variation as to what the sets are in different metaphysically possible worlds. Monday, November 14, 2016 From a principle about looking down on people to some controversial consequences It’s wrong to look down on people simply for having physical or intellectual disabilities. But it doesn’t seem wrong to look down on, say, someone who has devoted her life to the pursuit of money above all else. Where is the line to be drawn? Whom is it permissible for people to look down on? Before answering that question, I need to qualify it. I think that a plausible case can be made that it is not permissible for us to look down on anyone. 
The reason for that is that (a) we have all failed morally in many ways, (b) we would very likely have failed in many more had we been in certain other circumstances that we are lucky not to have been in, and (c) we are not epistemically in a position to judge that a specific other person’s failures are morally worse than our own would likely be in circumstances that it is only our luck (or divine providence) not to be in, especially when we take into account the fact that we know much less about other people’s responsibility than about our own. So I want to talk, instead, about when it is intrinsically permissible to look down on people—when it would be permissible if we were in a position to throw the first mental stone. Let’s go back to the person who has devoted her life to the pursuit of money above all else. Suppose that it turns out that she suffered from a serious intellectual disability that rendered her incapable of grasping values. But her parents, with enormous but misguided rehabilitative effort, have managed to instill in her the grasp of one value: that of money. Given this backstory, it’s clear that looking down on her for pursuing money above all else is not relevantly different from looking down on her for having a disability. On the other hand, it still doesn’t seem wrong to look down on a person of normal intellectual capacities in normal circumstances who has devoted her life to the pursuit of money through making greedy choice after greedy choice. This suggests a plausible principle: 1. It is only permissible to look down on someone if one is looking down on her for morally wrong choices she is responsible for and conditions that are caused by these choices in a relevant way. If so, then it is wrong to look down on people for reasoning badly, unless this bad reasoning is a function of morally wrong choices they are responsible for. This has some interesting implications. It sure seems typically intrinsically permissible to look down on someone who reasons badly because she is trying to avoid finding out that she’s wrong about something. If this is right, then typically trying to avoid finding out that one is wrong is itself morally wrong. This suggests that typically: 1. We typically have a moral duty (an imperfect one, to be sure) to strive to avoid error. Moreover, I think it is implausible to think that this moral duty holds simply in virtue of the practical consequences of error. Suppose that Sally has an esoteric astronomical theory that she isn’t going to share with anybody but you and you tell her that the latest issue of Nature has an article refuting the theory. Sally, however, refuses to look at the data. This seems like the kind of avoidance of finding out that one is wrong that it seems intrinsically permissible to look down on, even though it has no negative practical consequences for Sally or anybody else. Thus: 1. We typically have a moral duty (an imperfect one) to strive for its own sake to avoid error. But the intrinsic bad in being wrong is primarily to oneself (there might be some derivative bad to the community, but this does not seem strong enough to ground the duty in question). Hence: 1. We have duties to self. Thus, the principle (1), together with some plausible considerations, leads to a controversial conclusion about the morals of the intellectual life, namely (3), and to the controversial conclusion that we have duties to self. Friday, November 11, 2016 Cambridge events and objects This post is inspired by John Giannini's dissertation. 
Tuesday, November 8, 2016 A Traveling Forms Interpretation of Quantum Mechanics Paper is here. Abstract: The Traveling Minds interpretation of Quantum Mechanics is a no-collapse interpretation on which the wavefunction evolves deterministically like in the Everett branching multiple-worlds interpretation. As in the Many Minds interpretation, minds navigate the Everett branching structure following the probabilities given by the Born rule. However, in the Traveling Minds interpretation (a variant by Squires and Barrett of the single-mind interpretation), the minds are guaranteed to all travel together--they are always found in the same branch. The Traveling Forms interpretation extends the Traveling Minds interpretation in an Aristotelian way by having forms of non-minded macroscopic entities that have forms, such as plants, lower animals, bacteria and planets, travel along the branching structure together with the minds. As a result, while there is deterministic wavefunction-based physics in the branches without minds, non-reducible higher-level structures like life are found only in the branch with minds. Ontological grounding nihilism Some people are attracted to nihilism about proper parthood: no entity has proper parts. I used to be rather attracted to that myself, but I am now finding that a different thesis fits better with my intuitions: no entity is (fully) grounded. Or to put it positively: only fundamental entities exist. This has some of the same consequences that nihilism about proper parthood would. For instance, on nihilism about proper parthood, there are no artifacts, since if there were any, they'd have proper parts. But on nihilism about ontological grounding, we can also argue that there are no artifacts, since the existence of an artifact would be grounded in social and physical facts. Moreover, nihilism about ontological grounding implies nihilism about mereological sum: for the existence of a mereological sum would be grounded in the existence of its proper parts. However, nihilism about ontological grounding is compatible with some things having parts--but they have to be things that go beyond their parts, things whose existence is not grounded in the existence and relations of their parts. Monday, November 7, 2016 The direction of fit for belief It’s non-instrumentally good for me to believe truly and it’s non-instrumentally bad for me to believe falsely. Does that give you non-instrumental reason to make p true? Saying “Yes” is counterintuitive. And it destroys the direction-of-fit asymmetry between beliefs and desires. But it’s hard to say “No”, given that surely if something is non-instrumentally good for me, you thereby have have non-instrumental reason to provide it. Here is a potential solution. We sometimes have desires that we do not want other people to take into account in their decision-making. For instance, a parent might want a child to become a mathematician, but would nonetheless be committed to having the child to decide on their life-direction independently of the parent’s desires. In such a case, the parent’s desire that the child become a mathematician might provide the child with a first-order reason to become a mathematician, but this reason might be largely or completely excluded by the parent’s higher-order commitment. And we can explain why it is good to have such an exclusion: if a parent couldn’t have such an exclusion, she’d either have to exercise great self-control over her desires or would have to have hide them from their children. 
Perhaps we similarly have a blanket higher-order reason that excludes promoting p on the grounds that someone believes p. And we can explain why it is good to have such an exclusion, in order to decrease the degree of conflict of interest between epistemic and pragmatic reasons. For instance, without such an exclusion, I’d have pragmatic reason to avoid pessimistic conclusions because as soon as we came to them, we and others would have reason to make the conclusions true. By suggesting that exclusionary reasons are more common than I previously thought, this weakens some of my omnirationality arguments. Friday, November 4, 2016 My new toy I've acquired a 3D printer (a used DaVinci 1.0a, hacked by the previous user to have custom firmware), and I've been having fun with it. I still don't have the ideal printing parameters figured out (ABS works OK, but PLA is dodgy, because the printer wasn't designed for it--I had to add a heat sink to help it), but I'm learning how to design 3D solid objects. Wednesday, November 2, 2016 Cessation of existence and theories of persistence Suppose I could get into a time machine and instantly travel forward by a hundred years. Then over the next hundred (external) years I don’t exist. But this non-existence is not intrinsically a harm to me (it might be accidentally a harm if over these ten years I miss out on things). So a temporary cessation of existence is not an intrinsic harm to me. On the other hand, a permanent cessation of existence surely is an intrinsic harm to me. These observations have interesting connections with theories of persistence and time. First, observe that whether a cessation of existence is bad for me depends on whether I will come back into existence. This fits neatly with four-dimensionalism and less neatly with three-dimensionalism. If I am a four-dimensional entity, it makes perfect sense that as such I would have an overall well-being, and that this overall well-being should depend on the overall shape and size of my four-dimensional life, including my future life. Hence it makes sense that whether I undergo a permanent or impermanent cessation of existence makes a serious difference to me. But suppose I am three-dimensional and consider these two scenarios: 1. In 2017 I will permanently cease to exist. 2. In 2017 I will temporarily cease to exist and come back into existence in 2117. I am surely worse off in (1). But if I am three-dimensional, then to be worse off, I need to be worse off as a three-dimensional being, at some time or other. Prior to 2117, I’m on par as a three-dimensional being in the two scenarios. So if there is to be a difference in well-being, it must have something to do with my state after 2117. But it seems false that, say, in 2118, I am worse off in (1) than in (2). For how can I be better or worse off when I don’t exist? The three-dimensionalist’s best move, I think, is to say that I am actually worse off prior to 2017 in scenario (1) than in scenario (2). For, prior to 2017, it is true in scenario (1) that I will permanently cease to exist while in (2) it is false that I will do so. It can indeed happen that one is worse off at time t1 in virtue of how things will be at a later time t2. Perhaps the athlete who attains a world-record that won’t be beaten for ten years is worse off at the time of the record than the athlete who attains a world-record that won’t be beaten for a hundred years. 
Perhaps I am worse off when publishing a book that will be ignored than when publishing a book that will be taken seriously. But these are differences in external well-being, like the kind of well-being we have in virtue of our friends doing badly or well. And it is counterintuitive that permanent cessation of existence is only a harm to one’s external well-being. (The same problem afflicts Thomas Nagel’s theory that the badness of death has to do with unfinished projects.) The problem is worst on open future views. For on open future views, prior to the cessation of existence there may be no fact of the matter of whether I will come back into existence, and hence no difference in well-being. The problem is also particularly pressing on exdurantist views on which I am a three-dimensional stage, and future stages are numerically different from me. For then the difference, prior to 2017, between the two scenarios is a difference about what will happen to something numerically different from me. The problem is also particularly pressing on presentist and growing block views, for it is odd to say that I am better or worse off in virtue of non-existent future events. Of the three-dimensionalists, probably the best off is the eternalist endurantist. But even there the assimilation of the difference between (1) and (2) to external well-being is problematic. Tuesday, November 1, 2016 I was doing logic problems on the board in class and thinking about rock climbing, and I was struck by the joy of knowing one's made progress on a finite task. You can be pretty confident that if you've got an existential premise and you've set up an existential elimination subproof then you've made progress. You can be pretty confident that if you've got to a certain position on the wall and there is no other way to be at that height then you've made progress. And there is a delight in being really confident that one has made progress. Moreover, the value of the progress doesn't seem here to be merely instrumental. Even if in the end you fail, still having made progress feels valuable in and of itself. One can try to say that what's valuable is the practice one gets, or what the progress indicates about one's skills, but that doesn't seem right. It seems that the progress itself is valuable. Of course, it has to be genuine progress, not mere going down a blind alley (though recognizing a blind alley, in a scenario where there are only finitely many options, is itself progress). The value of progress (as such) at a task derives from the value of fulfilling the task, much as the value of striving at a task derives from the value of fulfilling it. But in both cases this is not a case of end-to-means value transfer. Maybe this has something to do with the idea developed by Robert M. Adams of standing for a good. Striving and a fortiori progress are ways of standing and moving in favor of a task. And that's worthwhile even if one does not accomplish the task. Monday, October 31, 2016 Realism and anti-reductionism If there is such a broad range of fundamental ontologies that "There are four chairs in my office" is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that "There are four chairs in this Minecraft house" is true. 
It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation. In order for the sceptical question to make sense, we need the possibility of saying things that cannot simply be made true by a very wide variety of physical theories, since such things will also be made true by computer simulations. This gives us an interesting anti-reductionist argument. If the statement "I have two hands" is to be understood reductively (and I include non-Aristotelian functionalist views as reductive), then it could still be literally true in the brain-in-a-vat scenario. But if anti-reductionism about hands is true, then the statement wouldn't be true in the brain-in-a-vat scenario. And so I can deny that I am in that scenario simply by saying "I have two hands." Friday, October 28, 2016 Accretion, excretion and four-dimensionalism Suppose we are four-dimensional. Parthood simpliciter then is an eternal relation between, typically, four-dimensional entities. My heart is a four-dimensional object that is eternally a part of me, who am another four-dimensional object. But there is surely also such a thing as having a part at a time t. Thus, in utero my umbilical cord was a part of me, but it no longer is. What does it mean to have a part at a time? Here is the simplest thing to say: 1. x is a part of y at t if and only if x is a part of y and both x and y exist at t. But (1) then has a very interesting metaphysical consequence that only a few Aristotelian philosophers endorse: parts cannot survive being accreted by or excreted from the whole. For if, say, my finger survived its removal from the whole (and not just because I became a scattered object), there would be a time at which my finger would exist but wouldn’t be a part of me. And that violates (1) together with the eternality of parthood simpliciter. This may seem to be a reductio of (1). But if we reject (1), what do we put in its place, assuming four-dimensionalism? I suspect we will have to posit a second relation of parthood, parthood-at-a-time, which is not reducible to parthood simpliciter. And that seems to be unduly complex. So I propose that the four-dimensionalist embrace (1) and conclude to the thesis that parts cannot survive their accretion or excretion. Dualist survivalism According to dualist survivalism, at death our bodies perish but we continue to exist with nothing but a soul (until, Christians believe, the resurrection of the dead, when we regain our bodies). A lot of the arguments against dualist survivalism focus on how we could exist as mere souls. First, such existence seems to violate weak supplementation: my souls is proper part of me, but if the body perished, my soul would be my only part—and yet it would still be a proper part (since identity is necessary). Second, it seems to be an essential property of animals that they are embodied, an essential property of humans that they are animals, and an essential property of us that we are humans. There are answers to these kinds of worries in the literature, but I want to note that things become much simpler for the dualist survivalist if she accepts a four-dimensionalism that says that we are four-dimensional beings (this won't be endurantist, but it might not be perdurantist either). First, there will be a time t after my death (and before the resurrection) such that the only proper part of mine that is located at t is my soul. 
However, the soul won’t be my only part. My arms, legs and brain are eternally my parts. It’s just that they aren’t located at t, as the only proper part of me that is located at t is my soul. There is no violation of weak supplementation. (We still get a violation of weak supplementation for the derived relation of parthood-at-t, where x is a part-at-t of y provided that x is a part of y and both x and y exist at t. But why think there is weak supplementation for parthood-at-t? We certainly wouldn’t expect weak supplementation to hold for parthood-at-z, where z is a spatial location and x is a part-at-z of y provided that x is a part of y and both x and y are located at z.) Second, it need not follow from its being an essential property of animals that they are embodied that they have bodies at every time at which they exist. Compare: It may be an essential property of a cell that it is nucleated. But the cell is bigger spatially than the nucleus, so it had better not follow that the nucleus exists at every spatial location at which the cell does. So why think that the body needs to exist at every temporal location at which the animal does? Why can’t the animal be bigger temporally than its body? Of course, those given to three-dimensional thinking will say that I am missing crucial differences between space and time. Thursday, October 27, 2016 Three strengths of desire Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I’d like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something’s gone wrong. That’s too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don’t know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don’t know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don’t know. Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don’t know than that I know, and I can’t say that the things that I do know are ones that I desire so much more strongly to know than the ones I don’t know so as to balance them out. But I don’t think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn’t be any better off. Thinking about this suggests there are three different strengths in a desire: 1. Sp: preferential strength, determined by which things one is inclined to choose over which. 2. Sh: happiness strength, determined by how happy having the desire fulfilled makes one. 3. Sm: misery strength, determined by how miserable having the desire unfulfilled makes one. It is natural to hypothesize that (a) the contribution to well-being is Sh when the desire is fulfilled and −Sm when it is unfulfilled, and (b) in a rational agent, Sp = Sh + Sm. 
As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent whether to avoid the pain or learn whether the Goldbach Conjecture is true. But they are differently divided: in the pain case Sm >> Sh and in the Goldbach case Sm << Sh. There might be some desires where Sm = 0. In those cases we think “It would be nice…” For instance, I might have a desire that some celebrity be my friend. Here, Sm = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely Sm >> 0: I would pine if the friendship weren’t there. (We might think a hedonist has a story about all this: Sh measures how pleasant it is to have the desire fulfilled and Sm measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, Sm >> 0, but there is no pain in having the desire unfulfilled, since when it’s unfulfilled I don’t know about it.) Wednesday, October 26, 2016 "Should know" I’ve been thinking about the phrase “x should know that s”. (There is probably a literature on this, but blogging just wouldn’t be as much fun if one had to look up the literature!) We use this phrase—or its disjunctive variant “x knows or should know that s”—very readily, without its calling for much evidence about x. • “As an engineer Alice should know that more redundancy was needed in this design.” • “Bob knows or should know that his behavior is unprofessional for a librarian.” • “Carl should have known that genocide is wrong.” Here’s a sense of “x should know that s”: x has some relevant role R and it is normal for those in R to know that s under the relevant circumstances. In that sense, to say that x should know that s we don’t need to know anything specific about x’s history or mental state, other than that x has role R. Rather, we need to know about R: it is normal engineering practice to build in sufficient redundancy; librarians have an unwritten code of professional behavior; human beings normally have a moral law written in their hearts. This role-based sense of “should know” is enough to justify treating x as a poor exemplar of the role R when x does not in fact know that s. When R is a contingent role, like engineer or librarian, it could be a sufficient for drumming x out of R. But we sometimes seem use a “should know” claim to underwrite moral blame. And the normative story I just gave about “should know” isn’t strong enough for that. Alice might have had a really poor education as an engineer, and couldn’t have known better. If the education was sufficiently poor, we might kick her out of the profession, but we shouldn’t blame her morally. Carl, of course, is a case apart. Carl’s ignorance makes him a defective human being, not just a defective engineer or librarian. Still a defective human being is not the same as a morally blameworthy human being. 
And in Carl’s case we can’t drum him out of the relevant role without being able to levy moral blame on him, as drumming him out of humanity is, presumably, capital punishment. However, we can lock him up for the protection of society. On the other hand, we could take “x should know that s” as saying something about x’s state, like that it is x’s own fault if x doesn’t know. But in that case, I think people often use the phrase without sufficient justification. Yes, it’s normal to know that genocide is wrong. But we live in a fallen world where people can fall very far short of what is normal through no fault of their own, by virtue of physical and mental disease, the intellectual influence of others, and so on. I worry that in common use the phrase “x should know that s” has two rationally incompatible features: • Our evidence only fits with the role-based normative reading. • The conclusions only fit with the personal fault reading. Monday, October 24, 2016 Two senses of "decide" 1. Alice sacrifices her life to protect her innocent comrades. 2. Bob decides that if he ever has the opportunity to sacrifice his life to protect his innocent comrades, he’ll do it. We praise Alice. But as for Bob, while we commend his moral judgment, we think that he is not yet in the crucible of character. Bob’s resolve has not yet been tested. And it’s not just that it hasn’t been tested. Alice’s decision not only reveals but also constitutes her as a courageous individual. Bob’s decision falls short both in the revealing but also in the constituting department (it’s not his fault, of course, that the opportunity hasn’t come up). Now compare Alice and Bob to Carl: 1. Carl knows that tomorrow he’ll have the opportunity to sacrifice his life to protect his innocent comrades, and he decides he will make the sacrifice. Carl is more like Bob than like Alice. It’s true that Carl’s decision is unconditional while Bob’s is conditional. But even though Carl’s decision is unconditional, it’s not final. Carl knows (at least on the most obvious way of spelling out the story) that he will have another opportunity to decide come tomorrow, just as Bob will still have to make a final decision once the opportunity comes up. I am not sure how much Bob and Carl actually count as deciding. They are figuring out what would or will (respectively) be the thing to do. They are making a prediction (hypothetical or future-oriented) about their action. They may even be trying by an act of will to form their character so as to determine that they would or will make the sacrifice. But if they know how human beings function, they know that their attempt is very unlikely to be successful: they would or will still have a real choice to make. And in the end it probably wouldn’t surprise us too much if, put to the test, Bob and Carl failed to make the sacrifice. Alice did something decisive. Bob and Carl have yet to do so. There is an important sense in which only Alice decided to sacrifice her life. The above were cases of laudable action. But what about the negative side? We could suppose that David steals from his employer; Erin decides that she will steal if she has the opportunity; and Frank knows he’ll have the opportunity to steal and decides he’ll take it. I think we’ll blame Erin and Frank much more than we’ll praise Bob and Carl (this is an empirical prediction—feel free to test it). But I think that’s wrong. Erin and Frank haven’t yet gone into the relevant crucible of character, just as Bob and Carl haven’t. 
Bob and Carl may be praiseworthy for their present state; Erin and Frank may be blameworthy for theirs. But the praise and the blame shouldn’t go quite as far as in the case of Alice and David, respectively. (Of course, any one of the six people might for some other reason, say ignorance, fail to be blameworthy or praiseworthy.)

This is closely connected to my previous post.

Thursday, October 20, 2016

Two senses of "intend"?

Consider these sentences:

1. Intending to kill the wolverine, Alice pulled the trigger.

2. Intending to get to the mall, Bob started his car.

If Alice pulls the trigger intending to kill the wolverine and the wolverine survives, then necessarily Alice’s action is a failure. But suppose that Bob intends to get to the mall, starts his car, changes his mind, and drives off for a hike in the woods. None of the actions described is a failure. He just changed his mind.

If, nanoseconds after the bullet left the muzzle, Alice changed her mind, and it so happens the wolverine survived, it is still true that Alice’s action failed. Given her intention, she tried to kill the wolverine, and failed. In the change of mind case, Bob, however, didn’t try to get to the mall. Rather, he tried to start to get to the mall, and he also started trying to get to the mall. His trying to start was successful—he did start to get to the mall. But it makes no sense to attribute either success or failure to a mere start of trying.

There seems to be a moral difference, too. Suppose that killing the wolverine and getting to the mall are both wrong (maybe the wolverine is no danger to Alice, and Bob has promised his girlfriend not to go to this mall). Then Alice gets the opprobrium of being an attempted wolverine killer by virtue of (1), while Bob isn’t yet an attempted mall visitor by virtue of (2)—only when he strives to propel his body through the door does he become an attempted mall visitor. Even if killing the wolverine and getting to the mall are equally wrong, Bob has done something less bad—for the action he took in virtue of (2) was open to the possibility of changing his mind, as bringing it to completion would require further voluntary decisions. What Bob did was still wicked, but less so than what Alice did. Action (1) commits Alice to killing the wolverine: if the wolverine fails to die, Alice is still an attempted wolverine killer. But Bob has undertaken no commitment to visiting the mall by starting the car.

This suggests to me that perhaps “intends” may be used in different senses in (1) and (2). In (1), it may be an “intends” that commits Alice to wolverine killing; in (2), it may be an “intends” that only commits Bob to starting trying to visit the mall. In (1), we have an intending that p that constitutes an action as a trying to bring it about that p.
From this link, I've read that "An increased interatomic spacing decreases the potential seen by the electrons in the material, which in turn reduces the size of the energy bandgap." Can this statement be explained more clearly?

Let us take it one step at a time: when the temperature increases, the vibration energy of the atoms increases, causing the distance between them to increase. I hope that is clear. Now we know from solid state physics that electrons exist in bands rather than in the discrete levels of single atoms. The electrons in the valence band are the outermost electrons, which are responsible for chemical bonding and for conduction of heat/current. All the characteristics of electronic bands can be obtained by solving the Schrödinger equation for the electrons in that band. That requires extensive mathematical treatment. The electrons in the valence band "feel" or "see" an electric potential coming from the atoms/ions they are associated with and from repelling each other. Let us focus on the potential coming from the atoms/ions. Electrons, when they are moving, pass by atoms/ions periodically, so they experience what is called a periodic potential; have a look at the following figure.

Now when the interatomic distance increases, the periodic potential becomes weaker on average (the parts in which the potential is zero become larger and wider). So the potential energy of the electrons decreases. Keep in mind that the potential energy is the product of the electron charge, which is negative, and the potential, which is always less than or equal to zero, so the minimum potential energy of the electrons is where the potential is zero.

[Figure: the periodic potential seen by an electron moving past the atoms/ions]

The potential is one of many factors determining the band characteristics. It is not easy to see the direct correlation between the potential and the band gap size. The best way to understand the impact of a reduced potential on the band gap is by solving the Schrödinger equation. It is difficult to solve, but luckily people have solved it for us. The effect of increasing the interatomic spacing is shown in the next figure.

[Figure: energy bands and band gap as a function of interatomic spacing]

As you can see, when the interatomic distance increases we move toward the right in the above figure. The width of the band gap decreases when going to the right until it vanishes. If the interatomic distance becomes very large, that corresponds to dissociating the solid into separate atoms, where the atomic levels are restored. To put everything in one picture, have a look at this report. It might not be easy for you to understand the mathematical expressions if that is not your field of knowledge, but it will show you what forces are taken into account, what assumptions are made, and how the band structure changes as a function of the interatomic forces. It will also show you how tricky it is to solve the Schrödinger equation for electronic bands. I hope that was useful.

• Gotaquestion, thank you! You have a real talent for explaining things clearly and concisely! I was confused because I didn't draw the potential; everything follows logically from there. I like to think of increasing the atomic period as the electron becoming 'more free' in a sense. – User 17670 Oct 13 '13 at 14:11
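For readers who want to see the trend rather than take it on faith, here is a minimal numerical sketch (mine, not part of the original question or answer). It diagonalises the one-dimensional Schrödinger equation in a plane-wave basis for a cosine potential V(x) = V0·cos(2πx/a); the choice that the effective amplitude V0 weakens as the spacing a grows is an illustrative stand-in for the weaker average potential discussed above, and all the numbers are arbitrary units.

```python
import numpy as np

def zone_edge_gap(a, V0, n_pw=21):
    """First band gap at k = pi/a for V(x) = V0*cos(2*pi*x/a), in units where hbar^2/(2m) = 1."""
    G = 2 * np.pi / a                      # reciprocal lattice vector
    k = np.pi / a                          # Brillouin zone boundary
    ns = np.arange(-(n_pw // 2), n_pw // 2 + 1)
    H = np.zeros((n_pw, n_pw))
    for i, ni in enumerate(ns):
        H[i, i] = (k + ni * G) ** 2        # kinetic energy of plane wave k + n*G
        for j, nj in enumerate(ns):
            if abs(ni - nj) == 1:          # a cosine potential couples neighbouring G components
                H[i, j] = V0 / 2.0
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]                     # splitting of the two lowest bands at the zone edge

for a in (1.0, 1.25, 1.5, 2.0):
    V0 = 4.0 / a                           # assumed: larger spacing -> weaker effective potential
    print(f"spacing a = {a:4.2f}  ->  band gap ~ {zone_edge_gap(a, V0):5.2f}")
```

The printed gap shrinks as the assumed spacing grows, which is the qualitative behaviour described in the answer.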
Just when you thought you'd heard every quantum mystery that was possible, out pops another one. Jeff Tollaksen mentioned it in passing during his talk at the recent Foundation Questions Institute conference. Probably Tollaksen assumed we'd all heard it before. After all, his graduate advisor, Yakir Aharonov—who has made an illustrious career of poking the Schrödinger equation to see what wild beasts come scurrying out—first discovered it in the 1990s and discussed it in chapter 17 of his 2005 book, Quantum Paradoxes. But it was new to me. The situation is an elaboration of Schrödinger's thought experiment. You have a cat. It is either purring or meowing. It is curled up in one of two boxes. As in Schrödinger's scenario, you couple the cat to some quantum system, like a radioactive atom, to make its condition ambiguous—a superposition of all possibilities—until you examine one of the boxes. If you reach into box 2, you feel the cat. If you listen to the boxes, you hear purring. But when you listen more closely, you notice that the purring is coming from box 1. The cat is in one box, the purring in the other. Like a Cheshire Cat, the animal has become separated from the properties that constitute a cat. What a cat does and what a cat is no longer coincide. In practice, you'd pull this stunt on an electron rather than a cat. You'd find the electron in one box, its spin in the other. Even by the standards of quantum mechanics, this is surprising. It requires what quantum physicists call "weak measurement," whereby you interact with a system so gently that you avoid collapsing it from a quantum state to a classical one. On the face of it, such an interaction scarcely qualifies as a measurement; any results get lost in the noise of Heisenberg's Uncertainty Principle. What Aharonov realized is that, if you sift through the results, you can find patterns buried within them. In practice, this means repeating the experiment on a large number of electrons (or cats) and then applying a filter or “postselection.” Only a few particles will pass through this filter, and among them, the result of the softly softly measurement will stand out. Because you avoid collapsing the quantum state, quintessentially quantum phenomena such as wave interference still occur. So, for a Cheshire Cat, you apply the following filter: you change the sign of one term in the superposition, causing the location and spin of the electron to interfere constructively in one box and destructively in the other, zeroing out the probability of finding the electron in box 1 and zeroing out the net spin of the electron in box 2. Voilà, the electron is in box 2 and its spin in box 1. If this leaves your head spinning, it should. The word “weak” describes not only the measurement but also my intuitive grasp for what's really going on. The best I can do is recommend the article on weak measurement by Aharonov, Tollaksen, and Sandu Popescu in last November's Physics Today, but be prepared to read it several times before you have the slightest idea of what they're saying. I've commissioned an article about Aharonov's work for an upcoming issue of Scientific American to collapse some of the uncertainty. In the meantime, try sitting in a different room from where your confusion is.
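For anyone who wants to see the numbers behind the Cheshire Cat, here is a small numerical sketch of the weak values involved. The specific pre- and post-selected states below are my own illustrative choice, in the spirit of the later Aharonov and Popescu construction, not necessarily the states used in the book chapter mentioned above; the point is only that the "which box" weak value lands entirely in box 2 while the "spin in a box" weak value lands entirely in box 1.

```python
import numpy as np

def weak_value(pre, post, op):
    """Weak value <post|op|pre> / <post|pre>."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

# Basis: |box> (x) |polarisation>; box in {1, 2}, polarisation in {H, V}.
box1, box2 = np.eye(2)
Hpol, Vpol = np.eye(2)
I2 = np.eye(2)
spin = np.array([[0, -1j], [1j, 0]])            # stands in for the separated "spin" property

pre  = (np.kron(box1, Hpol) + 1j * np.kron(box2, Hpol)) / np.sqrt(2)   # pre-selected state
post = (np.kron(box1, Vpol) + np.kron(box2, Hpol)) / np.sqrt(2)        # post-selection filter

ops = {
    "particle in box 1": np.kron(np.outer(box1, box1), I2),
    "particle in box 2": np.kron(np.outer(box2, box2), I2),
    "spin in box 1":     np.kron(np.outer(box1, box1), spin),
    "spin in box 2":     np.kron(np.outer(box2, box2), spin),
}
for name, op in ops.items():
    print(f"{name}: weak value = {complex(weak_value(pre, post, op)):.2f}")
```

Running it gives weak value 1 for "particle in box 2" and 0 for "particle in box 1", while the spin weak values come out the other way around: the property has been separated from its bearer, in the weak-measurement sense described above.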
UNM Physics 330: Modern Physics
MWF 11:30-12:20, P&A room 184

Instructor: Prof. Keith Lidke, Room 1164, Physics and Astronomy; e-mail: klidke@unm.edu; phone: (505) 277-0302
Office Hours: MWF 10:30-11:30, PandA 1164, by appointment, or any time you can catch me around PandA.
Teaching Assistant: Prabhakar Palni
Modern Physics by Tipler and Llewellyn, 5th Edition
Online Resources

Course Contents

Quantum Particles and Quantum Mechanics: Review of relativistic energy and momentum, de Broglie wavelength, probability amplitude, elements of classical probability, diffraction of matter, uncertainty principle, Schrödinger equation: probability amplitudes, expectation values. Free particle wave equation: superposition, wave packets, uncertainty principle. Energy eigenstates: particle in a box, harmonic oscillator, tunneling.

Atomic Physics: Quantization of angular momentum, energy eigenstates, spin, fine structure of Hydrogen, atomic transitions, Zeeman effect, Lamb shift, Pauli exclusion principle, multielectron atoms and the periodic table, the hydrogen molecule.

Classical and Quantum Statistics: Ideal gas law, Maxwell-Boltzmann distribution, density of states, Fermi-Dirac distribution, Bose-Einstein distribution, black body radiation, radiation pressure, Bose-Einstein condensation.

Conductors, Insulators, Semi-conductors: Electronic energy bands, Fermi energy, heat capacity, Ohm's law, semiconductors, diode junction.

Photon-Atom interactions, stimulated emission, amplification of radiation, examples of modern lasers.

Nuclear Physics: Scattering and the cross section, the nuclear force: range, charge-independence, nuclear structure, radioactive decays and nuclear interactions, Mössbauer effect, interaction of radiation and bulk matter.

Elementary Particles: Particle accelerators, particle detectors, particles and antiparticles, quarks, QCD, weak interactions, Standard Model of Particle Physics, current topics in high energy physics.

Astroparticle Physics and Cosmology: What is General Relativity?, the Big Bang, particle physics and the early universe, dark matter, dark energy.

There will be four exams: three exams throughout the semester and a final exam. Each exam will contribute 25% to the final grade, with the lowest exam score dropped. The final exam will be a comprehensive exam. No make-up exams will be offered. The remaining 25% of the grade will be written homework assignments. Homework will be assigned on Wednesdays and be due the following Wednesday before class. No late homework accepted.

Schedule (Week of / Topics / Assignments, etc.):
Week 1 (Jan 21): Relativistic Momentum and Energy, sections 2.1-2.2. Assignments: Homework #1; Homework #1 Solutions.
Week 2 (Jan 26): Rest Energy, Invariant Mass, Pair Production, sections 2.3-2.4. Chapter 3: J.J. Thomson and e/m, Millikan experiment, Photoelectric effect, Blackbody Radiation, Compton Scattering. Assignments: Homework #2; Homework #2 Solutions.
Week 3 (Feb. 2): Chapter 4: Rutherford scattering: cross sections and scattering rates. Bohr model, photoelectric effect, Franck-Hertz experiment. Chapter 5: Particles as waves, Uncertainty Principle. Assignments: Homework #3; Homework #3 Solutions.
Week 4 (Feb. 9): Chapter 5: De Broglie waves, wave packets, uncertainty principle, phase and group velocities. Chapter 6: Schrodinger Equation. Assignments: Homework #4; Homework #4 Solutions.
Week 5 (Feb. 16): Chapter 6: Schrodinger Equation, potential wells, harmonic oscillator, operators, reflection/transmission. Assignments: Homework #5; Homework #5 Solutions.
Week 6 (Feb. 23): Chapter 7: Infinite square well in one, two and three dimensions. Hydrogen Atom. Assignments: Practice Exam; Practice Exam Solutions.
Week 7 (Mar. 2): Exam 1; Quantization of Angular Momentum, Electron Spin, Total Angular Momentum. Assignments: Homework #6; Homework #6 Solutions; Exam 1; Exam 1 Solutions.
Week 8 (Mar. 9): Zeeman Effect, Paschen-Back Effect, (anti-)symmetric wavefunctions, Pauli Exclusion Principle, Multi-electron Atoms. Assignments: Homework #7; Homework #7 Solutions.
Week 9 (Mar. 16): Spring Break.
Week 10 (Mar. 23): Boltzmann, Bose, and Fermi Distributions. Density of States, Heat Capacities. Assignments: Homework #8; Homework #8 Solutions.
Week 11 (Mar. 30): Bose-Einstein condensation, Molecular bonds and energy levels. Assignments: Homework #9; Homework #9 Solutions.
Week 12 (April 6): Lasers, Fluorescence, Raman Scattering; Exam 2: Friday, April 10. Assignments: Exam 2 Solutions.
Week 13 (April 13): Classical and quantum theory of conduction, band theory, pn-junctions. Assignments: Homework #10; Homework #10 Solutions.
Week 14 (April 20): Transistors. Nuclear size and structure, radioactive decay. Assignments: Homework #11; Homework #11 Solutions.
Week 15 (April 27): Strong Force, Fission, Fusion, Shell Structure, Radiation and matter. Assignments: Homework #12; Homework #12 Solutions.
Week 16 (May 4): Exam 3: Monday, May 4. Neutrino oscillations, Fundamental particles, Weak interactions. Assignments: Exam 3; Exam 3 Solutions.
Week 17 (May 11): Final Exam: Wednesday, May 13, 10:00 am, PandA rm 184. Assignments: Final Exam; Final Exam Solutions.

Final Grades
You may pick up uncollected exams and homework from me at any time.
We combine quantum mechanical simulations with machine learning and optimization algorithms to computationally design materials with desired properties for various applications Our main research pursuits are: Through the portal of computer simulations we gain access to the vast configuration space of materials structure and composition. We can explore the uncharted territories of materials that have not been synthesized yet and predict their properties from first principles, based solely on the knowledge of their elemental composition and the laws of quantum mechanics. Since the Schrödinger equation can be solved exactly only for very small systems (=the hydrogen atom), we employ approximate methods within the framework of density functional theory (DFT) and many-body perturbation theory (MBPT) to apply quantum mechanics to systems, such as molecular crystals and interfaces, with up to several hundred atoms. The computational cost of quantum mechanical simulations increases rapidly with the accuracy of the method, the size of the system, and the number of trial structures sampled, therefore we run our calculations on some of the world’s most powerful supercomputers. To navigate the configuration space and identify the most promising candidates, we use optimization algorithms. For example, genetic algorithms are guided to the most promising regions by the evolutionary principle of survival of the fittest. Machine learning (ML) uses statistical models based on “training data” to make predictions for new data points. We employ ML to accelerate predictions for materials properties and unveil hidden correlations in data generated by our simulations. We apply several types of ML algorithms for different purposes, such as optimization, classification, clustering, feature selection, sampling, and finding structure-property correlations. ML algorithms are integrated with quantum mechanical simulations in fully automated complex workflows.
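As a toy illustration of the "survival of the fittest" search described above (entirely schematic, and not this group's actual code or workflow), the sketch below runs a tiny genetic algorithm over a made-up five-component "composition" vector. The fitness function, population size and mutation scale are all invented for the example; in practice the fitness would come from an expensive quantum mechanical calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # stand-in for an expensive DFT evaluation of a target property
    return -np.sum((x - 0.3) ** 2)

def evolve(pop_size=20, n_genes=5, n_generations=30):
    pop = rng.random((pop_size, n_genes))                      # random initial "compositions"
    for _ in range(n_generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_genes) < 0.5, a, b)  # crossover
            child = np.clip(child + rng.normal(0, 0.05, n_genes), 0, 1)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = evolve()
print("best candidate:", np.round(best, 3), " fitness:", round(score, 4))
```

Replacing the toy fitness function with a call to a first-principles code (and adding the bookkeeping that real workflows need) is, in essence, what the automated workflows mentioned above orchestrate.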
Reactivity (chemistry) In chemistry, reactivity is the impetus for which a chemical substance undergoes a chemical reaction, either by itself or with other materials, with an overall release of energy. Reactivity refers to: • the chemical reactions of a single substance, • the chemical reactions of two or more substances that interact with each other, • the systematic study of sets of reactions of these two kinds, • methodology that applies to the study of reactivity of chemicals of all kinds, • experimental methods that are used to observe these processes • theories to predict and to account for these processes. The chemical reactivity of a single substance (reactant) covers its behavior in which it: • Decomposes • Forms new substances by addition of atoms from another reactant or reactants • Interacts with two or more other reactants to form two or more products The chemical reactivity of a substance can refer to the variety of circumstances (conditions that include temperature, pressure, presence of catalysts) in which it reacts, in combination with the: • Variety of substances with which it reacts • Equilibrium point of the reaction (i.e., the extent to which all of it reacts) • Rate of the reaction The term reactivity is related to the concepts of chemical stability and chemical compatibility. An alternative point of view Reactivity is a somewhat vague concept in chemistry. It appears to embody both thermodynamic factors and kinetic factors—i.e., whether or not a substance reacts, and how fast it reacts. Both factors are actually distinct, and both commonly depend on temperature. For example, it is commonly asserted that the reactivity of group one metals (Na, K, etc.) increases down the group in the periodic table, or that hydrogen's reactivity is evidenced by its reaction with oxygen. In fact, the rate of reaction of alkali metals (as evidenced by their reaction with water for example) is a function not only of position within the group but particle size. Hydrogen does not react with oxygen—even though the equilibrium constant is very large—unless a flame initiates the radical reaction, which leads to an explosion. Restriction of the term to refer to reaction rates leads to a more consistent view. Reactivity then refers to the rate at which a chemical substance tends to undergo a chemical reaction in time. In pure compounds, reactivity is regulated by the physical properties of the sample. For instance, grinding a sample to a higher specific surface area increases its reactivity. In impure compounds, the reactivity is also affected by the inclusion of contaminants. In crystalline compounds, the crystalline form can also affect reactivity. However, in all cases, reactivity is primarily due to the sub-atomic properties of the compound. Although it is commonplace to make statements that substance 'X is reactive', all substances react with some reagents and not others. For example, in making the statement that 'sodium metal is reactive', we are alluding to the fact that sodium reacts with many common reagents (including pure oxygen, chlorine, hydrochloric acid, water) and/or that it reacts rapidly with such materials at either room temperature or using a Bunsen flame. 'Stability' should not be confused with reactivity. For example, an isolated molecule of an electronically excited state of the oxygen molecule spontaneously emits light after a statistically defined period[citation needed]. 
The half-life of such a species is another manifestation of its stability, but its reactivity can only be ascertained via its reactions with other species.

Causes of reactivity

The second meaning of 'reactivity', that of whether or not a substance reacts, can be rationalised at the atomic and molecular level using older and simpler valence bond theory and also atomic and molecular orbital theory. Thermodynamically, a chemical reaction occurs because the products (taken as a group) are at a lower free energy than the reactants; the lower energy state is referred to as the 'more stable state'. Quantum chemistry provides the most in-depth and exact understanding of the reason this occurs. Generally, electrons exist in orbitals that are the result of solving the Schrödinger equation for specific situations.

All things (values of the n and ml quantum numbers) being equal, the order of stability of electrons in a system from least to greatest is: unpaired with no other electrons in similar orbitals, unpaired with all degenerate orbitals half filled, and, most stable, a filled set of orbitals. To achieve one of these orders of stability, an atom reacts with another atom to stabilize both. For example, a lone hydrogen atom has a single electron in its 1s orbital. It becomes significantly more stable (as much as 100 kilocalories per mole, or 420 kilojoules per mole) when reacting to form H₂. It is for this same reason that carbon almost always forms four bonds. Its ground state valence configuration is 2s² 2p², half filled. However, the activation energy to go from half filled to fully filled p orbitals is so small it is negligible, and as such carbon forms them almost instantaneously. Meanwhile, the process releases a significant amount of energy (exothermic). This four equal bond configuration is called sp³ hybridization.

The above three paragraphs rationalise, albeit very generally, the reactions of some common species, particularly atoms. One approach to generalise the above is the activation strain model[1][2][3] of chemical reactivity, which provides a causal relationship between the reactants' rigidity and electronic structure, and the height of the reaction barrier.

The rate of any given reaction is governed by the rate law

rate = k[A],

where the rate is the change in the molar concentration per second in the rate-determining step of the reaction (the slowest step), [A] is the product of the molar concentrations of all the reactants raised to the correct order, known as the reaction order, and k is the reaction constant, which is constant for one given set of circumstances (generally temperature and pressure) and independent of concentration. The greater the reactivity of a compound, the higher the value of k and the higher the rate. For instance, if

rate = k[A]^n[B]^m,

then n is the reaction order of A, m is the reaction order of B, n + m is the reaction order of the full reaction, and k is the reaction constant.

1. ^ Wolters, L. P.; Bickelhaupt, F. M. (2015-07-01). "The activation strain model and molecular orbital theory". Wiley Interdisciplinary Reviews: Computational Molecular Science. 5 (4): 324–343. doi:10.1002/wcms.1221. ISSN 1759-0884. PMC 4696410. PMID 26753009.
2. ^ Bickelhaupt, F. M. (1999-01-15). "Understanding reactivity with Kohn–Sham molecular orbital theory: E2–SN2 mechanistic spectrum and other concepts". Journal of Computational Chemistry. 20 (1): 114–128. doi:10.1002/(sici)1096-987x(19990115)20:1<114::aid-jcc12>;2-l. ISSN 1096-987X.
3. ^ Ess, D. H.; Houk, K. N. (2007-08-09).
"Distortion/Interaction Energy Control of 1,3-Dipolar Cycloaddition Reactivity". Journal of the American Chemical Society. 129 (35): 10646–10647. doi:10.1021/ja0734086. PMID 17685614.
Correcting the U(1) error in the Standard Model of particle physics

[Figure: fundamental particles in the SU(2)xU(1) part of the Standard Model]

Above: the Standard Model particles in the existing SU(2)xU(1) electroweak symmetry group (a high-quality PDF version of this table can be found here). The complexity of chiral symmetry – the fact that only particles with left-handed spins (Weyl spinors) experience the weak force – is shown by the different effective weak charges for left- and right-handed particles of the same type. My argument, with evidence to back it up in this post and previous posts, is that there are no real ‘singlets’: all the particles are doublets apart from the gauge bosons (W/Z particles), which are triplets. This causes a major change to the SU(2)xU(1) electroweak symmetry. Essentially, the U(1) group, which is a source of singlets (i.e., particles shown in blue type in this table which may have weak hypercharge but have no weak isotopic charge), is removed! An SU(2) symmetry group then becomes a source of electric charge and weak hypercharge, as well as keeping its existing role in the Standard Model as a descriptor of the isotopic spin. It modifies the role of the ‘Higgs bosons’: some such particles are still required to give mass, but the mainstream electroweak symmetry breaking mechanism is incorrect.

There are 6 rather than 4 electroweak gauge bosons: the same 3 massive weak bosons as before, but 2 new charged massless gauge bosons in addition to the uncharged massless ‘photon’, B. The 3 massless gauge bosons are all massless counterparts to the 3 massive weak gauge bosons. The ‘photon’ is not the gauge boson of electromagnetism because, being neutral, it can’t represent a charged field. Instead, the ‘photon’ gauge boson is the graviton, while the two charged massless gauge bosons are the charged exchange radiation (gauge bosons) of electromagnetism. This allows quantitative predictions and the resolution of existing electromagnetic anomalies (which are usually just censored out of discussions).

It is the U(1) group which falsely introduces singlets. All Standard Model fermions are really doublets: if they are bound by the weak force (i.e., left-handed Weyl spinors), then they are doublets in close proximity. If they are right-handed Weyl spinors, they are doublets mediated only by the strong, electromagnetic and gravitational forces, so for leptons (which don’t feel the strong force) the individual particles in a doublet can be located relatively far from one another (the electromagnetic and gravitational interactions are both long-range forces). The beauty of this change to the understanding of the Standard Model is that gravitation automatically pops out in the form of massless neutral gauge bosons, while electromagnetism is mediated by two massless charged gauge bosons, which gives a causal mechanism that predicts the quantitative coupling constants for gravity and electromagnetism correctly. Various other vital predictions are also made by this correction to the Standard Model.

[Figure: fundamental vector boson charges of SU(2)]

Above: the fundamental vector boson charges of SU(2). For any particle which has effective mass, there is a black hole event horizon radius of 2GM/c^2. If there is a strong enough electric field at this radius for pair production to occur (in excess of Schwinger’s threshold of 1.3 × 10^18 V/m), then pairs of virtual charges are produced near the event horizon. If the particle is positively charged, the negatively charged particles produced at the event horizon will fall into the black hole core, while the positive ones will escape as charged radiation (see Figures 2, 3 and particularly 4 below for the mechanism for propagation of massless charged vector boson exchange radiation between charges scattered around the universe). If the particle is negatively charged, it will similarly be a source of negatively charged exchange radiation (see Figure 2 for an explanation of why the charge is never depleted by absorbing radiation from nearby pair production of opposite sign to itself; there is simply an equilibrium of exchange of radiation between similar charges which cancels out that effect). In the case of a normal (large) black hole or a neutral dipole charge (one with equal and opposite charges, and therefore neutral as a whole), as many positive as negative pair-production charges can escape from the event horizon, and these will annihilate one another to produce neutral radiation, which produces the right force of gravity. Figure 4 proves that this gravity force is about 10^40 times stronger than electromagnetism. Another earlier post calculates the Hawking black hole radiation rate and proves it creates the force strength involved in electromagnetism.

(For a background to the elementary basics of quantum field theory and quantum mechanics, like the Schroedinger and Dirac equations and their consequences, see the earlier post on The Physics of Quantum Field Theory. For an introduction to symmetry principles, see the previous post.)

The SU(2) symmetry can model electromagnetism (in addition to isospin) because it models two types of charges, hence giving negative and positive charges without the wrong method U(1) uses (where it specifies there are only negative charges, so positive ones have to be represented by negative charges going backwards in time). In addition, SU(2) gives 3 massless gauge bosons: two charged ones (which mediate the charge in electric fields) and one neutral one (which is the spin-1 graviton, that causes gravity by pushing masses together). In addition, SU(2) describes doublets, matter-antimatter pairs. We know that electrons are not produced individually, only in lepton-antilepton pairs. The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long-range electromagnetism instead of a short-range force.

Quantum field theory, i.e., the standard model of particle physics, is based mainly on experimental facts, not speculation. The symmetries of baryons give SU(3) symmetry, those of mesons give SU(2) symmetry. That’s experimental particle physics.

The problem in the standard model SU(3)xSU(2)xU(1) is the last component, the U(1) electromagnetic symmetry. In SU(3) you have three charges (coded red, blue and green) and form triplets of quarks (baryons) bound by 3^2 - 1 = 8 charged gauge bosons mediating the strong force. For SU(2) you have two charges (two isospin states) and form doublets, i.e., quark-antiquark pairs (mesons) bound by 2^2 - 1 = 3 gauge bosons (one positively charged, one negatively charged and one neutral).

One problem comes when electromagnetism is represented by U(1) and added to SU(2) to form the electroweak unification, SU(2)xU(1). This means that you have to add a Higgs field which breaks the SU(2)xU(1) symmetry at low energy, by giving masses (at low energy only) to the 3 gauge bosons of SU(2).
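The boson counting used above (2^2 - 1 = 3 for SU(2), 3^2 - 1 = 8 for SU(3)) just reflects the number of independent traceless Hermitian N x N matrices, which are the generators of SU(N). A minimal check of that counting, for illustration only:

```python
import numpy as np

def su_n_generators(N):
    """Build a basis of traceless Hermitian N x N matrices (the generators of SU(N))."""
    gens = []
    for i in range(N):                     # off-diagonal symmetric / antisymmetric pairs
        for j in range(i + 1, N):
            sym = np.zeros((N, N), dtype=complex); sym[i, j] = sym[j, i] = 1
            asym = np.zeros((N, N), dtype=complex); asym[i, j] = -1j; asym[j, i] = 1j
            gens += [sym, asym]
    for k in range(1, N):                  # diagonal traceless generators
        d = np.zeros((N, N), dtype=complex)
        d[:k, :k] = np.eye(k); d[k, k] = -k
        gens.append(d)
    return gens

for N in (2, 3):
    print(f"SU({N}): {len(su_n_generators(N))} generators (N^2 - 1 = {N**2 - 1})")
```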
At high energy, the masses of those 3 gauge bosons must disappear, so that they are massless, like the photon assumed to mediate the electromagnetic force represented by U(1).  The required Higgs field which adds mass in the right way for electroweak symmetry breaking to work in the Standard Model but adds complexity and isn’t very predictive. The other, related, problem is that SU(2) only acts on left-handed particles, i.e., particles whose spin is described by a left-handed Weyl spinor.  U(1) only has one electric charge, the electron.  Feynman represents positrons in the scheme as electrons going backwards in time, and this makes U(1) work, but it has many problems and a massless version of SU(2) is the correct electromagnetism-gravitational model. So the correct model for electromagnetism is really SU(2) which has two types of electric charge (positive and negative) and acts on all particles regardless of spin, and is mediated by three types of massless gauge bosons: negative ones for the fields around negative charges, positive ones for positive fields, and neutral ones for gravity. The question then is, what is the corrected Standard Model?  If we delete U(1) do we have to replace it with another SU(2) to get SU(3)xSU(2)xSU(2), or do we just get SU(3)xSU(2) in which SU(2) takes on new meaning, i.e., there is no symmetry breaking? Assume the symmetry group of the universe is SU(3)xSU(2).  That would mean that the new SU(2) interpretation has to do all the work and more of SU(2)xU(1) in the existing Standard Model.  The U(1) part of SU(2)xU(1) represented both electromagnetism and weak hypercharge, while SU(2) represented weak isospin. We need to dump the Higgs field as a source for symmetry breaking, and replace it with a simpler mass-giving mechanism that only gives mass to left-handed Weyl spinors.  This is because the electroweak symmetry breaking problem has disappeared. We have to use SU(2) to represent isospin, weak hypercharge, electromagnetism and gravity.   Can it do all that? Can the Standard Model be corrected by simply removing U(1) to leave SU(3)xSU(2) and having the SU(2) produce 3 massless gauge bosons (for electromagnetism and gravity) and 3 massive gauge bosons (for weak interactions)? Can we in other words remove the Higgs mechanism for electroweak symmetry breaking and replace it by a simpler mechanism in which the short range of the three massive weak gauge bosons distinguishes between electromagnetism (and gravity) from the weak force? The mass giving field only gives mass to gauge bosons that normally interact with left-handed particles. What is unnerving is that this compression means that one SU(2) symmetry is generating a lot more physics than in the Standard Model, but in the Standard Model U(1) represented both electric charge and weak hypercharge, so I don’t see any reason why SU(2) shouldn’t represent weak isospin, electromagnetism/gravity and weak hypercharge. The main thing is that because it generates the 3 massless gauge bosons, only half of which need to have mass added to them to act as weak gauge bosons, it has exactly the right field mediators for the forces we require. If it doesn’t work, the alternative replacement to the Standard Model is SU(3)xSU(2)xSU(2) where the first SU(2) is isospin symmetry acting on left-handed particles and the second SU(2) is electrogravity. Mathematical review Following from the discussion in previous posts, it is time to correct the errors of the Standard Model, starting with the U(1) phase or gauge invariance.  
The use of unitary group U(1) for electromagnetism and weak hypercharge is in error as shown in various ways in the previous posts here, here, and here. The maths is based on a type of continuous group defined by Sophus Lie in 1873.  Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together.  It was the representation theory of these groups that Weyl was studying. ‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane.  Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point.  This is a symmetry of the plane.  The thing that is invariant is the distance between a point on the plane and the central point.  This is the same before and after the rotation.  One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point.  There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.  Not Even Wrong Argand diagram showing rotation by an angle on the complex plane.   Illustration credit: based on Fig. 3.1 in Not Even Wrong. ‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one.  If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers).  As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1). ‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions].  Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave.  This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees.  Because of this analogy, U(1) symmetry transformations are often called phase transformations. … ‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N).  It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest.  The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denotes SU(N).  
Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.

‘In the case N = 1, SU(1) is just the trivial group with one element. The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthogonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’

Hermann Weyl and Eugene Wigner discovered that Lie groups of complex symmetries represent quantum field theory. In 1954, Chen Ning Yang and Robert Mills developed a theory of photon (spin-1 boson) mediator interactions in which the spin of the photon changes the quantum state of the matter emitting or receiving it via inducing a rotation in a Lie group symmetry. The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction. Gerard ‘t Hooft and Martinus Veltman in 1970 argued that the Yang-Mills theory is renormalizable, so the problem of running couplings having no limits can be cut off at effective limits to make the theory work (Yang-Mills theories use non-commutative algebra, usually called non-commutative geometry).

The photon Yang-Mills theory is U(1). Equivalent Yang-Mills interaction theories of the strong force SU(3) and the weak force isospin group SU(2), in conjunction with the U(1) force, result in the symmetry group SU(3) x SU(2) x U(1), which is the Standard Model. Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.

Dr Woit’s Not Even Wrong at pages 98-100 summarises the problems in the Standard Model. While SU(3) ‘has the beautiful property of having no free parameters’, the SU(2)xU(1) electroweak symmetry does introduce two free parameters: alpha and the mass of the speculative ‘Higgs boson’. However, from solid facts, alpha is not a free parameter but the shielding ratio of the bare core charge of an electron by virtual fermion pairs being polarized in the vacuum and absorbing energy from the field to create short range forces:

“This shielding factor of alpha can actually be obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order of h-bar. The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et, where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology. Now for the slightly clever bit: px = h-bar implies (remembering that p = mc and E = mc^2) that E = h-bar*c/x, and using the classical definition of energy as force times distance (E = Fx), the force is F = E/x = h-bar*c/x^2.
“So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law!  This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs.  So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of a.  The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge.  All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more. “One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance.  However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx.  For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved.  This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces. “It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper: ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.) “Experimental evidence: “In particular: As for the ‘Higgs boson’ mass that gives mass to particles, there is evidence there of its value.  
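Whatever one makes of the interpretation in the quotation above, the arithmetic of the comparison is easy to check: the ratio of h-bar*c/x^2 to the Coulomb force e^2/(4*pi*epsilon_0*x^2) is independent of x and equals the inverse fine-structure constant. A quick check with standard constants (this verifies only the numerical ratio, not the physical claim):

```python
from scipy.constants import hbar, c, e, epsilon_0, pi

coulomb = e**2 / (4 * pi * epsilon_0)   # Coulomb force between unit charges, times x^2 (J*m)
bare = hbar * c                         # the h-bar*c/x^2 expression, times x^2 (J*m)
print(bare / coulomb)                   # ~137.036, i.e. 1/alpha
```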
On page 98 of Not Even Wrong, Dr Woit points out: ‘Another related concern is that the U(1) part of the gauge theory is not asymptotically free, and as a result it may not be completely mathematically consistent.’ He adds that it is a mystery why only left-handed particles experience the SU(2) force, and on page 99 points out that: ‘the standard quantum field theory description for a Higgs field is not asymptotically free and, again, one worries about its mathematical consistency.’ Another thing is that the 9 masses of quarks and leptons have to be put into the Standard Model by hand together with 4 mixing angles to describe the interaction strength of the Higgs field with different particles, adding 13 numbers to the Standard Model which you  want to be explained and predicted. Important symmetries: 1. ‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family: this is described by unitary group U(1).  U(1) deals with just 1 type of charge: negative charge, i.e., it ignores positive charge which is treated as a negative charge travelling backwards in time, Feynman’s fatally flawed model of a positron or anti-electron, and with solitary particles (which don’t actually exist since particles always are produced and annihilated as pairs).  U(1) is therefore false when used as a model for electromagnetism, as we will explain in detail in this post.  U(1) also represents weak hypercharge, which is similar to electric charge. 2. ‘isospin rotation’ would switch the two quarks of a given family, or would switch the lepton and neutrino of a given family: this is described by symmetry unitary group SU(2).  Isospin rotation leads directly to the symmetry unitary group SU(2), i.e., rotations in imaginary space with 2 complex co-ordinates generated by 3 operations: the W+, W, and Z0 gauge bosons of the weak force.  These massive weak bosons only interact with left-handed particles (left handed Weyl spinors).  SU(2) describes doublets, matter-antimatter pairs such as mesons and (as this blog post is arguing) lepton-antilepton charge pairs in general (electric charge mechanism as well as weak isospin). 3. ‘colour rotation’ would change quarks between colour charges (red, blue, green): this is described by symmetry unitary group SU(3).  Colour rotation leads directly to the Standard Model symmetry unitary group SU(3), i.e., rotations in imaginary space with 3 complex co-ordinates generated by 8 operations, the strong force gluons.  There is also the concept of ‘flavor’ referring to the different types of quarks (up and down, strange and charm, top and bottom).  SU(3) describes triplets of charges, i.e. baryons. U(1) is a relatively simple phase-transformation symmetry which has a single group generator, leading to a single electric charge.  (Hence, you have to treat positive charge as electrons moving backwards in time to make it incorporate antimatter!  This is false because things don’t travel backwards in time; it violates causality, because we can use pair-production – e.g. electron and positron pairs created by the shielding of gamma rays from cobalt-60 using lead – to create positrons and electrons at the same time, when we choose.)  Moreover, it also only gives rise to one type of massless gauge boson, which means it is a failure to predict the strength of electromagnetism and its causal mechanism of electromagnetism (attractions between dissimilar charges, repulsions between similar charges, etc.).  
SU(2) must be used to model the causal mechanism of electromagnetism and gravity; two charged massless gauge bosons mediate electromagnetic forces, while the neutral massless gauge boson mediates gravitation.  Both the detailed mechanism for the forces and the strengths of the interactions (as well as various other predictions), arise automatically from SU(2) with massless gauge bosons replacing U(1). Fig. 1 - The imaginary U(1) interaction of a photon with an electron, which is fine for photons interacting with electrons, but doesn't adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces! Fig. 1: The imaginary U(1) gauge invariance of quantum electrodynamics (QED) simply consists of a description of the interaction of a photon with an electron (e is the coupling constant, the effective electric charge after allowing for shielding by the polarized vacuum if the interaction is at high energy, i.e., above the IR cutoff).  When the electron’s field undergoes a local phase change, a gauge field quanta called a ‘virtual photon’ is produced, which keeps the Lagrangian invariant; this is how gauge symmetry is supposed to work for U(1). This doesn’t adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!  It’s just too simplistic: the moving electron is viewed as a current, and the photon (field phase) affects that current by interacting by the electron.  There is nothing wrong with this simple scheme, but it has nothing to do with the detailed causal, predictive mechanism for electromagnetic attraction and repulsion, and to make this virtual-photon-as-gauge-boson idea work for electromagnetism, you have to add two extra polarizations to the normal two polarizations (electric and magnetic field vectors) of ordinary photons.  You might as well replace the photon by two charged massless gauge bosons, instead of adding two extra polarizations!  You have so much more to gain from using the correct physics, than adding extra epicycles to a false model to ‘make it work’. This is Feynman’s explanation in his book QED, Penguin, 1990, p120: ‘Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)’ The gauge bosons of mainstream electromagnetic model U(1) are supposed to consist of photons with 4 polarizations, not 2.  However, U(1) has only one type of electric charge: negative charge.  Positive charge is antimatter and is not included.  But in the real universe there as much positive as negative charge around! We can see this error of U(1) more clearly when considering the SU(3) strong force: the 3 in SU(3) tells us there are three types of color charges, red, blue and green.  The anti-charges are anti-red, anti-blue and anti-green, but these anti-charges are not included.  Similarly, U(1) only contains one electric charge, negative charge.  
To make it a reliable and complete theory predictive everything, it should contain 2 electric charges: positive and negative, and 3 gauge bosons: positive charged massless photons for mediating positive electric fields, negative charged massless photons for mediating negative electric fields, and neutral massless photons for mediating gravitation.  The way this correct SU(2) electrogravity unification works was clearly explained in Figures 4 and 5 of the earlier post: Basically, photons are neutral because if they were charged as well as being massless, the magnetic field generated by its motion would produce infinite self-inductance.  The photon has two charges (positive electric field and negative electric field) which each produce magnetic fields with opposite curls, cancelling one another and allowing the photon to propagate: Fig. 2 - Mechanism of gauge bosons for electromagnetism Fig. 2: charged gauge boson mechanism for electromagnetism, as illustrated by the Catt-Davidson-Walton work in charging up transmission lines like capacitors and checking what happens when you discharge the energy through a sampling oscilloscope.  They found evidence, discussed in detail in previous posts on this blog, that the existence of an electric field is represented by two opposite-travelling (gauge boson radiation) light velocity field quanta: while overlapping, the electric fields of each add up (reinforce) but the magnetic fields disappear because the curls of the magnetic field components cancel once there is equilibrium of the exchange radiation going along the same path in opposite directions.  Hence, electric fields are due to charged, massless gauge bosons with Poynting vectors, being exchanged between fermions.  Magnetic fields are cancelled out in certain configurations (such as that illustrated) but in other situations where you send two gauge bosons of opposite charge through one another (in the figure the gauge bosons modelled by electricity have the same charge), you find that the electric field vectors cancel out to give an electrically neutral field, but the magnetic field curls can then add up, explaining magnetism. The evidence for Fig. 2 is presented near the end of Catt’s March 1983 Wireless World article called ‘Waves in Space’ (typically unavailable on the internet, because Catt won’t make available the most useful of his papers for free): when you charge up x metres of cable to v volts, you do so at light speed, and there is no mechanism for the electromagnetic energy to slow down when the energy enters the cable.  The nearest page Catt has online about this is here: the battery terminals of a v volt battery are indeed at v volts before you connect a transmission line to them, but that’s just because those terminals have been charged up by field energy which is flowing in all directions at light velocity, so only half of the total energy, v/2 volts, is going one way and half is going the other way.  Connect anything to that battery and the initial (transient) output at light speed is only half the battery potential; the full battery potential only appears in a cable connected to the battery when the energy has gone to the far end of the cable at light speed and reflected back, adding to further in-flowing energy from the battery on the return trip, and charging the cable to v/2 + v/2 = v volts. 
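The charging transient just described can be sketched as a simple bounce diagram. The sketch below is my own illustration and assumes a step source whose internal impedance matches the line's characteristic impedance, an assumption not stated in the text above; under that assumption the input of an open-circuited line sits at v/2 until the launched energy has made one round trip of duration 2x/c, and at the full v thereafter.

```python
def sending_end_voltage(t, v=1.0, T=1.0):
    """Voltage at the line's input for a matched step source and an open-circuited far end.
    T = x/c is the one-way transit time along the line."""
    if t < 0:
        return 0.0
    if t < 2 * T:
        return v / 2          # only the forward-launched half of the energy is present yet
    return v                  # the reflected half has returned and superposed

for t in (0.5, 1.0, 1.9, 2.0, 3.0):
    print(f"t = {t:3.1f} (in units of x/c): V = {sending_end_voltage(t):.2f} v")
```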
Because electricity is so fast (light speed for the insulator), early investigators like Ampere and Maxwell (who candidly wrote in the 1873 edition of his Treatise on Electricity and Magnetism, 3rd ed., Article 574: ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second. …’) had no idea whatsoever of this crucial evidence which shows what electricity is all about.  So when you discharge the cable, instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get just what is predicted by Fig. 2: a pulse of v/2 volts taking 2x/c seconds to exit.  In other words, the half of the energy already moving towards the exit end, exits first.  That gives a pulse of v/2 volts lasting x/c seconds.  Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy.  This gives the second half of the output, another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse.  Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds.  This is experimental fact.  It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals).  Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a flaw inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work.  The Catt, Davidson and Walton history is summarised here [The original Catt-Davidson-Walton paper can be found here (first page) and here (second page) although it contains various errors.  My discussion of it is here.  For a discussion of the two major awards Catt received for his invention of the first ever practical wafer-scale memory to come to market despite censorship such as the New Scientist of 12 June 1986, p35, quoting anonymous sources who called Catt ‘either a crank or visionary’ – a £16 million British government and foreign sponsored 160 MB ‘chip’ wafer back in 1988 – see this earlier post and the links it contains.  Note that the editors of New Scientist are still vandals today.  Jeremy Webb, current editor of New Scientist, graduated in physics and solid state electronics, so he has no good excuse for finding this stuff – physics and electronics – over his head.  The previous editor to Jeremy was Dr Alum M. Anderson who on 2 June 1997 wrote to me the following insult to my intelligence: ‘I’ve looked through the files and can assure you that we have no wish to suppress the discoveries of Ivor Catt nor do we publish only articles from famous people.  You should understand that New Scientist is not a primary journal and does not publish the first accounts of new experiments and original theories. These are better submitted to an academic journal where they can be subject to the usual scientific review.  New Scientist does not maintain the large panel of scientific referees necessary for this review process. I’m sure you understand that science is now a gigantic enterprise and a small number of scientifically-trained journalists are not the right people to decide which experiments and theories are correct. 
My advice would be to select an appropriate journal with a good reputation and send Mr Catt’s work there. Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.’  Both Catt and I had already sent Dr Anderson abstracts from Catt’s peer-reviewed papers such as IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67. Also Proc. IEE, June 83 and June 87. Also a summary of the book “Digital Hardware Design” by Catt et. al., pub. Macmillan 1979.  I wrote again to Dr Anderson with this information, but he never published it; Catt on 9 June 1997 published his response on the internet which he carbon copied to the editor of New Scientist.  Years later, when Jeremy Webb had taken over, I corresponded with him by email.  The first time Jeremy responded was on an evening in Dec 2002, and all he wrote was a tirade about his email box being full when writing a last-minute editorial.  I politely replied that time, and then sent him by recorded delivery a copy of the Electronics World January 2003 issue with my cover story about Catt’s latest invention for saving lives.  He never acknowledged it or responded.  When I called the office politely, his assistant was rude and said she had thrown it away unread without him seeing it!  I sent another but yet again, Jeremy wasted time and didn’t publish a thing.  According to the Daily Telegraph, 24 Aug. 2005: ‘Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’  But even when Catt’s stuff was applied to cosmology in Electronics World Aug. 02 and Apr. 03, it was still ignored by New ScientistHelene Guldberg has written a ‘Spiked Science’ article called Eco-evangelism about Jeremy Webb’s bigoted policies and sheer rudeness, while Professor John Baez has publicised the decline of New Scientist due to the junk they publish in place of solid physics.  To be fair, Jeremy was polite to Prime Minister Tony Blair, however.  I should also add that Catt is extremely rude in refusing to discuss facts.  Just because he has a few new solid facts which have been censored out of mainstream discussion even after peer-reviewed publication, he incorrectly thinks that his vast assortment of more half-baked speculations are equally justified.  For example, he refuses to discuss or co-author a paper on the model here.  Catt does not understand Maxwell’s equations (he thinks that if you simply ignore 18 out of 20 long hand Maxwell differential equations and show that when you reduce the number of spatial dimensions from 3 to 1, then – since the remaining 2 equations in one spatial dimension contain two vital constants – that means that Maxwell’s equations are ‘shocking … nonsense’, and he refuses to accept that he is talking complete rubbish in this empty argument), and since he won’t discuss physics he is not a general physics  authority, although he is expert in experimental research on logic signals, e.g., his paper in IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67.] Fig. 3 - Coulomb force mechanism for electric charged massless gauge bosons Fig. 3: Coulomb force mechanism for electric charged massless gauge bosons.  The SU(2) electrogravity mechanism.  
Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them.  They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets.  The bullets hitting their backs have relatively smaller impulses since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe).  That explains the electromagnetic repulsion physically.

Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides.  The soldiers stand back to back, shielding one another's back, and fire their submachine guns outward at the crowd.  In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them.  When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges.  This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard's walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight-line summation will on average encounter similar numbers of positive and negative charges, as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.

Fig. 4: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) for electromagnetism relative to gravity according to the path-integral Yang-Mills formulation.

For gravity, the gravitons (like photons) are uncharged, so there is no adding up possible.  But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons.  Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves).  Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that it cancels the magnetic fields completely, preventing the self-inductance issue.
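Before continuing, here is a minimal Monte Carlo check of the drunkard's-walk statistics invoked above. It only illustrates the square-root scaling of a random walk, not the charge summation itself; the trial counts and function name are arbitrary choices of mine:

import math, random

# Root-mean-square net displacement of a 1-D random walk of N unit steps.
# The claim used above is that it grows like the square root of N, not like N and not zero.
def rms_displacement(num_steps, trials=2000):
    total = 0.0
    for _ in range(trials):
        position = sum(random.choice((-1, 1)) for _ in range(num_steps))
        total += position ** 2
    return math.sqrt(total / trials)

for n in (100, 400, 1600):
    print(n, round(rms_displacement(n), 1), "vs sqrt(N) =", round(math.sqrt(n), 1))

Typical output is close to 10, 20 and 40 for N = 100, 400 and 1600: the net result of N random contributions of either sign is of order sqrt(N), which is the multiplying factor used in the argument above.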
Returning to the self-inductance point: although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping.  This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down.  When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can't be observed.  This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10^80 times the force of gravity between a single pair of charges (as it would be if you simply add up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 10^40 times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes the electromagnetic strength only 10^40/10^80 = 10^-40 as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^80 randomly distributed charges, electromagnetism, as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign), is 10^40 times gravity.

You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn't, and it is like the diffusive drunkard's walk where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for an individual drunkard's walk, a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges. For some of the many quantitative predictions and tests of this model, see previous posts such as this one.

SU(2), as used in the SU(2)xU(1) electroweak symmetry group, applies only to left-handed particles.  So it's pretty obvious that half the potential application of SU(2) is being missed out somehow in SU(2)xU(1). SU(2) is fairly similar to U(1) in Fig. 1 above,
except that SU(2) involves 2^2 – 1 = 3 types of charges (positive, negative and neutral), which (by moving) generate 2 types of charged currents (positive and negative currents) and 1 neutral current (i.e., the motion of an uncharged particle produces a neutral current by analogy to the process whereby the motion of a charged particle produces a charged current), requiring 3 types of gauge boson (W+, W-, and Z0). For weak interactions we need the whole of SU(2)xU(1) because SU(2) models weak isospin by using electric charges as generators, while U(1) is used to represent weak hypercharge, which looks almost identical to Fig. 1 (which illustrates the use of U(1) for quantum electrodynamics).  The SU(2) isospin part of the weak interaction SU(2)xU(1) applies only to left-handed fermions, while the U(1) weak hypercharge part applies to both types of handedness, although the weak hypercharges of left and right handed fermions are not the same (see the earlier post for the weak hypercharges of fermions with different spin handedness).

It is interesting that the correct SU(2) symmetry predicts massless versions of the weak gauge bosons (W+, W-, and Z0).  Then the mainstream go to a lot of trouble to make them massive by adding some kind of speculative Higgs field, without considering whether the massless versions really exist as the proper gauge bosons of electromagnetism and gravity.  A lot of the problem is that the self-interaction of charged massless gauge bosons is a benefit in explaining the mechanism of electromagnetism (since two similarly charged electromagnetic energy currents flowing through one another cancel out each other's magnetic fields, preventing infinite self-inductance, and allowing charged massless radiation to propagate freely so long as it is exchange radiation in equilibrium, with equal amounts flowing from charge A to charge B as flow from charge B to charge A; see Fig. 5 of the earlier post here).  Instead of seeing how the mutual interactions of charged gauge bosons allow exchange radiation to propagate freely without complexity, the mainstream opinion is that this might (it can't) cause infinities because of the interactions.  Therefore, the mainstream (false) consensus is that weak gauge bosons have to have a great mass, simply in order to remove an enormous number of unwanted complex interactions!  They simply are not looking at the physics correctly.

U(2) and unification

Dr Woit has some ideas on how to proceed with the Standard Model: 'Supersymmetric quantum mechanics, spinors and the standard model', Nuclear Physics, v. B303 (1988), pp. 329-42; and 'Topological quantum theories and representation theory', Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45. He summarises the approach in '… [the theory] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c (4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as Λ*(C^n) applied to a 'vacuum' vector.
'Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons… A generation of quarks has the same transformation properties except that one has to take the 'vacuum' vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero. The above comments are … just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles…'

The SU(3) strong force (colour charge) gauge symmetry

The SU(3) strong interaction – which has 3 colour charges (red, blue, green) and 3^2 – 1 = 8 gauge bosons – is again virtually identical to the U(1) scheme in Fig. 1 above (except that there are 3 charges and 8 spin-1 gauge bosons called gluons, instead of the alleged 1 charge and 1 gauge boson in the flawed U(1) model of QED, and the 8 gluons carry colour charge, whereas the photons of U(1) are uncharged).  The SU(3) symmetry is actually correct because it is an empirical model based on observed particle physics, and the fact that the gauge bosons of SU(3) do carry colour makes it a proper causal model of short range strong interactions, unlike U(1).  For an example of the evidence for SU(3), see the illustration and history discussion in this earlier post.

SU(3) is based on an observed (empirical, experimentally determined) particle physics symmetry scheme called the eightfold way.  This is pretty solid experimentally, and summarised all the high energy particle physics experiments from about the end of WWII to the late 1960s.  SU(2) describes the mesons which were originally studied in natural cosmic radiation (pions were the first mesons discovered, and they were found in cosmic radiation from outer space in 1947, at Bristol University).  A type of meson, the pion, is the long-range mediator of the strong nuclear force between nucleons (neutrons and protons), which normally prevents the nuclei of atoms from exploding under the immense Coulomb repulsion of having many protons confined in the small space of the nucleus.  The pion was accepted as the gauge boson of the strong force predicted by the Japanese physicist Yukawa, who in 1949 was awarded the Nobel Prize for predicting that meson right back in 1935.  So there is plenty of evidence for both SU(3) colour forces and SU(2) isospin.  The problems all arise from U(1).

To give an example of how SU(3) works well with charged gauge bosons, gluons, remember that this property of gluons is responsible for the major discovery of asymptotic freedom of confined quarks.  What happens is that the mutual interference of the 8 different types of charged gluons with pairs of virtual quarks and virtual antiquarks at very small distances between particles (high energy) weakens the colour force.  The gluon-gluon interactions screen the colour charge at short distances because each gluon carries two colour charges.  If each gluon carried just one colour charge, like the virtual fermions in pair production in QED, then the screening effect would be most significant at large, rather than short, distances.
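The qualitative point here – that the effective colour coupling weakens as the collision energy rises – can be illustrated with the standard one-loop running-coupling formula of QCD. The formula is textbook renormalisation-group material rather than anything specific to this post, and the QCD scale and flavour number below are assumed round values, so the numbers are indicative only:

import math

def alpha_s(q_gev, lambda_gev=0.2, n_flavours=5):
    """One-loop QCD running coupling: alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))."""
    return 12 * math.pi / ((33 - 2 * n_flavours) * math.log(q_gev**2 / lambda_gev**2))

for q in (2.0, 7.0, 91.0, 200.0):
    print(f"Q = {q:6.1f} GeV   alpha_s ~ {alpha_s(q):.2f}")

With these assumed inputs the coupling comes out around 0.36 at 2 GeV, 0.23 at 7 GeV and just over 0.1 at 91-200 GeV, falling as the energy rises (asymptotic freedom), the same trend as the experimentally quoted values discussed below.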
Because the effective colour charge diminishes at very short distances, over a particular range of distances this fall in colour charge as you get closer offsets the inverse-square force law effect (the divergence of effective field lines), so the quarks are completely free – within given limits of distance – to move around within a neutron or a proton.  This is asymptotic freedom, an idea from SU(3) that was published in 1973 and resulted in Nobel prizes in 2004.  Although colour charges are confined in this way, some strong force 'leaks out' as virtual hadrons like neutral pions and rho particles, which account for the strong force on the scale of nuclear physics (a much larger scale than is the case in fundamental particle physics): the mechanism here is similar to the way that atoms which are electrically neutral as a whole can still attract one another to form molecules, because there is a residual of the electromagnetic force left over.  The strong interaction weakens exponentially in addition to the usual fall in potential (1/distance) or force (inverse square law), so at large distances compared to the size of the nucleus it is effectively zero.  Only electromagnetic and gravitational forces are significant at greater distances.  The weak force is very similar to the electromagnetic force but is short ranged because the gauge bosons of the weak force are massive.  The massiveness of the weak force gauge bosons also reduces the strength of the weak interaction compared to electromagnetism.

The mechanism for the fall in colour charge coupling strength due to interference of charged gauge bosons is not the whole story.  Where is the energy of the field going when the effective charge falls as you get closer to the middle?  Obvious answer: the energy lost from the strong colour charges goes into the electromagnetic charge.  Remember, short-range field charges fall as you get closer to the particle core, while electromagnetic charges increase; these are empirical facts.  The strong charge decreases sharply from about 137e at the greatest distances it extends to (via pions) to around 0.15e at 91 GeV, while over the same range of scattering energies (which are approximately inversely proportional to the distance from the particle core), the electromagnetic charge has been observed to increase by 7%.  We need to apply a new type of continuity equation to the conservation of gauge boson exchange radiation energy of all types, in order to deduce vital new physical insights from the comparison of these figures for charge variation as a function of distance.  The suggested mechanism in a previous post is: 'We have to understand Maxwell's equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy.  If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn't needed and second is quantitatively plain wrong.
At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.  So the strong force falls off in strength as you get closer by higher energy collisions, while the electromagnetic force increases!  Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.'

Force strengths as a function of distance from a particle core

I've written previously that the existing graphs showing U(1), SU(2) and SU(3) force strengths as a function of energy are pretty meaningless; they do not specify which particles are under consideration.  If you scatter leptons at energies up to those which so far have been available for experiments, they don't exhibit any strong force SU(3) interactions.  What should be plotted is effective strong, weak and electromagnetic charge as a function of distance from particles.  This is easily deduced because the distance of closest approach of two charged particles in a head-on scatter reaction is easily calculated: as they approach with a given initial kinetic energy, the repulsive force between them increases, which slows them down until they stop at a particular distance, and they are then repelled away.  So you simply equate the initial kinetic energy of the particles with the potential energy of the repulsive force as a function of distance, and solve for distance.  The initial kinetic energy is radiated away as radiation as they decelerate.  There is some evidence from particle collision experiments that the SU(3) effective charge really does decrease as you get closer to quarks, while the electromagnetic charge increases.  Levine and Koltick published in PRL (v. 78, 1997, no. 3, p. 424) the result that the electron's charge increases from e to 1.07e as you go from low energy physics to collisions of electrons at an energy of 91 GeV, i.e., a 7% increase in charge.  At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. The full investigation of running couplings and the proper unification of the corrected Standard Model is the next priority for detailed investigation.  (Some details of the mechanism can be found in several other recent posts on this blog, e.g., here.)

'The observed coupling constant for W's is much the same as that for the photon – in the neighborhood of j [Feynman's symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W's and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as 'electroweak symmetry breaking'.] Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what's called the 'weak interactions' (interactions with W's) into one quantum theory, and they did it.
But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak. It's very clear that the photon and the three W's [W+, W-, and W0/Z0 gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the 'seams' [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.' [Emphasis added.] – R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Mechanism for loop quantum gravity with spin-1 (not spin-2) gravitons

Peter Woit gives a discussion of the basic principle of LQG in his book. I watched Lee Smolin's Perimeter Institute lectures, "Introduction to Quantum Gravity", and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. It's pretty evident that the quantum gravity loops are best thought of as being the closed exchange cycles of gravitons going between masses (or other gravity field generators like energy fields), to and fro, in an endless cycle of exchange.  That's the loop mechanism: the closed cycle of Yang-Mills exchange radiation being exchanged from one mass to another, and back again, continually. According to this idea, the graviton interaction nodes are associated with the 'Higgs field quanta' which generate mass.  Hence, in a Penrose spin network, the vertices represent the points where quantized masses exist. Some predictions from this are here. Professor Penrose's interesting original article on spin networks, Angular Momentum: An Approach to Combinatorial Space-Time, published in 'Quantum Theory and Beyond' (Ted Bastin, editor), Cambridge University Press, 1971, pp. 151-80, is available online, courtesy of Georg Beyerle and John Baez.

Update (25 June 2007): Lubos Motl versus Mark McCutcheon's book The Final Theory

Seeing that there is some alleged evidence that mainstream string theorists are bigoted charlatans, string theorist Dr Lubos Motl, who is soon leaving his Assistant Professorship at Harvard, made me uneasy when he attacked Mark McCutcheon's book The Final Theory. Motl wrote a blog post attacking McCutcheon's book by saying that: 'Mark McCutcheon is a generic arrogant crackpot whose IQ is comparable to chimps.' Seeing that Motl is a stringer, this kind of abuse coming from him sounds like praise to my ears. Maybe McCutcheon is not so wrong? Anyway, at lunch time today, I was in Colchester town centre and needed to look up a quotation in one of Feynman's books. Directly beside Feynman's QED book, on the shelf of Colchester Public Library, was McCutcheon's chunky book The Final Theory. I found the time to look up what I wanted and to read all the equations in McCutcheon's book. Motl ignores McCutcheon's theory entirely, and Motl is being dishonest when claiming: 'his [McCutcheon's] unification is based on the assertion that both relativity as well as quantum mechanics is wrong and should be abandoned.' This sort of deception is easily seen, because it has nothing to do with McCutcheon's theory!
McCutcheon's The Final Theory is full of boring controversy or error, such as the sort of things Motl quotes, but the core of the theory is completely different and takes up just two pages: 76 and 194. McCutcheon claims there's no gravity because the Earth's radius is expanding at an accelerating rate equal to the acceleration of gravity at Earth's surface, g = 9.8 m/s^2. Thus, in one second, Earth's radius (in McCutcheon's theory) expands by (1/2)gt^2 = 4.9 m.

I showed in an earlier post that there is a simple relationship between Hubble's empirical redshift law for the expansion of the universe (which can't be explained by tired light ideas and so is a genuine observation) and acceleration.  Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^2.

McCutcheon instead defines a 'universal atomic expansion rate' on page 76 of The Final Theory which divides the increase in radius of the Earth over a one second interval (4.9 m) into the Earth's radius (6,378,000 m, or 6.378 x 10^6 m). I don't like the fact he doesn't specify a formula properly to define his 'universal atomic expansion rate'. McCutcheon should be clear: he is dividing (1/2)gt^2 into the radius of the Earth, R_E, to get his 'universal atomic expansion rate', X_A: X_A = (1/2)gt^2/R_E, which is a dimensionless ratio. On page 77, McCutcheon honestly states: 'In expansion theory, the gravity of an object or planet is dependent on its size. This is a significant departure from Newton's theory, in which gravity is dependent on mass.'

At first glance, this is a crazy theory, requiring Earth (and all the atoms in it, for he makes the case that all masses expand) to expand much faster than the rate of expansion of the universe. However, on page 194, he argues that the outward acceleration of an atom of radius R is: a = X_A R. Now the first thing to notice is that acceleration has units of m/s^2 and R has units of m. So this equation is false dimensionally if X_A = (1/2)gt^2/R_E. The only way to make a = X_A R accurate dimensionally is to change the definition of X_A by dropping t^2, going from the dimensionless ratio (1/2)gt^2/R_E to the ratio X_A = (1/2)g/R_E, which has the correct units of s^-2. So we end up with this accurate version of McCutcheon's formula for the outward acceleration of an atom of radius R (we will use the average radius of orbit of the chaotic electron path in the ground state of a hydrogen atom for R, which is 5.29 x 10^-11 m): a = X_A R = [(1/2)g/R_E]R, which can be equated to Newton's formula for the acceleration due to mass m, which is 1.67 x 10^-27 kg: a = [(1/2)g/R_E]R = mG/R^2. Hence, McCutcheon on page 194 calculates a value for G by rearranging these equations: G = (1/2)gR^3/(R_E m) = (1/2)(9.81)(5.29 x 10^-11)^3/[(6.378 x 10^6)(1.67 x 10^-27)] = 6.82 x 10^-11 m^3/(kg s^2), which is only 2% higher than the measured value of G = 6.673 x 10^-11 m^3/(kg s^2). After getting this result on page 194, McCutcheon remarks on page 195: 'Recall … that the value for X_A was arrived at by measuring a dropped object in relation to a hypothesized expansion of our overall planet, yet here this same value was borrowed and successfully applied to the proposed expansion of the tiniest atom.'

We can compress McCutcheon's theory: what he is basically saying is the scaling ratio a = (1/2)g(R/R_E), which when set equal to Newton's law mG/R^2 rearranges to give G = (1/2)gR^3/(R_E m). However, McCutcheon's own formula is just his guessed scaling law: a = (1/2)g(R/R_E).
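As a quick numerical check of the figures just quoted (this is nothing more than repeating McCutcheon's arithmetic with standard constants, using the notation defined above):

# Reproducing the page-194 estimate G = (1/2) g R^3 / (R_E * m), using surface gravity g,
# the Earth's radius R_E, the hydrogen ground-state orbit radius R, and the hydrogen
# atom (proton) mass m, then comparing with the measured value of G.
g   = 9.81         # m/s^2
R_E = 6.378e6      # m
R   = 5.29e-11     # m
m   = 1.67e-27     # kg

G_estimate = 0.5 * g * R**3 / (R_E * m)
G_measured = 6.674e-11

print(G_estimate)                    # ~6.8e-11 m^3/(kg s^2)
print(G_estimate / G_measured - 1)   # ~0.02, i.e. about 2% high, as stated above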
Although this quite accurately scales the acceleration of gravity at the Earth's surface (g at R_E) to the acceleration of gravity at the ground state orbit radius of a hydrogen atom (a at R), it is not clear if this is just a coincidence, or if it really has anything to do with McCutcheon's expanding matter idea. He did not derive the relationship; he just defined it by dividing the increased radius into the Earth's radius and then using this ratio in another expression which is again defined without a rigorous theory underpinning it. In its present form, it is numerology. Furthermore, the theory is not universal: the basic scaling law that McCutcheon obtains does not predict the gravitational attraction of the two balls Cavendish measured; instead it only relates the gravity at the Earth's surface to that at the surface of an atom, and then seems to be guesswork or numerology (although it is an impressively accurate 'coincidence'). It doesn't have the universal application of Newton's law. There may be another reason why a = (1/2)g(R/R_E) is a fairly accurate and impressive relationship.

Since I regularly oppose censorship based on fact-ignoring consensus and other types of elitist fascism in general (fascism being best defined as the primitive doctrine that 'might is right', i.e., that whoever speaks loudest or has the biggest gun is scientifically correct), it is only correct that I write this blog post to clarify the details that really are interesting. Maybe McCutcheon could make his case better to scientists by putting the derivation and calculation of G on the front cover of his book, instead of a sunset. Possibly he could justify his guesswork idea to crackpot string theorists by some relativistic obfuscation invoking Einstein, such as: 'According to relativity, it's just as reasonable to think of the Earth zooming upwards to hit you when you jump off a cliff, as to think that you are falling downward.' If he really wants to go down the road of mainstream hype and obfuscation, he could maybe do even better by invoking the popular misrepresentation of Copernicus: 'According to Copernicus, the observer is at "no special place in the universe", so it is as justifiable to consider the Earth's surface accelerating upwards to meet you, as vice-versa. Copernicus travelled all throughout the entire universe on a spaceship or a flying carpet to confirm the crackpot modern claim that we are not at a special place in the universe, you know.' The string theorists would love that kind of thing (i.e., assertions that there is no preferred reference frame, based on lies) seeing that they think spacetime is 10 or 11 dimensional, based on lies.

My calculation of G is entirely different, being due to a causal mechanism of graviton radiation, and it has detailed empirical (non-speculative) foundations and a derivation which predicts G in terms of the Hubble parameter and the local density: G = (3/4)H^2/(ρπe^3), plus a lot of other things about cosmology, including the expansion rate of the universe at long distances, predicted in 1996 (two years before it was confirmed by Saul Perlmutter's observations in 1998). However, this is not necessarily incompatible with McCutcheon's theory. There are such things as mathematical dualities, where completely different calculations are really just different ways of modelling the same thing. McCutcheon's book is not just the interesting sort of calculation above, sadly.
It also contains a large amount of drivel (particularly in the first chapter) about his alleged flaw in the equation W = Fd, i.e., work energy = force applied x distance moved in the direction that the force operates. McCutcheon claims that there is a problem with this formula, and that work energy is being used continuously by gravity, violating conservation of energy. On page 14 (2004 edition) he claims falsely: 'Despite the ongoing energy expended by Earth's gravity to hold objects down and the moon in orbit, this energy never diminishes in strength…' The error McCutcheon is making here is that no energy is used up unless gravity is making an object move. So the gravity field is not depleted of a single joule of energy when an object is simply held in one place by gravity. For orbits, the gravitational force acts at right angles to the direction the moon is moving in its orbit, so gravity is not using up energy in doing work on the moon. If the moon was falling straight down to earth, then yes, the gravitational field would be losing energy to the kinetic energy that the moon would gain as it accelerated. But it isn't falling: the moon is not moving towards us along the lines of gravitational force; instead it is moving at right angles to those lines of force. McCutcheon does eventually get to this explanation on page 21 of his book (2004 edition). But this just leads him to write several more pages of drivel about the subject: by drivel, I mean philosophy.

On a positive note, McCutcheon near the end of the book (pages 297-300 of the 2004 edition) correctly points out that where two waves of equal amplitude and frequency are superimposed (i.e., travel through one another) exactly out of phase, their waveforms cancel out completely due to 'destructive interference'. He makes the point that there is an issue for conservation of energy where such destructive interference occurs. For example, Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment. Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out? Clearly, this would violate conservation of energy! Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject. Chalmers summed the Feynman path integral for the two slits and found that if Young's explanation was correct, then half of the total energy would be unaccounted for in the dark fringes. The photons are not arriving at the dark fringes. Instead, they arrive in the bright fringes. The interference of radio waves and other phased waves is also known as the Hanbury-Brown-Twiss effect, whereby if you have two radio transmitter antennae, the signal that can be received depends on the distance between them: moving them slightly apart or together changes the relative phase of the transmitted signal from one with respect to the other, cancelling the signal out or reinforcing it. (It depends on the frequencies and amplitudes as well: if both transmitters are on the same frequency and have the same output amplitude and radiated power, then perfectly destructive interference occurs if they are exactly out of phase, and perfect reinforcement – constructive interference – occurs if they are exactly in phase.) This effect also actually occurs in electricity, replacing Maxwell's mechanical 'displacement current' of vacuum dielectric charges.
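A minimal numerical illustration of the energy bookkeeping being discussed (generic two-source interference, not Chalmers' actual path-integral calculation): two equal-amplitude waves cancel when exactly out of phase and quadruple in power when in phase, but averaged over all relative phases the power is just the sum of the two separate powers, so the energy 'missing' from the dark fringes turns up in the bright ones. The function name and amplitudes below are mine:

import math

def combined_power(phase_difference, amplitude=1.0):
    """Time-averaged power of two equal-amplitude waves with a fixed relative phase: A^2 * (1 + cos(phi))."""
    return amplitude**2 * (1.0 + math.cos(phase_difference))

print(combined_power(0.0))        # in phase: 2.0 = four times a single wave's 0.5 (bright fringe)
print(combined_power(math.pi))    # exactly out of phase: 0.0 (dark fringe)

# Averaged over all relative phases, the power equals the two separate powers added (0.5 + 0.5 = 1.0):
samples = [combined_power(2 * math.pi * k / 1000) for k in range(1000)]
print(sum(samples) / len(samples))   # ~1.0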
Feynman quotation The Feynman quotation I located is this: ‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’ – R. P. Feynman, QED, Penguin, London, 1990, pp. 84-5. (Emphasis added in bold.) Compare that to: ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981. ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset the orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal, there is no metaphysics involved. The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare. Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions such as superimposed spin states) are a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between time dependent and time independent versions of Schroedinger at time of measurement. 
Just as the Copenhagen Interpretation was supported by lies (such as von Neumann’s false ‘disproof’ of hidden variables in 1932) and fascism (such as the way Bohm was treated by the mainstream when he disproved von Neumann’s ‘proof’ in the 1950s), string ‘theory’ (it isn’t a theory) is supported by similar tactics which are political in nature and have nothing to do with science: ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006. ‘Superstring/M-theory is the language in which God wrote the world.’ – Assistant Professor Lubos Motl, Harvard University, string theorist and friend of Edward Witten, quoted by Professor Bert Schroer, (p. 21). ‘The mathematician Leonhard Euler … gravely declared: “Monsieur, (a + bn)/n = x, therefore God exists!” … peals of laughter erupted around the room …’ – ‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp 194-195. [Quoted by Tony Smith.] Feynman predicted today’s crackpot run world in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3): ‘The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculation] you can immediately see that they are wrong, so that does not count. … There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.’ In the same book Feynman states: Sent: 02/01/03 17:47 Subject: Your_manuscript LZ8276 Cook {gravity unification proof} Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters ‘If you are not criticized, you may not be doing much.’ – Donald Rumsfeld. The Standard Model, which Edward Witten has done a lot of useful work on (before he went into string speculation), is the best tested physical theory. Forces result from radiation exchange in spacetime. 
The big bang matter's speed is 0 to c over spacetime of 0 to 15 billion years, so the outward force is F = ma = 10^43 N. Newton's 3rd law implies an equal inward force, which from the Standard Model possibilities will be carried by gauge bosons (exchange radiation), predicting current cosmology, gravity and the contraction of general relativity, other forces and particle masses. 'A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.' – Francis Bacon, Novum Organum. This predicts gravity in a quantitative, checkable way, from other constants which are being measured ever more accurately and will therefore result in more delicate tests. As for the mechanism of gravity, the dynamics here – which predict gravitational strength and various other observable and further checkable aspects – are consistent with LQG and Lunsford's gravitational-electromagnetic unification, in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble's law).

'Light … "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)' – Feynman, QED, Penguin, 1990, page 54.

That's wave-particle duality explained. The path integrals don't mean that the photon goes on all possible paths but, as Feynman says, only a "small core of nearby space". The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn't take every path: most of the energy is transferred along the classical path, and near it. Similarly, you find people saying that QFT says that the vacuum is full of loops of annihilation-creation. When you check what QFT says, it actually says that those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, i.e., below the IR cutoff or beyond 1 fm, then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, i.e., at zero distance from a particle, then the loops would have infinite energy and momenta and the effects of those loops on the field would be infinite, again causing problems. So the vacuum simply isn't full of annihilation-creation loops (they only extend out to 1 fm around particles). The LQG loops are entirely different (exchange radiation) and cause gravity, not cosmological constant effects. Hence no dark energy mechanism can be attributed to the charge creation effects in the Dirac sea, which exists only close to real particles.

'By struggling to find a mathematically precise formulation, one often discovers facets of the subject at hand that were not apparent in a more casual treatment. And, when you succeed, rigorous results ("Theorems") may flow from that effort.

'But, particularly in more speculative subject, like Quantum Gravity, it's simply a mistake to think that greater rigour can substitute for physical input.
The idea that somehow, by formulating things very precisely and proving rigorous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, blog entry on The Role of Rigour. ‘[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.’ – Editorial, p5 of the 9 Dec 06 issue of New Scientist. Far easier to say anything else is crackpot. String isn’t, because it’s mainstream, has more people working on it, and has a large number of ideas connecting one another. No ‘lone genius’ can ever come up with anything more mathematically complex, and amazingly technical than string theory ideas, which are the result of decades of research by hundreds of people. Ironically, the core of a particle is probably something like a string, albeit not the M-theory 10/11 dimensional string, just a small loop of energy which acquires mass by coupling to an external mass-giving bosonic field. It isn’t the basic idea of string which is necessarily wrong, but the way the research is done and the idea that by building a very large number of interconnected buildings on quicksand, it will be absurd for disaster to overcome the result which has no solid foundations. In spacetime, you can equally well interpret recession of stars as a variation of velocity with time past as seen from our frame of reference, or a variation of velocity with distance (the traditional ‘tunnel-vision’ due to Hubble). Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of “predicting gravity”, he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above leading to a prediction of gravity etc is an “alternative to currently accepted theories”. Where is the theory in string? Where is the theory in M-”theory” which predicts G? It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn’t an alternative to a currently accepted theory. It’s tested and validated science, contrasted to currently accepted religious non-theory explaining an unobserved particle by using unobserved extra dimensional guesswork. I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!) There is some dark matter in the form of the mass of neutrinos and other radiations which will be attracted around galaxies and affect their rotation, but it is bizarre to try to use discrepancies in false theories as “evidence” for unobserved “dark energy” and “dark matter”, neither of which has been found in any particle physics experiment or detector in history. The “direct evidence of dark matter” seen in photos of distorted images don’t say what the “dark matter” is and we should remember that Ptolemy’s followers were rewarded for claiming direct evidence of the earth centred universe was apparent to everyone who looked at the sky. 
Science requires evidence, facts, and not faith based religion which ignores or censors out the evidence and the facts. The reason for current popularity of M-theory is precisely that it claims to not be falsifiable, so it acquires a religious or mysterious allure to quacks, just as Ptolemy’s epicycles, phlogiston, caloric, Kelvin’s vortex atom and Maxwell’s mechanical gear box aether did in the past. Dr Peter Woit explains the errors and failures of mainstream string theory in his book Not Even Wrong (Jonathan Cape, London, 2006, especially pp 176-228): using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. By claiming to ‘predict’ everything conceivable, it predicts nothing falsifiable at all and is identical to quackery, although string theory might contain some potentially useful spin-offs such as science fiction and some mathematics (similarly, Ptolemy’s epicycles theory helped to advance maths a little, and certainly Maxwell’s mechanical theory of aether led ultimately to a useful mathematical model for electromagnetism; Kelvin’s false vortex atom also led to some ideas about perfect fluids which have been useful in some aspects of the study of turbulence and even general relativity). Even if you somehow discovered gravitons, superpartners, or branes, these would not confirm the particular string theory model anymore than a theory of leprechauns would be confirmed by discovering small people. Science needs quantitative predictions. Dr Imre Lakatos explains the way forward in his article ‘Science and Pseudo-Science’: Really, there is nothing more anyone can do after making a long list of predictions which have been confirmed by new measurements, but are censored out of mainstream publications by the mainstream quacks of stringy elitism. Prof Penrose wrote this depressing conclusion well in 2004 in The Road to Reality so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition: On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’ On page 1026, Penrose gets down to the business of how science is really done: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. 
Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’ ‘Cargo cult science is defined by Feynman as a situation where a group of people try to be scientists but miss the point. Like writing equations that make no checkable predictions… Of course if the equations are impossible to solve (like due to having a landscape of 10^500 solutions that nobody can handle), it’s impressive, and some believe it. A winning theory is one that sells the most books.’ – Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl’s gauge symmetry principle The previous post plus a re-reading of Professor Zee’s Quantum Field Theory in a Nutshell (Princeton, 2003) suggests a new formulation for quantum gravity, the mechanism and mathematical predictions of which were given two posts ago.The sum over histories for real particles is used to work out the path of least action, such as the path of a photon of light which takes the least time to bounce off a mirror.  You can do the same thing for the path of a real electron, or the path of a drunkard’s walk.  The integral tells you the effective path taken by the particle, or the probability of any given path being taken, from many possible paths. For gauge bosons or vector bosons, i.e., force-mediating radiation, the role of the path integral is no longer to find the probability of a path being taken or the effective path.  Instead, gauge bosons are exchanged over many paths simultaneously.  Hence there are two totally different applications of path integrals we are concerned with: • Applying the path integral for real particles involves evaluating a lot of paths, most of which are not actually taken (the real particle takes only one of those paths, although as Feynman said, it uses a ‘small core of nearby space’ so it can be affected by both of two slits in a screen, provided those slits are close together, within a transverse wavelength or so, so the small core of paths taken overlap both slits). • Applying the path integral for gauge bosons involves evaluating a lot of paths which are all actually being taken, because the extensive force field is composed of lots of gauge bosons being exchanged between charges, really going all over the place (for long-range gravity and electromagnetism). In both cases the path taken by a given real particle or a single gauge boson must be composed of straight lines in between interactions (see Fig. 1 of previous post) because the curvature of general relativity appears to be a classical approximation to a lot of small discrete deflections due to discrete interactions with field quanta (sometimes curves are used in Feynman diagrams for convenience, but according to quantum field theory all mechanisms for curvature actually involve lots of little deflections by the quanta of fields). The calculations of quantum gravity, two posts ago, use geometry to evaluate these straight-line gauge boson paths for gravity and electromagnetism.  Presumably, translating the simplicity of the calculations based on geometry in that post into a path integrals will appeal more to the stringy mainstream.  Loop quantum gravity methods of summing up a lot of interaction graphs will be used to do this.  What is vital are directional asymmetries, which transform a perfect symmetry of gauge boson exchanges in all directions into a force, represented by the geometry of Fig. 1 (below).  
One way to convert that geometry into a formula is to consider the inward-outward travelling isotropic graviton exchange radiation by using the divergence operator.  I think this can be done easily because there are two useful physical facts which make the geometry even simpler than appears from Fig. 1: first, the shield area x in Fig. 1 is extremely small, so the asymmetry cone can never have a large base in any practical situation; second, by Newton's proof, the inverse-square law gravity force from a lot of little particles spread out in the Earth is the same as you get by mathematically assuming that all the little masses (fundamental particles) are not spread throughout a large planet but are all at the centre.  So a path integral formulation for the geometry of Fig. 1 is simple.

Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – not necessarily spin-2 gravitons, but spin-1 gravitons which cause attraction by simply pushing things, as this allows predictions as proved in the earlier post – from all directions except where there is an asymmetry produced by the mass which shields that radiation). By Newton's 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram: (force of gravity) = (total inward force) x (cross-sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield's cross-sectional area and the ratio R^2/r^2) / (total spherical area with radius R).  (Full proof here.)

Weyl's gauge symmetry principle

A symmetry is anything that doesn't change as the result of a transformation.  For example, the colour of a plastic pen doesn't change when you rotate it, so the colour is a symmetry of the pen when the transformation type is a rotation.  If you transform the plastic pen by burning it, colour is not a symmetry of the pen (unless the pen was the colour of carbon in the first place). A gauge symmetry is one where scalable quantities (gauges) are involved.  For example, there is a symmetry in the fact that the same amount of energy is required to lift a 1 kg mass up by a height of 1 metre, regardless of the original height of the mass above sea level.  (This example is not completely true, but it is almost true because the fall in gravity acceleration with height is small: gravity is only about 0.3% weaker at the top of the tallest mountain than it is at sea level.)
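The parenthetical figure above – that gravity is only about 0.3% weaker at the top of the tallest mountain than at sea level – is easy to check from the inverse-square law (a rough sketch; the radius and height below are round values):

# Fractional weakening of g at height h above a sphere of radius R_E:
# g(h)/g(0) = (R_E / (R_E + h))^2.
R_E = 6.371e6   # mean Earth radius, m (assumed round value)
h   = 8.85e3    # roughly the height of Everest, m (assumed round value)

ratio = (R_E / (R_E + h)) ** 2
print(1 - ratio)   # ~0.0028, i.e. g is about 0.3% weaker at the summit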
The female mathematician Emmy Noether in 1915 proved a great theorem which states that any continuous symmetry leads to a conservation law, e.g., the symmetry of physical laws (due to these laws remaining the same while time passes) leads to the principle of conservation of energy!  This particularly impressive example of Noether’s theorem does not strictly apply to forces over very long time scales, because, as proved, fundamental force coupling constants (relative charges) increase in direct proportion to the age of the universe.  However, the theorem is increasingly accurate as the time scale involved is reduced, and the inaccuracy becomes trivial when the time considered is small compared to the age of the universe.

At the end of Quantum Field Theory in a Nutshell (at page 457), Zee points out that Maxwell’s equations unexpectedly contained two hidden symmetries, Lorentz invariance and gauge invariance: ‘two symmetries that, as we now know, literally hold the key to the secrets of the universe.’ He then argues that Maxwell’s long-hand differential equations masked these symmetries and it took Einstein’s genius to uncover them (special relativity for Lorentz invariance, general relativity for the tensor calculus with the repeated-indices summation convention, e.g., mathematical symbol compressions by defining notation which looks something like: F_ab = 2∂_[a A_b] = ∂_a A_b – ∂_b A_a).  This is actually a surprisingly good point to make.

Zee, judging from what his Quantum Field Theory in a Nutshell book contains, does not seem to be aware how useful Heaviside’s vector calculus is (Heaviside compressed Maxwell’s 20 equations into 4 field equations plus a continuity equation for conservation of charge, while Einstein merely compressed the 4 field equations into 2, a less impressive feat but one leading to less intuitive equations; divergence and curl equations in vector calculus describe simple divergence of radial electric field lines which you can picture, and simple curling of electric or magnetic field lines which again are easy to picture).  In addition, the way relativity comes from Maxwell’s equations is best expressed non-mathematically, just because it is so simple: if you move relative to an electric charge you get a magnetic field, if you don’t move relative to an electric charge you don’t see the magnetic field.

Zee adds: ‘it is entirely possible that an insightful reader could find a hitherto unknown symmetry hidden in our well-studied field theories.’ Well, he could start with the insight that U(1) doesn’t exist, as explained in the previous post.  There are no single charged leptons about, only pairs of them.  They are created in pairs, and are annihilated as pairs.  So really you need some form of SU(2) symmetry to replace U(1).  Such a replacement as a bonus predicts gravity and electromagnetism quantitatively, giving the coupling constants for each and the complete mechanism for each force. Just to be absolutely lucid on this, so that there can be no possible confusion:

• SU(2) correctly asserts that quarks form quark-antiquark doublets due to the short-range weak force mediated by massive weak gauge bosons;

• U(1) falsely asserts that leptons do not form doublets due to the long-range electromagnetic force mediated by mass-less electromagnetic gauge bosons.
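Since the field strength notation above may look opaque, here is a minimal finite-difference sketch (not anything from Zee’s book; the grid, potentials and gauge function are arbitrary illustrative choices) showing concretely what gauge invariance means for F_ab = ∂_a A_b − ∂_b A_a: adding the gradient of any smooth function to the potential leaves the field strength unchanged.

```python
import numpy as np

# Two 'spacetime' coordinates on a small grid (t and x), purely illustrative.
t = np.linspace(0.0, 1.0, 201)
x = np.linspace(0.0, 1.0, 201)
T, X = np.meshgrid(t, x, indexing="ij")

# An arbitrary smooth potential A_a = (A_0, A_1).
A0 = np.sin(2 * np.pi * X)
A1 = T * X**2

def field_strength(A0, A1):
    # F_01 = d(A_1)/dt - d(A_0)/dx, evaluated by numerical differentiation.
    dA1_dt = np.gradient(A1, t, axis=0)
    dA0_dx = np.gradient(A0, x, axis=1)
    return dA1_dt - dA0_dx

F_before = field_strength(A0, A1)

# Gauge transformation: A_a -> A_a + d(chi)/dx_a for an arbitrary smooth chi.
chi = np.cos(3 * T + 2 * X)
A0_gauged = A0 + np.gradient(chi, t, axis=0)
A1_gauged = A1 + np.gradient(chi, x, axis=1)

F_after = field_strength(A0_gauged, A1_gauged)

# The difference is at rounding-error level, i.e. F is unchanged by the gauge transformation.
print(np.max(np.abs(F_after - F_before)))
```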
The correct picture to replace SU(2)xU(1) is based on the same principle for SU(2) but a replacement of U(1) by another effect of SU(2):

• SU(2) also correctly asserts that leptons form lepton-antilepton doublets (although since the binding force is long-range electromagnetism instead of short-range massive weak gauge bosons, the lepton-antilepton doublets are not confined in a small place because the range over which the electromagnetic force operates is simply far greater than that of the weak force).

Solid experimentally validated evidence for this (including mechanisms and predictions of gravity and electromagnetism strengths, etc., from massless SU(2) gauge boson interactions which automatically explain gravity and electromagnetism): here.  Sheldon Glashow’s early expansion of the original Yang-Mills SU(2) gauge interaction symmetry to unify electromagnetism and weak interactions is quoted here.  More technical discussion on the relationship of leptons to quarks implied by the model: here.

However, innovation of a checkable sort is now unwelcome in mainstream stringy physics, so maybe Zee was joking, and maybe he secretly doesn’t want any progress (unless of course it comes from mainstream string theory).  This suggestion is made because Zee on the same page (p457) adds that the experimentally-based theory of electromagnetic unification (unification of electricity and magnetism) was a failure to achieve its full potential because those physicists: ‘did not possess the mind-set for symmetry.  The old paradigm “experiments -> action -> symmetry” had to be replaced in fundamental physics by the new paradigm “symmetry -> action -> experiments,” the new paradigm being typified by grand unified theory and later by string theory.’  (Emphasis added.)

Problem is, string theory has proved an inedible, stinking turkey (Lunsford both more politely and more memorably calls string ‘a vile and idiotic lie’ which ‘has managed to slough itself along for 20 years, leaving a shiny trail behind it’).  I’ve explained politely why string theory is offensive, insulting, abusive, dictatorial ego-massaging, money-laundering pseudoscience at my domain.

Zee needs to try reading Paul Feyerabend’s book, Against Method.  Science actually works by taking the route that most agrees with nature, regardless of how unorthodox an idea is, or how crazy it superficially looks to the prejudiced who don’t bother to check it objectively before arriving at a conclusion on its merits; ‘science,’ when it does occasionally take the popular route that is a total and complete moronic failure, e.g., mainstream string, temporarily becomes a religion.  String theorists are like fanatical preachers, trying to dictate to the gullible what nature is like ahead of any evidence, the very error Bohr alleged Einstein was making in 1927.  Actually there is a strong connection between the speculative Copenhagen Interpretation propaganda of Bohr in 1927 (Bohr in fact had no solid evidence for his pet theory of metaphysics, while Einstein had every causal law and mechanism of physics on his side; today we all know from high-energy physics that virtual particles are an experimental physics fact and they cause indeterminacy in a simple mechanical way on small distance scales), and string.  Both rely on exactly the same mixture of lies, hype, coercion, ridicule of factual evidence, etc.  Both are religions.
Neither is a science, and no matter how much physically vacuous mathematical obfuscation they use, their failure to cover up the gross incompetence in basic physics remains as perfectly transparent as the Emperor’s new clothes.  Unfortunately, most people see what they are told to see, so this farce of string theory continues.

Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks

Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see previous post for the mechanism and quantitative checked prediction of the strength of gravity).  If you believe string theory, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity.  (Basically, spin-1 gravitons push, while spin-2 gravitons suck.  So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons.  But if you merely want any old theory of quantum gravity that well and truly sucks, you can take your pick from the ‘landscape’ of 10^500 stringy theories of mainstream sucking spin-2 gravitons.)

In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime ‘continuum’ (spacetime fabric). In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core).  As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized rather than a continuous or continuum curved spacetime, there is the problem that the sources of fields, such as discrete units of matter, come in quantized units at locations in spacetime.  General relativity only produces smooth curvature (the acceleration curve in the left hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter by the use of an averaged density to represent the ‘source’ of the gravitational field.

The curvature of the line in the Feynman diagram for general relativity is therefore due to the assumed smoothness of the source of gravity, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space.

Quantum field theory suggests that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass).  However, the nature of the ‘gravitons’ has not been observed, so there are some uncertainties remaining about their nature.  Fig. 1 (which was inspired – in part – by Fig.
3 in Lee Smolin’s Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity: The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there’s no gravitational retardation.  More importantly, it predicts gravity quantitatively, and doesn’t use any ad hoc hypotheses, just experimentally validated facts as input.  I’ve used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity.

I can’t update the more permanent paper on the CERN document server here because as Tony Smith has pointed out, “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …”  The only way you can update a paper on the CERN document server is if it is a mirror copy of one on arXiv; update the arXiv paper and CERN’s mirror copy will be updated.  This is contrary to scientific ethics, whereby the whole point of electronic archives is that corrections and updates should be permissible.  Professor Jacques Distler, who works on string theory and is a member of arXiv’s advisory board, despite being warmly praised by me, still hasn’t even put Lunsford’s published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.

Path integrals of quantum field theory

The path integral for the incorrect spin-2 idea was discussed at the earlier post here, while, as stated, the correct mechanism with accurate predictions confirming it is at the post here.

Let’s now examine the path integral formulation of quantum field theory in more depth.  Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:

‘The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.’

As Fig. 1 shows, according to Feynman, ‘curvature’ is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps.  If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions.  The summation process is what we are about to describe mathematically.  By way of introduction, we can remember the random walk statistics mentioned in the previous post.  If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average distance of xn^(1/2) from the starting point, in a random direction!  The reason why the average distance gone is proportional to the square root of the number of steps is easily understood intuitively, because it is due to diffusion theory.
(If this was not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.)  This result is just a statistical average for a great many drunkard’s walks.  You can derive it statistically, or you can simulate it on a computer, add up the mean distance gone after n steps for lots of random walks, and take the average.  In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur.

Feynman applied this procedure to the principle of least action.  One simple way to illustrate this is the discussion of how light reflects off a mirror.  Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting.  If the angle of incidence were not equal to the angle of reflection, then light would obviously take longer to arrive after being deflected than it actually does (i.e., the sum of lengths of the two congruent sides in an isosceles triangle is smaller than the sum of lengths of two dissimilar sides for a triangle with the same altitude line perpendicular to the reflecting surface).

The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action.  Feynman explains this with path integrals in his book QED (Penguin, 1990).  Physically, path integrals are the mathematical summation of all possibilities.  Feynman crucially discovered that all possibilities have the same magnitude but that the phase or effective direction (argument of the complex number) varies for different paths.  Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.

To get the probability of event y occurring, you first calculate the amplitude for that event.  Then you calculate the path integral for all possible events including event y.  Then you divide the first contribution (that for just event y) by the path integral for all possibilities.  The result of this division is the absolute probability of event y occurring in the probability space of all possible events!  Easy.

Feynman found that the amplitude for any given history is proportional to e^(iS/h-bar), and that the probability is proportional to the square of the modulus (positive value) of the summed amplitude.  Here, S is the action for the history under consideration.

What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won’t correct errors in his books when I email the errors), the particle doesn’t actually travel on all of the paths integrated over in a specific interaction!  What happens is just one interaction, and one path.  The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities.  (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories.  This is just a matter of applying simple calculus!)

However, the nature of Feynman’s path integral does allow a little interaction between nearby paths!
This doesn’t happen with Brownian diffusion!  It is caused by the phase interference of nearby paths, as Feynman explains very carefully in QED.  The Wiki article explains:

‘In the limit of action that is large compared to Planck’s constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.

‘Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it’s going to go. The path integral is one way of understanding why this works. The system doesn’t have to know in advance where it’s going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.’

I think this last bit is badly written: interference is only possible within the ‘small core’ of nearby paths that the photon or other particle actually takes.  The paths which are not taken are not eliminated by interference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.

Similarly, to calculate the probability of a die landing with a particular face up, you need to know how many sides it has.  So on one throw the probability of one particular side landing facing upwards is 1/6 if there are 6 sides per die.  But the fact that the number 6 goes into the calculation doesn’t mean that the die actually arrives with every side facing up.  Similarly, a photon doesn’t arrive along routes where there is perfect cancellation!  No energy goes along such routes, so nothing at all physical travels along any of them.  Those routes are only included in the calculation because they were possibilities, not because they were paths taken.

In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved.  For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected.  The mechanism here is very simple.  Consider the glass before any photon even approaches it.  A normal block of glass is full of electrons in motion and vibrating atoms.  The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration.  Some of the vibration frequencies will be cancelled out by interference.  So the vibration frequencies of the electrons at the surface of the glass are modified in accordance with the thickness of the glass, even before the photon approaches the glass.
This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected.  It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.

The natural frequencies of vibration in a block of glass depend on the size of the block of glass!  These natural frequencies then determine the probability that a photon is reflected.  So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness.  It’s extremely simple.  Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency.  Higher or lower engine frequencies produce less window rattle.  The frequency where the windows shake the most is the natural frequency.  (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations, which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)

The exact way that the precise thickness of a sheet of glass affects the ability of electrons on the surface to reflect light is easily understood by reference to Schroedinger’s original idea of how stationary orbits arise with a wave picture of an electron.  Schroedinger found that where an integer number of wavelengths of the electron fits into the orbit circumference, there is no interference.  But when only a fractional number of wavelengths would fit into that distance, then interference would be caused.  As a result, only quantized orbits were possible in that model, corresponding to Bohr’s quantum mechanics.  In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fit into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency.  However, when only a fractional number of wavelengths fit into the glass thickness, there is destructive interference in the oscillations.  This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency.  (There is also a random element involved, when considering the probability for individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)

Virtual pair-production can be included in path integrals by treating antimatter (such as positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it’s just a mathematical convenience for making calculations).  For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and Applications.  For other standard references, scroll down this page.  For Feynman’s problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.

Feynman was extremely pragmatic.
To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions.  For example, Feynman said:

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.

If you can get the right equations even from a false model, you have done something useful, as Maxwell did.  However, you might still want to search for the correct model, as Feynman explained.  Feynman was referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.

This perturbative expansion is a simple example of the application of path integrals.  There are several ways that the electron can move, each corresponding to a unique Feynman diagram.  The electron can go along a direct path from spacetime location A to spacetime location B.  Alternatively, it can be deflected by a virtual particle en route, and travel by a slightly longer path. Another alternative is that it could be deflected by two virtual particles.  There are, of course, an infinite number of other possibilities.  Each has a unique Feynman diagram, and to calculate the most probable outcome you need to average them all in accordance with Feynman’s rules.

For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) ‘photon’ from a magnet in the simplest way possible.  This is what contributes about 99.88% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments.  The next Feynman diagram is the second highest contributor and accounts for roughly 0.1% of the total.  This correction is the situation evaluated by Schwinger in 1947 and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet.  After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier.  This is odd because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy.
How, physically, can this Feynman diagram explain how the magnetic moment of the electron is increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet?  If this mechanism were the correct story, maybe you’d have a reduced magnetic moment result, not an increase?  Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electromagnetism by being lost during an interaction.  Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all, it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).

There are two possible explanations to this:

1) The Feynman diagram for Schwinger’s correction is physically correct.  The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet.  The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual ‘photon’ that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction.  After the interaction, the electron re-captures the virtual ‘photon’ and is no longer moving towards the magnet.  The blue-shift is the opposite of red-shift.  Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength because the photons have an energy that is directly proportional to their frequency (E = hf).  This mechanism may be correct, and needs further investigation.

2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core.  This mechanism was described in the previous post.  It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.

The relationship of leptons to quarks and the perturbative expansion

As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship.  The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality.  This was first recognised when the lepton beta decay event

muon -> electron + electron antineutrino + muon neutrino

was found to have similar detailed properties to the quark beta decay event

neutron -> proton + electron + electron antineutrino

Nicola Cabibbo used such evidence that quarks are closely related to leptons (I’ve only given one of many examples above) to develop the concept of ‘weak universality’, which involves ‘a similarity in the weak interaction coupling strength between different generations of particles.’  As stated in comment 13 of the previous post, I’m interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y: Q = T + Y/2.
Here Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks).  The minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” with strong (colour) charge and fractional apparent electric charge are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion principle simply says that in a confined system, each particle has a unique set of quantum numbers.)  Peter Woit’s Not Even Wrong summarises what is known in Figure 7.1 on page 93:

‘The picture shows the SU(3) x SU(2) x U(1) transformation properties of the first three generations of fermions in the standard model (the other two generations behave the same way).

‘Under SU(3), the quarks are triplets and the leptons are invariant.

‘Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).

‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’

This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (‘quarks’), whereas SU(2) controls doublets of particles (‘quarks’). But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3) has a different weak hypercharge: +1/3 instead of -2/3!

The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority.

For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks.  First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):

eL -> dL

The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3.  Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units (from -1 to +1/3).

Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units:

eR -> dR

The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case).
The transformation of a right-handed electron to a right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3.  This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark.  So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.

If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks and just consider matter, then the symmetry transformations required to change right-handed neutrinos into right-handed upquarks, and left-handed neutrinos into left-handed upquarks, are:

vL -> uL

vR -> uR

The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2.  We notice that Y gains 4/3 in the transformation, while Q gains 2/3.

The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = 0 becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0.  We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3.  Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid.  So we have accounted for the conversion of the four leptons in one generation of particle physics (left- and right-handed electrons and left- and right-handed neutrinos) into the four quarks in the same generation of particle physics (left- and right-handed versions of two quark flavours).

These transformations are obviously not normal reactions at low energy.  The first two make checkable, falsifiable predictions about unification to replace supersymmetry speculation about the unification of running couplings, the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls).

If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks (tabulated numerically in the short sketch below),

eL -> dL

eR -> dR

vL -> uL

vR -> uR

it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts.  The physical mechanism for the first two of the four transformations is relatively straightforward to picture: try to collide leptons at enormous energy, and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low energy) electric charge, with this shielded energy used instead in short-range weak hypercharge mediated by weak gauge bosons, and colour charges for the strong force.

Because we know exactly how much energy is ‘lost’ from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained in the weak force and seeing if the predictions are accurate.
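As a bookkeeping check on the quantum numbers quoted above, here is a minimal Python sketch (it uses only the Q, T and Y values already stated in this post, nothing new) which verifies Q = T + Y/2 for each particle and tabulates the change in Q and Y for the four suggested lepton-to-quark transformations; every one of them gives ΔQ = +2/3, ΔY = +4/3 and ΔT = 0, i.e. ΔY = 2ΔQ.

```python
from fractions import Fraction as F

# (Q, T, Y) as quoted above, in the Q = T + Y/2 convention.
particles = {
    "eL": (F(-1), F(-1, 2), F(-1)),   "dL": (F(-1, 3), F(-1, 2), F(1, 3)),
    "eR": (F(-1), F(0),     F(-2)),   "dR": (F(-1, 3), F(0),     F(-2, 3)),
    "vL": (F(0),  F(1, 2),  F(-1)),   "uL": (F(2, 3),  F(1, 2),  F(1, 3)),
    "vR": (F(0),  F(0),     F(0)),    "uR": (F(2, 3),  F(0),     F(4, 3)),
}

# Check the charge relation Q = T + Y/2 for every entry.
for name, (Q, T, Y) in particles.items():
    assert Q == T + Y / 2, name

# The four suggested transformations, lepton -> quark.
for lepton, quark in [("eL", "dL"), ("eR", "dR"), ("vL", "uL"), ("vR", "uR")]:
    Q1, T1, Y1 = particles[lepton]
    Q2, T2, Y2 = particles[quark]
    print(f"{lepton} -> {quark}: dQ = {Q2 - Q1}, dY = {Y2 - Y1}, dT = {T2 - T1}")
    # Each line prints dQ = 2/3, dY = 4/3, dT = 0, i.e. dY = 2 dQ.
```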
This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge.  It is also necessary to investigate the possibilities for the transformation of positrons into upquarks.  This issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang by the mechanism suggested for the first two transformations.

However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry.  Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charge.  (Similarly, atoms often appear to be neutral, but if we smash them to pieces, observable electric charges arise.  The apparent electrical neutrality of atoms is a masking effect of the fact that atoms usually carry equal positive and negative charge, which cancel as seen from a distance.  A photon of light similarly carries positive electric field and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can interact with charges.)

Charge is only manifested by way of the field created by a charge, since nobody has ever seen the core of a charged particle, only the field.  A confined field of a given charge is therefore indistinguishable from a charge.  The only reason why an electron appears to be a negative charge is because it has a negative electric field around it.  As shown in Fig. 5 of the previous post, there is a modification necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.

So a ‘neutral’ particle which is neutral because it contains equal amounts of positive and negative electric field may be able to induce electric polarization of the vacuum for the short ranged (uncancelled) electric field.  The range of this effect is obviously limited to the distance between the centre of the positive part of the particle and the centre of the negative part of the particle.  (In the case of a photon for example, this distance is the wavelength.)

If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity mediating gauge boson, and a massive weak neutral-current producing gauge boson.

Let’s try the transformation of a positron into an upquark.  This has two major advantages over the idea that neutrinos are transformed into upquarks.  First, it explains why we don’t observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but it quickly annihilates with matter into gamma rays).  In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter.  The transformation of free positrons into confined upquarks would sort out this problem.  Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron.  If the upquarks come from a transformation of positrons while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.
Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge of the positron to drop from +1 unit for a positron to +2/3 units for upquarks.  This occurs because you get two positive upquarks and one downquark in a proton.  The transformation is

e+L -> uL

The positron on the left hand side has Y = +1, Q = +1 and T = +1/2.  The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2.  Hence, there is a decrease of Y by 2/3, while Q decreases by 1/3.  Hence the amount of change of Y is twice that of Q.  This is impressively identical to the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation eL -> dL.

There are only two ways that quarks can group: in pairs and in triplets or triads.  The pairs of quarks sharing the same polarized vacuum are known as mesons, and mesons are the SU(2) symmetry pairs of left-handed quark and left-handed anti-quark, which both experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right handed neutrino has zero weak hypercharge).  The SU(3) symmetry triplets of quarks are called baryons.

Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so.  This arises from the way the vector bosons gain mass.  In the basic standard model, everything is massless.  Mass is added to the standard model by a separate scalar field (such as that speculatively proposed by Philip Anderson and Peter Higgs and called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass.  The quanta for the scalar mass field are named ‘Higgs bosons’ but these have never been officially observed, and mainstream speculations do not predict the Higgs boson mass unambiguously.

The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass which couple weakly due to the shielding factor of the polarized vacuum around a fermion.  The 91 GeV is the energy equivalent of the rest mass of the uncharged neutral weak gauge boson, the Z.

The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively.  The problem is located in the electroweak SU(2)xU(1) symmetry.  Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity).  So you end up learning, say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey.  This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations: Feynman’s analogy of the chess game is fairly good.  (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen, without having the complete understanding necessary for confident explanation of what the game is about.
Then make do by teaching the better known predictive rules, which are technical and accurate, but don’t always convey a complete understanding of the big picture.)

A serious problem with the U(1) symmetry is that you can’t really ever get single leptons in nature.  They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1); examples: in beta decay, you get a beta particle and an antineutrino, while in pair production you may get a positron and an electron.

This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post.  Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces).  Because a version of SU(2) with massless gauge bosons has infinite range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive gauge bosons is ideal for describing the short range weak force in left-handed quark-antiquark pairs (mesons).

The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism).  The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field.  Presumably the combined Higgs boson when coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that in the absence of Higgs bosons they can interact equally with left and right handed particles.

To summarise, quarks are probably electrons and antielectrons (positrons) with the symmetry transformation modifications you get from close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short range interactions).  Downquarks are electrons trapped in mesons (pairs of quarks containing quark-antiquark, bound together by the SU(2) weak nuclear force, so they have short lifetimes and undergo beta radioactive decay) or baryons, which are triplets of quarks bound by the SU(3) strong nuclear force.  The confinement of electrons in a small space reduces their electric charge because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum, which shields part of the electric field.  Because this shielding effect is boosted, the electron charge per electron observed at long range is reduced to a fraction.  The idealistic model is 3 electrons confined in close proximity, giving a polarized vacuum 3 times stronger, which reduces the observable charge per electron by a factor of 3, giving the e/3 magnitude of the downquark charge.  This is a bit too simplistic of course because in reality you get mainly stable combinations like protons (2 upquarks and 1 downquark).  The energy lost from the electric charge, due to the absorption in the polarized vacuum, powers short-ranged nuclear forces which bind the quarks in mesons and baryons together.

Upquarks would seem to be trapped positrons.
This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus.  So one complete hydrogen atom is formed by 2 electrons and 2 positrons.  This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks.  Only particles with left-handed Weyl spin undergo weak force interactions.

Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R is a right handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right handed version presumably does not couple to massive bosons. The result here is that the right handed version, SU(2)R, produces only mass-less particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces.

If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier. The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don’t acquire mass at low energy. This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked (I don’t think it has been checked at all yet)? Presumably this is linked to ongoing efforts to see evidence for a Higgs boson.

The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell’s equations at low energy, and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.

Spherical symmetry of Hubble recession

I’d like to thank Bee and others at the Backreaction blog for patiently explaining to me that a statement that radial distance elements are equal for the Hubble recession in all directions around us,

H = dv/dr = dv/dx = dv/dy = dv/dz,

t (age of universe) = 1/H = dr/dv = dx/dv = dy/dv = dz/dv,

dv/H = dr = dx = dy = dz,

for spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally ‘wrong’ to people who are unaccustomed to cosmology, where the elementary equations for spherical geometry and metrics in non-symmetric spatial dimensions don’t apply.  Hopefully, ‘critics’ will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.
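As a purely numerical illustration of the equalities above (the Hubble parameter value of 70 km/s/Mpc is an assumed round figure for illustration, not one taken from this post), the following short Python sketch applies v = Hr along the x, y, z and an arbitrary radial direction and confirms that the gradient dv/d(distance) is the same H in every direction; it also evaluates the two age estimates 1/H and (2/3)/H discussed below.

```python
import numpy as np

# Assumed Hubble parameter for illustration: 70 km/s/Mpc, converted to SI units.
Mpc = 3.0857e22            # metres per megaparsec
H = 70e3 / Mpc             # s^-1

directions = {
    "x": np.array([1.0, 0.0, 0.0]),
    "y": np.array([0.0, 1.0, 0.0]),
    "z": np.array([0.0, 0.0, 1.0]),
    "r (arbitrary)": np.array([1.0, 2.0, 2.0]) / 3.0,  # any unit vector will do
}

# Recession speed v = H * distance, so dv/d(distance) is H whichever way we look.
d1, d2 = 1.0 * Mpc, 1.001 * Mpc
for name, unit in directions.items():
    v1 = H * np.linalg.norm(d1 * unit)
    v2 = H * np.linalg.norm(d2 * unit)
    print(name, (v2 - v1) / (d2 - d1) / H)      # prints 1.0 for every direction

# The two age estimates mentioned in the text, in billions of years:
Gyr = 3.156e16                                   # seconds per gigayear (approximately)
print("1/H     =", 1 / H / Gyr, "Gyr")           # about 14.0 Gyr
print("(2/3)/H =", (2 / 3) / (H * Gyr), "Gyr")   # about 9.3 Gyr
```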
For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dz is total rubbish, and is ‘disproved’ by the existence of metrics and non-symmetrical spherical geometrical equations.  They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observe around us.

Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe.  If the universe were at critical density and being gravitationally slowed down, with no cosmological constant to offset this gravity effect by providing a repulsive long range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H, where 2/3 is the compensation factor for gravitational retardation.

However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called ‘dark energy’ giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not), which is substantially more predictive and less ad hoc, since it was predicted via Electronics World Oct. 1996, years before being confirmed by observation (see comment 11 on the previous post).

Therefore, let’s use 1/H as the age of the universe, time!  Then we find that 1/H = dr/dv = dx/dv = dy/dv = dz/dv, which proves that dr/dv = dx/dv = dy/dv = dz/dv.

Now multiply this out by dv, and what do you get?  You get: dr = dx = dy = dz.

As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of numerators means that the denominators are similarly equal: dr = dx = dy = dz.  This is fact, not an opinion or guess.

Fig. 2: Illustration of the reason why the Hubble law H = dv/dr = dv/dx = dv/dy = dv/dz, where because of the isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv, so the denominators are all equal to each other too: dr = dx = dy = dz.  Beware everyone, this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)!

So if your textbook has a formula which ‘contradicts’ dr = dx = dy = dz, or if you think that dr = dx = dy = dz should in your opinion be replaced by a metric with the squares of line elements all added up, or with a general formula for spherical geometry which applies to situations where the recession would vary with direction, then you are wrong.  As one commentator on this blog has said (I don’t agree with most of it), it is true that new ideas which have not been investigated before often look ‘silly’.  People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them, are not “critics”.
They are not criticising the work, instead they are criticising their own misunderstandings.  So any ridicule and character assassinations resulting should be taken with a large pinch of salt.  It’s best to try to see the funny side when this occurs!

One of the very interesting things about dr = dx = dy = dz is what you get for time dimensions, because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there are equivalent time-like dimensions for each of the spatial dimensions.  This makes spacetime easier to understand and allows a new unification scheme!  The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical distances in light-travel-time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter.  Surely this contradicts general relativity?  No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square.  To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:

(dr)/c = (dx)/c = (dy)/c = (dz)/c

which can be written as:

dt_r = dt_x = dt_y = dt_z

So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal!  This is why we only need one time to describe the expansion of the universe.  If the Hubble expansion rate was found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions.  Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic!  This is quite a surprising result, as some hostility to this new idea from traditionalists shows.

But the three time dimensions which are usually hidden by this isotropy are vitally important!  Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need.  It was censored off arXiv after being published in a peer-reviewed physics journal, “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, which can be downloaded here.  The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions describing matter are bound together but are contractible in general relativity.  For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.
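That 1.5 mm figure is easy to reproduce numerically. Here is a minimal Python check of the (1/3)GM/c² contraction quoted here and derived again in the section below, using only the standard values of G, the Earth’s mass and the speed of light:

```python
# Gravitational contraction of the Earth's radius, (1/3)GM/c^2, from standard constants.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s

GM_over_c2 = G * M / c**2
print("GM/c^2      =", GM_over_c2 * 1e3, "mm")      # about 4.4 mm (quoted as ~4.5 mm below)
print("(1/3)GM/c^2 =", GM_over_c2 / 3 * 1e3, "mm")  # about 1.5 mm, the contraction of the Earth's radius
```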
In addition, as was shown in detail in the previous post, this sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get an acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity: ‘To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v·dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR·d(HR)/dR = H²R.’ Deriving the relationship between the FitzGerald contraction and the gravitational contraction: Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 – v²/c²)^(1/2), gravity contracts lengths by the amount (1/3)MG/c² = 1.5 mm for the contraction of Earth’s radius by gravity. It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in ‘special relativity’ (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity. To start with, recall Dr Love’s derivation of Kepler’s law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post. This is very simple.  If a body’s average velocity in space (outside the atmosphere) is just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly.  If it is just under that velocity, it will eventually fall back to Earth and so it will not orbit endlessly, just as is the case if the average velocity is too high.  Like Goldilocks and the porridge, it is very fussy. The average orbital velocity must exactly match the escape velocity – and be neither more nor less than the escape velocity – in order to achieve a stable orbit. Dr Love points out the consequences: a body in orbit must have an average velocity equal to the escape velocity v = (2GM/r)^(1/2), which implies that its kinetic energy must be equal to its gravitational potential energy: kinetic energy, E = (1/2)mv² = (1/2)m[(2GM/r)^(1/2)]² = mMG/r. This permits him to derive Kepler’s law.  It is also very important because it explains the relationship for stability of orbits: average kinetic energy = gravitational potential energy. Einstein’s equivalence of inertial and gravitational mass in E = mc² then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy to derive the equivalence principle of general relativity, which states that the inertial mass is equal to the gravitational mass, at least for orbiting bodies.  Another physically justified argument is that the gravitational potential energy is the gravity energy that would be released in the case of collapse.  If you allowed the object to fall and thereby pick up that gravitational potential energy, the latter energy would be converted into kinetic energy of the object.  This is why the two energies are equivalent.  It’s a rigorous argument! Now test it further.  
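Before doing that, both steps in the argument so far are easy to check symbolically. Here is a rough sketch using sympy (my own check, not code from the post), verifying that a = v·dv/dR with v = HR gives H²R, and that the kinetic energy at the escape velocity equals the magnitude of the gravitational potential energy:

    # Symbolic sanity checks of the two relations used above.
    import sympy as sp

    H, R, G, M, m, r = sp.symbols('H R G M m r', positive=True)

    # Acceleration implied by the Hubble law: a = v * dv/dR with v = H*R
    v = H * R
    a = v * sp.diff(v, R)
    print(a)                                   # prints H**2*R

    # Kinetic energy at escape velocity vs. gravitational potential energy
    v_esc = sp.sqrt(2 * G * M / r)
    KE = sp.Rational(1, 2) * m * v_esc**2
    print(sp.simplify(KE - G * M * m / r))     # prints 0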
Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity v, whereby objects are compressed by the ratio (1 – v²/c²)^(1/2). Using the equivalence of average kinetic energy to gravitational potential energy, you can place the escape velocity v = (2GM/r)^(1/2) into the contraction formula, and expand the result to two terms using the binomial expansion.  You find that the radius of a gravitational mass would be reduced by the amount GM/c² = 4.5 mm for the Earth’s radius, which is three times as big as Feynman’s formula for the gravitational compression of Earth’s radius.  The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (the direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius of (1/3)GM/c² = 1.5 mm!  (There is also a rigorous mathematical discussion of this on the page here, if you have the time to scroll down and find it.) Unusually, Feynman makes a confused mess of this effect in the relevant volume of the Lectures on Physics, chapter 42, page 6, where he correctly gives his equation 42.3 for the excess radius as being equal to the predicted radius minus the measured radius (i.e., he claims that the predicted radius is the bigger one in the equation), but then on the same page in the text falsely and confusingly writes: ‘… actual radius exceeded the predicted radius …’ (i.e., he claims in the text that the predicted radius is the smaller). Professor Jacques Distler’s philosophical and mathematical genius: ‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, Musings blog post on the Role of Rigour. Jacques also summarises the issues for theoretical physics clearly in a comment there: 1. ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.’ 2. ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.’ 3. ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’
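Returning to the expansion step used in the contraction argument above, here is a quick symbolic check (a sketch of my own, assuming the escape velocity is simply substituted into the FitzGerald-Lorentz factor):

    # Expand (1 - v^2/c^2)^(1/2) with v^2 = 2GM/r to first order in G.
    import sympy as sp

    G, M, r, c = sp.symbols('G M r c', positive=True)

    factor = sp.sqrt(1 - 2 * G * M / (r * c**2))
    print(sp.series(factor, G, 0, 2).removeO())    # first-order result: 1 - G*M/(c**2*r)

    # So a length r is shortened by roughly r * GM/(r*c^2) = GM/c^2,
    # before the factor-of-3 sharing argument is applied.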
Copyright © 2003 jsd Models and Pictures of Atomic Wavefunctions 1  Introduction There are various ways of modeling and/or depicting atomic wavefunctions. You can use waves in a pool of water, as discussed in section 2. You can also use Chladni patterns and waves on a string, as discussed in section 7. You can also use pictures, including animated pictures and animated scatter plots, as discussed in section 9, section 13, and section 14. Many other folks have done animations over the years, including reference 1. Simple mathematics and simple models represent some (but not all!) of what happens in real atoms, as discussed in section 5 and section 6. The term “orbital” is often used in this context, but it is somewhat ambiguous; for details on this, see section 8.2. For some general background on how to think about quantum states, see reference 2. *   Contents 2  Stationary States in a Pool of Water 2.1  Version 1: Small Scale In general, the bigger the better, as discussed in section 2.2, but this demo works even on a small scale. You can do it as a classroom demo, with one container per student, or perhaps one for every two students. I have a huge number of plastic tubs, 15 cm in diameter and 12 cm high. Originally, they came from the store with 1.5 lbs of crumbled cheese in them. I wash them and save them, because they are super-handy. If you don’t have such things, you can carry out the wave demo using ordinary bowls. Disposable styrofoam soup bowls are adequate. The cheese tubs are nicer, because they are taller and hence less likely to spill. Fill each container about half way. Add a drop of dish detergent (e.g. Dawn or Lemon Joy) and make a few bubbles, so that it is easy to see what the surface is doing.1 For a tub or dish of water, excite the motion by pushing some smallish solid object up and down, displacing some of the water. I have a collection of empty pill bottles that I use for plungers. 2.2  Version 2: Larger Scales The demo in section 2.1 can be scaled up to larger and larger containers. 1. In a lecture situation or video situation, you can put a round flat-bottomed glass cooking dish or serving dish on the overhead projector. 2. Sometimes you can get a “wading pool” designed for toddlers that is the right size for an indoor demonstration. 3. Somewhat sturdier tubs of a similar size are sometimes sold at discount superstores, intended for holding ice+drinks at parties. 4. Robust tubs intended for feeding and watering livestock are sold at farm-supply stores. 5. The most amusing situation is a large round swimming pool with a radially-symmetric depth profile, shallow enough so that you can stand in the middle. As the container gets larger, you may need a larger plunger, but otherwise you can do all the experiments listed in section 2.1. Soap bubbles are not needed for the larger sizes. For the swimming pool, you use yourself as the plunger. The instructions in this case are slightly different: 2.3  Discussion This is a simple experiment, but it is important for a number of reasons. 1. It shows that the word “wavefunction” actually means something. The words and the mathematics refer to real things, real waves. 2. The terms “s” and “p” tell us something about the symmetry of the wavefunction. 3. The usefulness of spherical harmonics is not restricted to the hydrogen atom. Anything that has waves can be written using the spherical harmonics as basis functions. This is not a new idea; it was introduced by Laplace in 1782. 
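The last point, that anything wavy can be expanded in spherical harmonics, is easy to check numerically. Here is a minimal sketch (my own, not part of this document's demos) that evaluates a couple of spherical harmonics with scipy and verifies the normalization of one of them; note scipy's argument convention sph_harm(m, l, azimuth, polar):

    # Evaluate a few spherical harmonics on a grid and check normalization.
    import numpy as np
    from scipy.special import sph_harm

    polar = np.linspace(0.0, np.pi, 181)          # colatitude
    azimuth = np.linspace(0.0, 2.0 * np.pi, 361)
    PH, TH = np.meshgrid(azimuth, polar)

    Y00 = sph_harm(0, 0, PH, TH)                  # s-like: constant over the sphere
    Y11 = sph_harm(1, 1, PH, TH)                  # p-like: one wavelength around the equator

    # Riemann-sum check: integral of |Y11|^2 over the sphere should be close to 1
    dA = (np.pi / 180) ** 2
    print(np.sum(np.abs(Y11)**2 * np.sin(TH)) * dA)    # approximately 1.0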
In a two-dimensional system such as the water surface, you can construct infinitely many wavefunctions in the |2p⟩ family, but if you have more than two, they will be linearly dependent. For instance, you can readily convince yourself that the |2p+⟩ wavefunction is a superposition of |2px⟩ and |2py⟩ with a particular phase. Similarly the |2p−⟩ wavefunction is a superposition of |2px⟩ and |2py⟩ with the opposite phase.

|2p+⟩ = |2px⟩ + i |2py⟩
|2p−⟩ = |2px⟩ − i |2py⟩

You can easily go the other way, writing |2px⟩ as a superposition of |2p+⟩ and |2p−⟩. That is to say, the set {|2px⟩, |2py⟩} is not the only possible basis. Another perfectly good basis is the set {|2p+⟩, |2p−⟩}. Innumerable other basis sets are possible. This gives us a complete description of the N=2 shell. In two dimensions, there are no other linearly-independent patterns that can be formed with only one node. In three dimensions, there would be one more pattern. The obvious “rectangular” basis in D=3 is {|2s⟩, |2px⟩, |2py⟩, and |2pz⟩}. Another often-useful basis is {|2s⟩, |2p+⟩, |2p−⟩, and |2pz⟩}. As always, there are infinitely many other bases. Recall that for ordinary vectors, such as position vectors or momentum vectors, we need two basis vectors to span a two-dimensional space, and three basis vectors to span a three-dimensional space. The wavefunctions in the N=2 family are vectors in an abstract four-dimensional space. This can also be called a function-space and/or a Hilbert space. Terminology: It is common for people to say that in three dimensions, there are only four orbitals in the N=2 shell (namely one s-orbital and three p-orbitals). This is, alas, an abuse of the terminology. There are infinitely many different possible orbitals, i.e. infinitely many possible wavefunctions in the function-space. The thing that we should be counting is not the number of wavefunctions but rather the dimensionality of the function-space. So when somebody says there are “four orbitals” you have to translate: What they really mean is that four basis wavefunctions suffice to span the space. 3  Ripple Tank There is a tremendous amount to be learned by observing what goes on in a ripple tank. Running waves, interference, diffraction, refraction, reflection, et cetera. Figure 1: Ripple Tank Apparatus at Claremont Figure 2: Ripple Tank Interference Pattern at UIUC 4  Waves on a String 4.1  Basic Setup Another good demo involves waves on a string. You want both the tub-of-water demo and the string demo. They provide complementary information, in the following sense: For the string (to be discussed in this section), the abscissa of the wavefunction is one-dimensional while the ordinate is two-dimensional. The ordinate can be described as having two different polarizations.   For the tub of water (as discussed in section 2), the abscissa of the wavefunction is two-dimensional while the ordinate is one-dimensional. The polarization is not interesting and is usually not mentioned. The genuine quantum wavefunction for a single particle has a three-dimensional abscissa (in real space) and a two-dimensional ordinate (in its own somewhat abstract space). Neither the string model nor the tub-of-water model suffices by itself, but by using both models you can piece together a much more complete picture. Technical details: Ordinary household string is not optimal. You want something heavy as well as flexible. The beaded chain that they use for pulling the switch on overhead lamps is one possibility. 
Helical “telephone cord” is another possibility; the helical design makes it especially flexible. An extra-long Slinky is another possibility; such things are sold just for this purpose. Highly-flexible rope is another possibility. In all cases, longer is better. For simplicity, I will refer to all of these media as “strings” but the word “string” must not be taken literally. It helps to fasten the ends of the string to sturdy supports – perhaps clamp stands or some such – but if you’re in a hurry you can just have students hold the ends. 4.2  Linear Polarization versus Circular Polarization On the string, the polarization vector is two-dimensional. You can start by creating horizontally-polarized waves and contrasting that with vertically-polarized waves. Then you can move on to circularly-polarized waves, such as are seen on a jump-rope. You can launch the waves by hand. If you want to get fancy, you can use an electric egg-beater or a variable-speed electric drill; chuck up some sort of wheel and attach the string off-center. Install a swivel (available from the fishing-supplies store) to decouple the undesired "spin" aka "twist" from the desired "orbital" motion. Choose a rotation rate that is slow enough that people can follow the actual motion with their eyes. A long rope with lots of sag – i.e. no unnecessary tension – will give you relatively many nodes at a relatively low frequency. Figure 3: String Model of |ξ=±1⟩ Standing Wave Figure 3 shows an example of a circularly polarized standing wave on a string. In a one-dimensional situation like this, there is only one quantum number. We will use the spatial frequency ξ to specify the state. This stands in contrast to a three-dimensional atom, where there are three quantum numbers, conventionally N, l, and m. The state in figure 3 is the |ξ=±1⟩ standing wave. It can be constructed as a superposition of |ξ=+1⟩ and |ξ=−1⟩ running waves. It must be emphasized that in the string model, the string is whirling around and around like a jump-rope, not merely up and down; see section 4.4 for an explicit animated depiction of this. The whirling motion gives us a good model of the time-dependence of the phase of the wavefunction, as it rotates in the complex plane. The blue curve shows the wavefunction at an early time, and the black curve shows the same function one quarter-cycle later. Examples: Rotating Ordinate   Examples: Non-Rotating Ordinate The circular polarization of the string makes an important point about the time-dependence of the wavefunctions in an atom, and about the symmetry of the ordinate. Consider a particular point on the string: in a rotating standing wave (as used in a jump-rope game) the chosen point remains at a fixed distance from the axis as it goes around and around. The wavefunction can go around from +X to +Y to −X to −Y and back to +X without ever crossing through zero.   In the tub of water, the polarization vector is uninteresting because it is one-dimensional; a one-dimensional vector looks a lot like a scalar. That is, the water moves from “up” to “down” or equivalently from “plus” to “minus”. Also note that a scalar cannot go from plus to minus without crossing through zero. As such, the water is not a good model of quantum mechanics. Sometimes you can label the lobes of a QM wavefunction as “plus” and “minus” – but sometimes you can’t ... and even if you can such labels are likely to be misunderstood. 
It is true that at any given time, each lobe of the |ξ=±1⟩ wavefunction in figure 3 is -1 times the other lobe. However, this is not a complete description, and runs the risk of being misunderstood.   In ordinary two-phase house wiring, the black phase is -1 times the red phase and vice versa. Unlike quantum mechanical wavefunctions, the voltages do not go around and around like a jump-rope; each phase simply goes up and down, crossing through zero twice per cycle. Any discussion of the wavefunction in terms of “+” and “−” is not just misleading, it is also very incomplete, because it cannot even begin to describe what is going on in an atomic |2p+⟩ or |2p−⟩ orbital. For more on this, see section 13. Note: In quantum mechanics – and everywhere else – a two-component vector can be represented as a complex number (and vice versa). You can use either representation, or both, at your convenience. 4.3  Standing Wave versus Running Wave The string can demonstrate standing waves as well as running waves. Figure 4 shows an animation of a running wave packet. The packet can be described as a sinusoidal carrier modulated by a Gaussian envelope. Figure 4: Running Wave : One-Dimensional Ordinate Figure 5 shows a snapshot of the |ξ=+2⟩ running wave on a string. As mentioned in section 4.2, at any particular location, the string is whirling around and around like a jump-rope (not merely up and down). The blue curve shows the wavefunction at an early time, and the black curve shows the same function one quarter-cycle later. See section 4.4 for an explicit animated depiction of the whirling motion. Figure 5: String Model of |ξ=+2⟩ Running Wave Note that unlike in figure 3, there are no nodes in figure 5. The ordinate of the wavefunction never goes to zero. Indeed it never goes anywhere near zero. Also note that unlike in figure 3, there is a definite direction of travel in figure 5. The wave works like an Archimedes screw, carrying energy and momentum in the direction marked “Position” in the figure. The time-dependence of the wavefunction (radians per unit time) tells us about the energy density (ℏω), while the space-dependence (radians per unit length) tells us about the momentum density (ℏk). The ratio dω/dk tells us about the velocity. (Note: as always, the wavenumber is given by k = 2πξ.) Even though the wavefunction in figure 5 has the same magnitude at all times, it is still a running wave, not a standing wave. Such a wave carries a steady flow of energy and momentum. The ordinate is a vector, and looking at its magnitude doesn’t tell you everything you need to know. There are things – such as the momentum operator – that act on the vector as a whole, not on the magnitude. The momentum operator is relevant here, because the things I’m calling running waves (figure 5) have nonzero momentum, while the standing waves (figure 3) do not. For yet another model of running waves, see section 13. 4.4  Animation Comparing Standing Waves to Running Waves The message of figure 3 and figure 5 can be made easier to grasp with the help of interactive animated computer graphics. In the middle row of the table, note that the |ξ=−1⟩ wavefunction has a negative spatial frequency. It has the symmetry of a left-handed screw, whereas positive frequencies have the symmetry of a right-handed screw. Some people may be able to look at these diagrams and learn all there is to learn that way. However, for most people I recommend doing the actual experiment. Get a friend and a rope and do the experiment. 
Treat these diagrams merely as instructions for what to do and what to look for. Among other things, you will discover that the running wave “feels” different from the standing wave. The person at one end of the running wave is doing positive work, while the person at the other end is doing negative work. The computer animations are simpler and cleaner than experiments with a real rope. Among other things, the real rope is affected by gravity, by centrifugal force, and by aerodynamic drag in ways that make things more complex than one might like. This one-dimensional model (using rope or computer) is in some ways analogous to what goes on in an ordinary three-dimensional atom ... and in some ways not. The analogy is however indirect and quite abstract, and the details are beyond the scope of the present discussion. In other words, don’t worry about it. More importantly, though, there are one-dimensional systems in nature. For example, in many cases, to a first approximation, a dye molecule can be considered a short one-dimensional electrically-conducting wire. The models presented in this section give an excellent description of the wavefunctions for electrons on such a wire. These are in some sense four-dimensional diagrams: They show the real and imaginary parts of the ordinate of the wavefunction, as a function of one spatial coordinate (x) and as a function of time. Time is represented by time itself, via the animation. Note that the true physics of a hydrogenic atom is six-dimensional; for simplicity we are restricting attention to situations where the spatial y and z dependence is irrelevant. 5  Wave Mechanics If you know a little about wave mechanics, we can use it to shed some additional light on what’s going on. Otherwise you can skip to section 6. Let’s consider what the wavefunction looks like right near the edge of the pool. Using polar coordinates (r, φ) in the plane, we can ask about the wavelength in the azimuthal direction, i.e. the dφ direction, i.e. the direction that goes around the circumference. One complete trip around the circumference must correspond to an integral number of wavelengths; otherwise the wavefunction would not be single-valued, i.e. it would not be a function at all. The defining property of the |p⟩ family of wavefunctions is that they have one wavelength around the circumference (while |d⟩ wavefunctions have two, |f⟩ wavefunctions have three, et cetera). The wave equation gives us two solutions (leftward and rightward propagation, or in this case clockwise and counterclockwise) so there must be exactly two |2p⟩ wavefunctions in two dimensions. We know this just by counting, plus an appeal to symmetry. Very roughly speaking, you can think about the angular dependence of the |2p+⟩ and |2p−⟩ wavefunctions in terms of the Bohr model, i.e. electrons orbiting around the nucleus like planets around the sun. This is wrong in general, but it sometimes serves as a rough first approximation. The approximation gets better and better as we consider waves where more and more wavelengths fit into one trip around the circumference, i.e. as we move up the series |2p⟩, |3d⟩, |4f⟩, et cetera. Note that for each shell (i.e. each principal quantum number) we are talking about the wavefunction with the highest possible angular momentum. These are called Rydberg atoms and have been the subject of intensive study, theoretically and experimentally. Let us now switch attention to the radial direction, i.e. the dR direction. 
We can understand the |2s⟩ wavefunction as a standing wave, constructed from a radially-outbound running wave that reflects off the edge of the pool and returns as a radially-inbound running wave. Combining the two running waves gives us a standing wave with one node. Switching back now from pools (two dimensions) to atoms (three dimensions), we have identified four possible wavefunctions the form a basis for the N=2 shell: |2s⟩, |2px⟩, |2py⟩, and |2pz⟩. No matter what basis we choose, there cannot be more than four basis functions when N=2. It’s just geometry and symmetry and counting. Then the electron spin gives us double occupation of each orbital. That makes eight. Anything else is linearly dependent, or involves a different shell (i.e. different principal quantum number). This is the fundamental basis for the octet rule as it pertains to individual atoms in the second row of the periodic table, in particular to their ionization potentials and electron affinities. There is something special about having eight electrons around an individual atom. (This must not be taken as an endorsement of anything resembling an octet rule for molecules; see reference 3 for details.) Maybe you’re not convinced. Maybe you think there ought to be another wavefunction just like |2s⟩ but a little bit different, having one node (so it belongs to the N=2 family) but somehow different, having the node in a different place. Well, sorry, it can’t be done in a high-Q resonant system such as this. (See section 12 for details.) If you attempt it, you won’t be able to satisfy the boundary conditions. Recall we said the |2s⟩ wavefunction could be constructed from a running wave that reflects off the wall of the pool and returns. That only works with one very specific wavelength. If you try it with a slightly different wavelength, it will come back with the wrong phase. The phase errors will accumulate with every bounce. Over the long haul you will get a superposition of waves with all possible phases, which adds up to zero. This is physics: basic wave mechanics. Or you could call it mathematics: Sturm-Liouville theory and all that. The previous paragraph pretty much answers the question of why atoms have discrete shells. You can’t have something that is halfway between shell N=2 and shell N=3. If you try, you won’t be able to satisfy the boundary conditions. 6  Beyond Neon The picture described so far works for low-numbered atoms, up to and including the second row of the periodic table. When we consider higher atomic numbers Z (anything beyond neon) and hence higher shells N (the third row of the periodic table and beyond), we need to think much more carefully about the relationship between mathematics and atomic physics. In particular, observation tells us that in terms of physics, i.e. in terms of energy, that shells get filled out of order relative to the naïve mathematical numbering. We are talking about really basic observations here, starting with the existence of transition metals. The physics is as follows: The electron’s kinetic energy depends on the curvature of the wavefunction. A high-N wavefunction in a small region will have lots of curvature, hence lots of kinetic energy. A high-N wavefunction far from the nucleus has an unfavorable potential energy. A high-N wavefunction near the nucleus has an unfavorable kinetic energy. Therefore we expect the small-N shells to fill up first. For high-Z atoms, once the small-N shells are filled up, things get very complicated. 
Once we start filling the high-N shells, things proceed in a somewhat peculiar order. This produces transition metals among other things. Hund’s rules and all that. The first row is easy: There is only one wavefunction, the |1s⟩ wavefunction. It just sits there. No nodes. No dynamics. Electron spin means we can have two electrons in this orbital. So the first row has two members and ends at helium, Z=2. The second row has eight members and ends when we have filled the N=2 shell (on top of the N=1 shell), namely neon, Z=10. So far so good. Things get quite a bit more interesting when we get to the third row. The observed fact is that the third row has eight members and is complete at argon, Z=18. Here is where we must explain the difference between a chemistry-shell and a mathematics-shell. The N=3 chemistry-shell (also called valence-shell) is complete when we have filled the |3s⟩ and |3p⟩ wavefunctions (namely argon) ... but at this point the N=3 mathematics-shell is far from complete, because the |3d⟩ wavefunctions haven’t been touched. There are some things we know about math, some things we know about physics, and some things we know about chemistry. The point here is that mathematics by itself will not correctly explain the chemistry when N=3 or beyond. Physics is needed. A closely related point is that jumping up and down in the pool accurately tells you certain things about the atomic wavefunctions, such as the symmetry of the wavefunctions and the dimensionality of the function-space – but it will not accurately tell you the energy thereof. The key to understanding the third row of the periodic table is this: the |3d⟩ electrons have a higher energy than the |3s⟩ and |3p⟩ electrons. The |3d⟩ electrons are members in good standing of the N=3 mathematics-shell, but they don’t2 contribute to the N=3 chemistry-shell, because they are energetically unfavorable. So let’s try to figure out why they have a higher energy. At this point the usual glib explanation is to say that the |3d⟩ wavefunctions have a node at the origin, so the |3d⟩ electrons don’t spend enough time near the nucleus and accordingly have an unfavorable potential energy. The problem is, if you believe that argument, you would predict that beryllium would be a noble gas, because the |2p⟩ wavefunctions also have a node at the origin, so you would think the |2p⟩ electrons would be disfavored3 compared to the |2s⟩ electrons. We need a better argument. We need to consider more than just the node at the origin. We need to consider what happens in the neighborhood of the origin. For a p-wave, if you move away from the origin, you pick up electron amplitude to first order. For a d-wave, you only pick up amplitude to second order. You have to go a lot farther to get significant amplitude. Also note that the nucleus is heavily screened by the electrons in the lower-N shells, so it’s not simply a question of how close you can get to the nucleus, but rather a question of whether you can get inside the inner shells, i.e. inside the screening. You can set up some |3d⟩ wavefunctions in the pool of water. The easiest one has the symmetry

+ + - -
+ + - -
- - + +
- - + +

and has two nodes, straight lines that cross in the middle. The water is fairly quiet in a fairly good-size region near the middle. To summarize: the key idea is that the |3d⟩ wavefunctions don’t sufficiently get inside the screening, so they have an unfavorable potential energy. Argon would almost always prefer to be inert rather than to react using a |3d⟩ wavefunction. 
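The first-order-versus-second-order claim about p-waves and d-waves near the origin is easy to see numerically from the standard hydrogenic radial functions, which behave like r^l at small r. Here is a rough sketch (plain hydrogenic functions in units of the Bohr radius, ignoring screening, so it only illustrates the r^l behaviour, not the full screening argument):

    # Near the origin, R_nl ~ r^l: compare 3p (l=1) with 3d (l=2) at small r.
    import numpy as np
    from math import factorial
    from scipy.special import genlaguerre

    def R_nl(n, l, r):
        """Hydrogenic radial wavefunction, atomic units (a0 = 1)."""
        rho = 2.0 * r / n
        norm = np.sqrt((2.0 / n)**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
        return norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

    for r in (0.01, 0.1):
        print(r, R_nl(3, 1, r), R_nl(3, 2, r))   # the 3p amplitude dwarfs the 3d amplitude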
The same is essentially true of other third-row atoms ... although the |3d⟩ wavefunctions can’t be dismissed entirely, as discussed in connection with SF4 below. Not only do the |3d⟩ wavefunctions have high energy compared to the |3p⟩ wavefunctions, they even lose out to the |4s⟩ wavefunction in potassium and calcium. But not by much. The |3d⟩ subshell is competitive with |4p⟩, which is roughly why the ten fourth-row transition metals are where they are in the periodic table, between calcium (where filling the |4s⟩ subshell is completed) and gallium (where filling the |4p⟩ subshell begins). This placement does not, however, mean that |3d⟩ is necessarily filled before |4p⟩ is begun. Atoms in this part of the table can change their valence by shifting electrons back and forth between |3d⟩ and |4p⟩. 7  Other Mechanical Wave Models You can set up standing waves on a metal plate. For present purposes, it’s appropriate to choose a round flat plate, supported at the center. Excite it by bowing. Make a heavy-duty bow using the frame of a hacksaw or pruning saw, plus high-test kernmantel fishing line or weed-trimmer line. Put rosin on the bowstring ... it’s just like bowing a violin. If you put powder on the plate, it will move to the nodes and remain there, making the node pattern visible. This is called a Chladni pattern. 8  Discussion 8.1  Hydrogenic Eigenfunctions In a series of four papers published in 1926, Schrödinger presented the Schrödinger equation, and also solved it to find the stationary states – the energy eigenstates – in the special case of a spherically-symmetric potential. See reference 4. There is a separation of variables, such that the solution can be written as a product Rn(r) Ylm(θ, φ), where Rn(r) is a purely-radial function and Ylm(θ, φ) is a purely angular function. The angular part is just a spherical harmonic, as discussed in section 14. This solution is very nearly a solution for the electron wavefunction in a hydrogen atom. It is not quite exact, because the spin of the proton and electron introduces a small magnetic interaction that makes the problem not quite spherically symmetric. It is traditional to ignore this slight nonideality and call these solutions the hydrogenic eigenfunctions. There is a general rule (from Sturm-Liouville theory) that says a complete set of eigenfunctions can be used as a basis, and any solution can be written as a superposition of these basis functions. Let’s be clear about one thing: The hydrogenic basis functions are not the only solutions to the Schrödinger equation. They are not even the only possible basis set. You can choose any basis you like. In each basis set, there are countably many basis functions. You can then write uncountably many solutions, each of which is a superposition of basis functions. We are faced with two incompatible ideas: we can describe the atom in terms of the position of the electron, or in terms of its spectroscopic quantum numbers (n, l, m). These two descriptions are incompatible in the Heisenberg sense. For any given atom, any attempt to ascertain the position will randomize future measurements of the spectroscopic quantum numbers (n, l, m), and any attempt to ascertain those quantum numbers will randomize the future position. This isn’t as much of a problem as it could be, if we have a large supply of identically-prepared atoms. We can measure one atom, throw it away, measure another atom and throw it away, and so forth. By collecting enough such measurements, we can gradually work out what positions are consistent with which spectroscopic quantum numbers (or vice versa). A program that does this is discussed in section 9. 
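The measure-and-discard procedure just described is easy to simulate. Here is a minimal sketch in Python (my own illustration, not the applet from section 9), which draws positions from the |1s⟩ probability density, |psi|^2 proportional to exp(-2r) in units of the Bohr radius, by rejection sampling:

    # Sample electron positions from the |1s> density by rejection sampling.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_1s(n_points, box=6.0):
        pts = np.empty((0, 3))
        while len(pts) < n_points:
            xyz = rng.uniform(-box, box, size=(200_000, 3))      # candidate points
            r = np.linalg.norm(xyz, axis=1)
            keep = rng.uniform(size=len(r)) < np.exp(-2.0 * r)   # accept with prob |psi|^2 / max|psi|^2
            pts = np.vstack([pts, xyz[keep]])
        return pts[:n_points]

    cloud = sample_1s(10_000)
    print(np.linalg.norm(cloud, axis=1).mean())   # mean radius, close to 1.5 Bohr radii for 1s

Swapping in the |2s⟩ or |2px⟩ densities would mimic the other buttons of the applet, at the cost of a slightly more complicated acceptance function.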
So far, we have explained how the spectroscopic quantum numbers (n, l, m, s) apply to an atom with only a single electron. The remarkable thing is that the same general ideas and much of the terminology can be extended to multi-electron atoms. The resulting wavefunctions won’t be exactly the same, but they will be sufficiently similar that we can use the same terminology. In particular, the solutions will have the same symmetry. As an example, let’s compare the |2s⟩ electron in lithium to the |2s⟩ excited state in hydrogen. The radial part of the wavefunction Rn(r) will differ as to details, but in both cases it will have n−1 nodes (i.e. 1 node, since n=2 in this example). Roughly speaking, this is called the independent electron approximation – but beware there are several different approximations that go by that name. 8.2  Orbitals The word “orbital” is often used, especially in the chemistry literature, but it is somewhat ambiguous. 1. Sometimes “orbital” is slang for “energy level”. This is particularly clear in expressions such as HOMO and LUMO (Highest Occupied Molecular Orbital and Lowest Unoccupied Molecular Orbital). This is awkward because a level is not the same thing as a state, especially if there is degeneracy or near-degeneracy involved. 2. In some situations, “orbital” is essentially synonymous with “wavefunction”. It fully describes whatever is going on with the actual electron(s). 3. Sometimes, in a single-electron atom, “orbital” is used more narrowly, referring to a basis wavefunction. The full atomic wavefunction is, in general, a weighted sum of these basis functions. 4. Sometimes, in a multi-electron atom, “orbital” refers to single-electron basis-forming wavefunctions. The overall atomic wavefunction is, in general, a weighted sum of products of these basis-forming functions, as discussed below, in connection with equation 3. 5. Sometimes “orbital” refers only to basis-forming functions that correspond to stationary states. This would exclude things like sp3 hybrid wavefunctions, except in cases where those happen to be energy eigenfunctions. I don’t recommend such a narrow definition, because the choice of basis is a choice. You can choose whatever basis you like, but others may choose differently. The choice is just as arbitrary in quantum mechanics as it is in introductory vector analysis. Energy eigenfunctions are not the only possible functions ... or even the only possible basis functions. You can use the sp3 functions as basis states if you want, whether or not they are stationary states. 6. Sometimes “orbital” is used even more narrowly, referring only to the original hydrogenic eigenfunctions ... not just the same symmetry, but the exact same functions, copied from the solution for the ideal hydrogen atom. In the rare situations where the distinction matters, you can probably figure it out from context. Note that to solve the equation of motion for a two-electron atom, the solution must be a function of eight variables. We can write something like Ψ(x1, y1, z1, s1, x2, y2, z2, s2), where x, y, and z are external (spatial) variables and s is an internal (spin) variable. In contrast, a single-electron wavefunction such as φ(x, y, z, s) cannot – by itself – solve the equation of motion for a multi-electron atom. By itself, it does not even have the right functional form. By itself, it is not even a basis function for the multi-electron atom, because we cannot write the solution as a sum of single-electron orbitals. 
That is, we cannot write

Ψ(x1, y1, z1, s1, x2, y2, z2, s2) = φ(x1, y1, z1, s1) + ξ(x2, y2, z2, s2) + ⋯

or anything like that. Instead, the simplest thing that makes sense is a sum of products:

Ψ(x1, y1, z1, s1, x2, y2, z2, s2) = φ(x1, y1, z1, s1) ξ(x2, y2, z2, s2) + ⋯

To repeat: In equation 3, the basis functions are not single-electron orbitals, but rather products of such orbitals. This sort of sum-of-products representation is tremendously useful, for the following reason: It is relatively easy to draw a picture in two dimensions. Visualizing the structure of a fully three-dimensional object is much more difficult. Visualizing something in an abstract six-dimensional (or eight-dimensional) space is virtually impossible for most people. The physics does not require you to write the wavefunction as a sum of products, but you can understand why people are usually happier talking about single-electron orbitals rather than the full multi-electron wavefunctions. Constructive suggestion: A dye molecule can be roughly approximated as a short wire, extending in one dimension only. The one-electron orbitals in such a system are functions of one variable. A two-electron wavefunction can be written as a sum of products, where each term is two-dimensional, i.e. a function of two variables. That’s something we can draw pictures of. An interesting example of this can be found in reference 5. You can’t live in a brick, but you can live in a house made of bricks. Similarly, you cannot solve the multi-electron equation of motion using a one-electron orbital by itself, but you can solve it using a wavefunction made of a sum of products of such orbitals. 9  Animation: Scatter Plot of Electron Probability I cobbled up a javascript applet that collects data from 10,000 simulated atoms to demonstrate how position data can be extracted from hydrogenic eigenfunctions, for specified spectroscopic quantum numbers. At present, the applet only deals with the |1s⟩, |2s⟩, and |2px⟩ basis wavefunctions.4 Push the appropriate Go button. Note that this uses Javascript as opposed to Java, which means there are far fewer security issues. This applet makes somewhat aggressive use of new Javascript language features. If your browser does not support these features, instead use the older stand-alone Java version in reference 6. You can download the source file, read it to verify that the Java cannot possibly do anything nasty, compile it, and then run it. The scale bar in the lower left corner has length a0, where a0 is the Bohr radius, namely

a0 = 4π ε0 ℏ² / (me e²) = ℏ / (me c α) ≈ 0.053 nm, i.e. about half an Ångstrom.

The scale bar gradually turns from red to black, serving as a progress meter. Credit: The idea of using an animated scatter plot to show the probability density for an atomic wavefunction is an oldie but a goodie. I got it from a film somebody (possibly PSSC?) made in the 1960s, back when using computers to make educational animations was a lot more exotic than it is now. Chemistry straddles the quantum/classical boundary: With rare exceptions, it is possible and useful to make classical ball-and-stick models of what atoms and/or ion cores are doing.   It is never possible to make a good classical model of what the electrons in an atom are doing. Electrons in this situation are highly quantum mechanical. Electrons weigh 1836 times less than protons, and it matters. 
It means that electrons will be highly quantum mechanical under conditions where anything heavier than a proton can be considered classical. In particular, the conditions I have in mind involve atomic length-scales, chemical energy-scales, and ordinary non-cryogenic temperatures. Also note that any basis wavefunction other than the |1s⟩ wavefunction will have one or more nodes. A node is a place where the probability density goes to zero. The node in the |2s⟩ basis wavefunction is a sphere of radius 2a0. The node in the |2px⟩ basis wavefunction is the plane located at x=0. It would be particularly hard to make a ball-and-stick model that explains the existence of such nodes. Consider the |2px⟩ for a moment: The electron spends half its time on the left and half its time on the right ... but never crosses the middle. Trying to explain this in terms of particles would violate the Bolzano theorem, because the classical laws of motion tell us the particle’s world line is supposed to be continuous. Position is supposed to be a continuous function of time. Do not confuse quantum mechanical “orbitals” with classical orbits, such as the orbit of the earth around the sun. The earth is classical; electrons in atoms are not classical. Orbitals are not orbits. We can understand nodes in terms of waves. Imagine some water sloshing in a circular disk, as discussed in section 2. The water on the left has energy, and the water on the right has energy, but along the midline the energy density is zero, because the water is stationary there. The astute reader will have noticed that the centers of the |s⟩ wavefunctions are overexposed. That’s partly a reflection of the fact that the probability there is very much higher than the probability farther out, and partly a reflection of the limited dynamic range of human perception. By the time the outlying areas are dense enough to be readily perceptible, the center is necessarily overexposed. If you don’t want the center to be overexposed, you can push the Pause button to stop the simulation early ... at the cost of leaving the outlying areas underexposed and barely perceptible to the human eye. If you want to compare one wavefunction with another, the easiest procedure is to put multiple copies of this document on your screen. These images are not “artist’s impressions”. The probabilities are calculated accurately, directly from the Schrödinger equation. The trick for calculating the dot-positions, given the probabilities, is explained in reference 7. Technical note: The probability plotted by this applet is not the total probability. It is a conditional probability, namely the probability in a thin slice centered on the z=0 plane. Dots falling above or below this slice are not plotted, not accounted for, and not projected onto the plane. Since the three wavefunctions implemented here are all rotationally invariant about the x-axis, i.e. about the contour of constant y=0, z=0, you can imagine rotating the figures about that axis to get an idea of the full three-dimensional distribution. 10  Quantum Mechanics versus Particle Mechanics As mentioned in reference 8, when people talk about the size and shape of an atom, they usually mean the size and shape of the atom’s distribution of electrons. It is usually assumed that the atom is in its ground state, unless otherwise specfied. For an atom in the ground state, or any other stationary state, the spatial distribution of electrons is probabilistic, not deterministic. It is best visualized as a somewhat fluffy cloud. 
The distribution can be formalized in terms of wavefunctions. Even though they are sometimes called orbitals, you should not assume the wavefunction is analogous to the orbit of a planet going around the sun. The ordinary low-energy atomic states don’t look like that. We will have little to say about the Bohr model of the atom, except to say that it is not a good starting point if you want a modern understanding of quantum mechanics in general or atoms in particular. Actually there are many different mutually-inconsistent ways in which the word “orbital” gets used. Consider for example the |2pz⟩ wavefunction. Suppose we prepare an electron in the |2pz⟩ state and then measure its position. We repeat this many times. The result is that the electron is above the z=0 plane in half the observations, and below the z=0 plane in the other half of the observations. There is zero probability of finding the electron right at z=0. (Not just zero probability, but zero probability density.) This result is incompatible with a classical “particle” model of the electron, for several reasons. In contrast, these observations are consistent with a wave model. Water sloshing in a |2px⟩ pattern has energy density on the +x side of the pool and energy density on the −x side of the pool, but zero energy density along the node at x=0. You should not imagine that this means that waves are “right” or that particles are “wrong”. Quantum mechanics tells us that in reality, there is no such thing as classical waves, and no such thing as classical particles. There is only stuff. All stuff is capable of acting like a wave and acting like a particle. The behavior you see will be wave-like and/or particle-like, depending on how you set up the experiment. In particular, the statement that the electron is in a |2pz⟩ wavefunction is incompatible with the statement that the electron is above (or below) the z=0 plane. By this I mean incompatible in the Heisenberg sense. That is, you can design an experiment to determine that the electron is in the |2pz⟩ wavefunction, and you can design an experiment to determine whether the electron is above the z=0 plane, but you cannot determine both things at the same time. Therefore asking whether/how the |2pz⟩ electron crosses from above to below the z=0 plane is a profoundly wrong question. The question is predicated on incorrect assumptions about the equations of motion. 11  Quantum Mechanics versus Wave Mechanics The demos we’ve been discussing are all macroscopic, involving strictly classical wave mechanics. Consider the contrast: Classical waves were fully understood in the 19th century. Classical waves are useful as models of the atomic wavefunctions.   Quantum mechanics didn’t come along until the 20th century. There is more to quantum mechanics than wavefunctions. Understanding waves is a prerequisite for understanding QM. It is necessary but far from sufficient. There are two types of discreteness involved here. You can think of the two as being mutually perpendicular. Figure 6 shows the modes and occupation numbers that an atom might have. The enumeration of the modes runs vertically, while the quantum occupation numbers run horizontally.

 wavefunction          |  quantum occupation number -->
 spatial mode |  spin  |      0        1
 _____________|________|________________________
    2px       |   up   |              yes
    2px       |  down  |              yes
    2py       |   up   |              yes
    2py       |  down  |              yes
    etc.      |        |

Figure 6: Example: Mode versus Occupation Number Quantum mechanics takes its name from the quantization of the occupation numbers, i.e. 
the fact that if you design an experiment to measure the occupation number, you will always get an integer. For fermions such as electrons, each wavefunction has an occupation number that is either 0 and 1. For bosons, such as photons in a box (or phonons on a violin string) the occupation numbers can be any integer from zero on up ... but otherwise the boson chart is the same as the fermion chart: the modes of the box (or string) run vertically, while the occupation numbers run horizontally. The business of enumerating the spatial modes is entirely classical. You can tell it’s classical, because it doesn’t require knowing the value of hbar, and it doesn’t tell you anything about hbar. Then, in addition to the spatial part of the wave function, there is another part – spin – which is part of the enumeration of states but is intrinsically nonclassical, i.e. intrinsically quantum-mechanical. Finally, after we have enumerated the modes, the occupation of the modes is intrinsically nonclassical. The occupation numbers for macroscopic objects such as strings are huuuge. You cannot perceive the difference between huuuge and huuuge+1, so for practical purposes the amplitude is not quantized. (And furthermore it’s not quantized even in principle, because the model breaks down due to thermal effects and other complexities we’re not going to discuss.) 12  Nonstationary States The foregoing demos emphasize standing waves. But not all waves are standing waves. Think about your experience with things like jump-ropes, tie-down ropes, extension cords, and so forth. You can flirt one end of a long rope and launch a perfectly fine wave with no definite number of nodes... not a standing wave. Similarly, a duck can sit in the middle of a large pond, bobbing up and down, launching beautiful waves at any frequency whatsoever. The duck neither knows nor cares about the standing-wave modes of the pond. If (!) you are weakly coupled to a high-Q system then you can excite the resonant waves more easily than nonresonant waves. Atoms do in fact have some high-Q modes. This makes spectroscopy interesting. But atoms can do low-Q things as well. There are a couple of lessons here: 13  Running Waves This section expands on the discussion of running waves that began in section 4.3. 13.1  Discussion: Steady Flow The idea of steady flow applies to all of the examples in this section. 13.2  Animation: |2p+⟩ Orbital Figure 7 is an animated diagram that serves as a model of some interesting features of the atomic |2p+⟩ orbital (and similar orbitals). Figure 7: Stationary State == Running Wave : |4p+⟩ Orbital Figure 8 is a simplified version of figure 7. Figure 8: Simplified Stationary State == Running Wave : |2p+⟩ Orbital Let us now discuss how to interpret figure 7 and similar figures. First, some terminology: We refer to the object in figure 8 as an extrusion.   (It is also a torus. A torus is a type of extrusion, but not vice versa.) Any extrusion has a spine. In the figures, we have chosen the spine to be a large circle, encompassing the hole in the donut.   (The spine of a torus is called the major circle.) At each point along the spine, we can speak of the two-dimensional space perpendicular to the spine. The cross-section of the extrusion lies in this plane.   (The cross-section of a torus is called the minor circle.) The reason for preferring the term extrusion (rather than torus) will become obvious in section 13.3. We can represent real-space locations in the atom using spherical coordinates (r, θ, φ). 
The full abscissa of the wavefunction is (r, θ, φ, t) where t is the time. The real atom is connected to the diagram as follows: θ – Everything you see in the diagram happens in the atom’s equatorial plane, aka the plane of constant θ=0, aka the xy plane. The θ-dependence of the atomic wavefunction is not represented at all in the diagram. Note that we have chosen coordinates so that θ is the latitude, measured up from the equator (not the polar angle, measured down from the pole). r – In figure 7 we sample the atomic wavefunction at three different radii, namely r=0.5, r=1, and r=1.5 in some arbitrary units. The r-value is represented in the diagram by the radius of the circle that defines the spine of the extrusion (i.e. the major circle of the torus). A continuum of other r values exists in the atom, but they are not represented at all in the diagram. Meanwhile, figure 8 represents r=1 only. φ – The azimuthal coordinate is faithfully represented in the sense that azimuth in the diagram corresponds to azimuth in the atom. Going around the diagram following the spine of the extrusion corresponds to going around the atom following a contour of constant radius r and constant latitude θ=0. t – Time is faithfully represented, in the sense that real time in the animation corresponds to real time in the atom, just slowed down by about 15 orders of magnitude. Atomic frequencies are on the order of hundreds of terahertz, whereas the animation cycles at the rate of half a hertz. We now discuss the ordinate of the wavefunction. We start by choosing some specific point along the spine of the extrusion, corresponding to some specific location in the atom. We go there and construct the cross-sectional plane, perpendicular to the spine at that point. The ordinate is a complex number (or, equivalently, a two-dimensional vector) represented by a point in this cross-sectional plane. Remember that the cross-sectional plane has nothing to do with real space, and nothing to do with the abscissa of the wavefunction. We can use polar coordinates (ρ, β) in the plane, as follows: ρ – In the cross-sectional plane, the distance from the spine represents ρ, the magnitude of the wavefunction at that point. β – The phase of the ordinate is color-coded as shown in figure 9. That is: Figure 9: Color Code In the following, the terms in each column are more-or-less equivalent, and stand in contrast to the terms in the other column. We restrict attention to a single particle. (The wavefunction for a multiparticle system is more complicated.)

 Real space, aka position space (representable as a three-dimensional vector).   |   Probability-amplitude space (representable as a two-dimensional vector, or equivalently by a complex number).
 External space.   |   Internal space.
 The abscissa of the wavefunction (not including time).   |   The ordinate of the wavefunction.

The wavefunction as a whole is a vector field in the sense that each point in the real, external space has its own separate instance of the abstract, internal space. At each location in real space, the magnitude of the wavefunction is independent of time, because we are talking about a stationary state, namely |2p+⟩. The time-dependence of the ordinate is given by e^(iωt). That is, the ordinate just goes around and around in a circle in the complex plane. Therefore, if you look at any one place along the spine of the extrusion in figure 8, i.e. any particular φ value, the color-code simply travels around in a circle in the cross-sectional plane. 
This simple time-dependence may be easier to perceive if you put your fingers in front of the diagram so that all you can see is one small piece peeking through the slit between your fingers. Orient the slit along a direction of constant φ. Another option is to look at figure 12, which (at any given location) has the same kind of time dependence: the color-code simply flows around and around the minor circumference. Next, we consider the φ-dependence at constant time. (This is in contrast to the previous paragraph, which considered the time-dependence at constant φ.) It goes like e^(imφ), where m is the z-component of the angular momentum, and is ±1 for the |2p±⟩ orbitals. If you want to get a better look at the space-dependence, use the “Stop Animations” feature of your browser. If you save the image into a local file and then browse the file, it makes it easier to restart the animation after stopping. Combining the time-dependence and the space-dependence, we see that the overall probability amplitude goes like e^(i(mφ−ωt)), which is a running wave, running around the spine of the extrusion. This running motion is quite apparent in figure 8. The wavefunction phase β depends on the spatial φ but not r, as you can see from the fact that all three extrusions in figure 7 have the same color for any given azimuth at any given time. In contrast, the wavefunction norm ρ depends on the spatial r but not φ. The wavefunction norm does not go to zero between the given r values; remember the diagram says nothing about r values other than the three values mentioned above. In fact the |2p+⟩ orbital goes to zero along the axis of symmetry and goes to zero at infinity, and is nonzero everywhere else. You can use your imagination to visualize what the wavefunction is doing at other r values. Imagine lots and lots of extrusions, one for each r value. The relationship between the atom’s rotational time-dependence (at a given point) and the helical space-dependence (at a given time) is structurally the same as the structure we see in a barber pole.   However, structure is not the whole story. If you want a somewhat better mechanical analogy, an Archimedes screw or a leadscrew is better than a barber pole, because the screw actually transports something. If you want a much, much better analogy, the helical string modes discussed in section 4.3 genuinely and perceptibly embody energy, momentum, energy-flow, and momentum-flow. The so-called “barber pole illusion” is an illusory flow in the axial direction. It is illusory because the barber pole does not actually transport anything along the axial direction.   The atomic |2p+⟩ wavefunction embodies genuine, non-illusory kinetic energy and genuine momentum in the dφ direction. The apparent motion of the ordinate “upward” on the outer rim of the donut in figure 8 is of no significance. For one thing, I could have reversed the order of both the color-code and the direction of rotation, and this would have produced an apparent “downward” motion with no change in meaning. The only meaning comes from the sequence in which the coded colors appear. Also, remember that the entire extrusion occupies a region of zero volume in real space, namely (r, θ) = (1±0, 0±0), so even if you wrongly imagined it to be moving, it would move zero distance. (The same words apply separately to each of the extrusions in figure 7.)   The motion of the ordinate around the spine of each extrusion is significant. 
It is an apt representation of the actual flow of energy and momentum in the atomic |2p+⟩ orbital.

13.3  Animation: |2px⟩ Orbital

It is instructive to compare the running-wave orbitals discussed in section 13.2 with the corresponding standing-wave orbitals. We start by comparing figure 7 to figure 10.

Figure 10: Standing Wave : |4px⟩ Orbital

Figure 11 is a simplified version of figure 10.

Figure 11: Simplified Standing Wave : |2px⟩ Orbital

This is a standing wave. There is no propagation around the spine of the extrusion. If it seems like there might be some propagation in that direction, it is just a misperception due to the perspective view, as you can confirm by looking at the top view shown in figure 12.

Figure 12: Standing Wave : |2px⟩ Orbital, Top View

The rules for interpreting these diagrams are essentially the same as the rules given in section 13.2, with two exceptions. When we compare the two types of orbital, we note the following contrast, which is correctly represented by the figures:

In the |2px⟩ wavefunction, the norm of the wavefunction varies as a function of azimuth, while the phase is independent of azimuth within each lobe. (Each lobe is -1 times the other lobe.) This can be seen in figure 10 and other figures in this section.

In the |2p+⟩ wavefunction, the phase varies as a function of azimuth, while the norm is independent of azimuth. This can be seen in figure 7 and other figures in section 13.2.

13.4  Labeling Things “+” and “–” (Or Not)

In diagrams of the atomic |2px⟩ orbital, people sometimes label one lobe “+” and label the other lobe “–”. However, as mentioned in section 4.2, this is somewhat of a dirty trick. It would be OK to say that one lobe is -1 times the other lobe, but the “+” and “–” labels are misleading because they tend to suggest that the wavefunction is greater than zero or less than zero, which would imply that the wavefunction is a real-valued scalar field, which would be quite wrong.

Complex numbers (and vectors in two or more dimensions) have the property that they can rotate, changing their value without changing their magnitude. Real-valued scalars (and real-valued vectors in one dimension) cannot smoothly change without changing their magnitude.

In quantum mechanics, the wavefunction is vector-valued (i.e. complex-valued), and it is important that it be so. The stationary states do not go up and down; in fact they go around and around. Even for a standing wave such as the |2px⟩ orbital, the ordinate of the wavefunction goes around and around like a jump-rope.

As mentioned in section 4.2, voltage is a real number. In the two-phase wiring in a house, the red phase simply goes up and down (not around and around), crossing through zero twice each cycle. At each moment in time, the red voltage is either “+” (greater than zero) or “–” (less than zero) ... except at zero crossings. At each moment in time, the black phase is -1 times the red phase.

Furthermore, there is no hope of applying such labels to a running wave such as the |2p+⟩ orbital. For starters, the |2p+⟩ orbital has no lobes! This should be clear from figure 8. The idea of giving some region a simple label like “+” requires that the whole region have the same phase (or at least be wholly greater than zero). There are no such regions in figure 8, because any finite-sized region has a continuum of different phases (and there is no such thing as “greater than” in two or more dimensions).

13.5  Additional Remarks
1. Even though the |2p+⟩ wavefunction is a running wave, it is also classified as a stationary state, because it is an eigenfunction of the Hamiltonian. Also, the magnitude of the wave at any particular point in real space is constant in time. This is worth mentioning, and worth a little bit of emphasis, because usually when people think of a running wave they think of a non-stationary state.

2. To see the connection between figure 5 and figure 8, start with figure 5. Imagine that the string is very long compared to the amplitude of the wave motion. Gradually bend the axis of rotation around to close on itself, forming the spine of the extrusion. (You can’t do this with an actual string, but you should be able to do it with a springy rod.)

3. Don’t panic. If you find all this a bit hard to grasp at first, don’t feel too bad about it. What we are trying to do is not easy. On the other hand, don’t give up. What we are trying to do is hard but entirely doable. We are trying to visualize a six-dimensional object: The abscissa has four dimensions (r, θ, φ, t) and the ordinate has two dimensions (ρ, β). Figure 8 represents four of these, and figure 7 represents five of them. Most people were not born with the ability to visualize abstractions in three dimensions, let alone four, five, or six. This is a skill that can be learned, and is well worth learning.

14  Spherical Harmonics

Here is a perspective view of the first few spherical harmonics. Each row corresponds to a definite l value, from l=0 to l=3. On a given row, there are 2l+1 spheres, one for each m value, in order from m=−l to m=l.

Figure 13: Spherical Harmonics

Note that this perspective uses a viewpoint slightly north of the equator, so it is about halfway between being a side view and a top view. In contrast, the patterns in the pool of water – in particular the |2p+⟩ and |2p−⟩ wavefunctions discussed in section 2 and section 5 – correspond to a plain top view, looking straight down the atomic z axis. This diagram uses hue to encode the phase of the wavefunction, using the same color-code as in section 13.2, as explained in connection with figure 9. In addition, where the magnitude of the wavefunction goes to zero, the color goes to white, which is a good representation, in the sense that the phase is undefined at the nodes of the wavefunction, and white has an undefined hue. Each Ylm has l−|m| nodes. Each node is a circle of constant latitude. Note that nodes in the southern hemisphere are not visible from the perspective used in this diagram.

We have three different ways of looking at things:

All the spheres in figure 13 are rotating in internal space at the same angular rate. That is, if you fixate on a single (non-white) point on any one of the spheres, it will cycle through all the colors, in the same order, the same as any other point on any of the spheres.

The rate of rotation in real space is slower than the rate of rotation in internal space. The rotation rate in real space scales like 1/m ... except when m=0. You can see that the m=0 wavefunctions (along the central column of the triangle) are not rotating at all in real space.

The rotation rate in internal space is precisely proportional to the energy, in accordance with the fundamental laws of quantum mechanics. You are not likely to find an atom where all the states have the same energy, independent of (l, m) ... but if you did, figure 13 is what the basis wavefunctions would look like.
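To make the azimuthal behavior concrete, here is a small numerical sketch. It uses the standard textbook l=1 harmonics written out by hand; the normalization and the Condon–Shortley sign convention are assumptions of the sketch, not something read off the figures.

```python
import numpy as np

# Textbook l=1 spherical harmonics (Condon-Shortley convention), sampled on a
# ring of constant colatitude.  theta here is the usual polar angle measured
# down from the pole (NOT the latitude convention used in the text above).
theta = np.pi / 2                                   # the equatorial plane
phi = np.linspace(0, 2*np.pi, 8, endpoint=False)

Y1p1 = -np.sqrt(3/(8*np.pi)) * np.sin(theta) * np.exp(+1j*phi)
Y1m1 = +np.sqrt(3/(8*np.pi)) * np.sin(theta) * np.exp(-1j*phi)

# magnitude is the same all the way around the ring:
print(np.allclose(np.abs(Y1p1), np.abs(Y1p1[0])))                    # True
# dividing out exp(i*phi) leaves the same constant everywhere,
# so the phase depends on phi only through the factor exp(i*phi):
print(np.allclose(Y1p1 * np.exp(-1j*phi), Y1p1[0]))                  # True
# a real combination of the m=+1 and m=-1 harmonics gives the px shape:
px = (Y1m1 - Y1p1) / np.sqrt(2)
print(np.allclose(px.imag, 0),
      np.allclose(px.real, np.sqrt(3/(4*np.pi))*np.sin(theta)*np.cos(phi)))  # True True
```

The first check confirms that the magnitude of Y_1^{+1} is independent of azimuth, the second confirms that its only φ-dependence is e^{iφ}, and the last line confirms that a real combination of Y_1^{+1} and Y_1^{−1} has the cos φ dependence of the |2px⟩ angular shape discussed next (with other sign conventions it is the sum rather than the difference that does this).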
In figure 13, the spherical harmonic that has (l,m) = (1,1) gives the angular dependence of the |2p+⟩ wavefunction. Similarly the (1,−1) spherical harmonic gives the angular dependence of the |2p−⟩ wavefunction. If you add these two together, you get the |2px⟩ wavefunction shown in figure 10. The latter has a node in the YZ plane, which none of the basis functions in figure 13 has; you need to take combinations of the basis functions to get such a node. Also the |2px⟩ wavefunction does not rotate in external space, even though the (1,1) and (1,−1) spherical harmonics do.

The following comparisons may help get a feel for the underlying meaning of spherical harmonics, and for how they are used:

In one dimension, any function that is periodic with period 2π can be thought of as a function defined on the unit circle. In two dimensions, we are interested in functions defined on the unit sphere.

Any such 1-D function, if it is reasonably smooth, can be well approximated by a Fourier series, i.e. a weighted sum of sine and cosine functions. Any such 2-D function, if it is reasonably smooth, can be well approximated by a spherical harmonic series, i.e. a weighted sum of Ylm functions.

Sines and cosines are useful as a basis set for functions that are periodic in one dimension. The spherical harmonics are useful as a basis set for functions that are periodic in two dimensions.

In particular, functions that are smooth, slowly varying functions of angle as we go around the circle can be well represented by a sum containing only low-frequency sines and cosines, so we need relatively few terms in the Fourier series. Likewise, functions that are smooth, slowly varying functions of angle as we move around on the surface of a sphere can be well represented by a sum containing only low-order spherical harmonics, so we need relatively few terms in the spherical harmonic series.

Note that there is such a thing as a two-dimensional Fourier series. It applies to systems that have the topology of a torus (aka Born/von-Kármán boundary conditions). The spherical harmonic series, in contrast, applies to systems that have the topology of a sphere.

15  Spherical Harmonics and Atoms

15.1  Basis States

The electron wavefunction in the vicinity of an atom is usually a slowly-varying function of angle. (Rapidly-varying functions are disfavored because they would have higher energy.) Therefore the electron wavefunction can be written as a sum of spherical harmonics, and usually as a sum with relatively few terms. It should go without saying that atoms are not “really” little spheres with colored markings on them. Figure 13 does not attempt to portray the radial dependence of the atomic wavefunctions. For a full description of the basis wavefunctions, you would need to multiply the angular dependence (as described by the spherical harmonics) by the appropriate function of radius. Some information about this is depicted in reference 10.

15.2  Stationary States

We now discuss an even stronger connection that can sometimes be made between the spherical harmonics and real atoms. There are some situations – certainly not all situations – where the stationary states of the atom, i.e. the states of definite energy, have the same symmetry as a single spherical harmonic. Here we are no longer talking about a sum of spherical harmonics; we are now talking about just one particular spherical harmonic. An isolated atom in a magnetic field is an example of such a situation.
The detailed shape of the stationary state may not be exactly the same as the spherical harmonic, but the symmetry is the same. Therefore looking at figure 13 and appreciating the symmetry of the various drawings is worth some effort.

16  References

1. Geoff Haselhurst, Karene Howie, “On Truth & Reality: The Spherical Standing Wave Structure of Matter (WSM) in Space”
2. John Denker, “Introduction to Quantum Mechanics”
3. John Denker, “How to Draw Molecules”
4. E. Schrödinger, “Quantisierung als Eigenwertproblem. (Erste Mitteilung)” Annalen der Physik 384 (4), 361–376 (January 1926); “Quantisierung als Eigenwertproblem. (Zweite Mitteilung)” Annalen der Physik 384 (6), 489–527 (1926); “Quantisierung als Eigenwertproblem. (Dritte Mitteilung)” Annalen der Physik 385 (13), 437–490 (1926); “Quantisierung als Eigenwertproblem. (Vierte Mitteilung)” Annalen der Physik 386 (18), 109–139 (1926)
5. John Denker, “Triplet Below Singlet – Hund’s Multiplicity Rule for Molecules”
6. John Denker, “Stand-alone Java animation to scatter-plot atomic electron probability”
7. John Denker, “Constructing Random Numbers with an Arbitrary Distribution”
8. John Denker, “Introduction to Atoms”
9. John Denker, “Coherent States”
10. “Grand Orbital Table”

It is traditional to use lycopodium powder to help visualize the surface, but that just mystifies students. What is this stuff? Where does it come from? What is special about it? Soap bubbles are much less mysterious.

The N=3 chemistry-shell is conventionally assumed to have no |3d⟩ contributions, and the periodic table is structured accordingly. This is usually a good approximation.

Actually the |2p⟩ electrons are somewhat disfavored relative to the |2s⟩ electrons, and the closed |2s⟩ shell in Be does have chemically-observable consequences, as you will notice if you try to make Be2, which is not much easier than making He2. But let’s not get carried away; you can obtain a chunk of metallic Be and/or BeO a lot more easily than metallic He and/or HeO.

Extending the applet to implement additional wavefunctions would be straightforward.

Copyright © 2003 jsd
A Motivation for Quantum Computing

Quantum mechanics is one of the leading scientific theories describing the rules that govern the universe. Its discovery and formulation was one of the most important revolutions in the history of mankind, contributing in no small part to the invention of the transistor and the laser. Here at Math ∩ Programming we don’t put too much emphasis on physics or engineering, so it might seem curious to study quantum physics. But as the reader is likely aware, quantum mechanics forms the basis of one of the most interesting models of computing since the Turing machine: the quantum circuit. My goal with this series is to elucidate the algorithmic insights in quantum algorithms, and explain the mathematical formalisms while minimizing the amount of “interpreting” and “debating” and “experimenting” that dominates so much of the discourse by physicists.

Indeed, the more I learn about quantum computing the more it’s become clear that the shroud of mystery surrounding quantum topics has a lot to do with their presentation. The people teaching quantum (writing the textbooks, giving the lectures, writing the Wikipedia pages) are almost all purely physicists, and they almost unanimously follow the same path of teaching it. Scott Aaronson (one of the few people who explains quantum in a way I understand) describes the situation superbly.

There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis that these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.

The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.

Indeed, the sequence of experiments and debate has historical value. But the mathematics needed to have a basic understanding of quantum mechanics is quite simple, and it is often blurred by physicists in favor of discussing interpretations. To start thinking about quantum mechanics you only need a healthy dose of linear algebra, and most of it we’ve covered in the three linear algebra primers on this blog. More importantly for computing-minded folks, one only needs a basic understanding of quantum mechanics to understand quantum computing.

The position I want to assume on this blog is that we don’t care about whether quantum mechanics is an accurate description of the real world. The real world gave an invaluable inspiration, but at the end of the day the mathematics stands on its own merits. The really interesting question to me is how the quantum computing model compares to classical computing.
Most people believe it is strictly stronger in terms of efficiency. And so the murky depths of the quantum swamp must be hiding some fascinating algorithmic ideas. I want to understand those ideas, and explain them up to my own standards of mathematical rigor and lucidity. So let’s begin this process with a discussion of an experiment that motivates most of the ideas we’ll need for quantum computing. Hopefully this will be the last experiment we discuss.

Shooting Photons and The Question of Randomness

Does the world around us have inherent randomness in it? This is a deep question open to a lot of philosophical debate, but what evidence do we have that there is randomness? Here’s the experiment. You set up a contraption that shoots photons in a straight line, aimed at what’s called a “beam splitter.” A beam splitter seems to have the property that when photons are shot at it, they will either be reflected at a 90 degree angle or stay in a straight line with probability 1/2. Indeed, if you put little photon receptors at the end of each possible route (straight or up, as below) to measure the number of photons that end at each receptor, you’ll find that on average half of the photons went up and half went straight. The triangle is the photon shooter, and the camera-looking things are receptors.

If you accept that the photon shooter is sufficiently good and the beam splitter is not tricking us somehow, then this is evidence that the universe has some inherent randomness in it! Moreover, the probability that a photon goes up or straight seems to be independent of what other photons do, so this is evidence that whatever randomness we’re seeing follows the classical laws of probability. Now let’s augment the experiment as follows. First, put two beam splitters on the corners of a square, and mirrors at the other two corners, as below. The thicker black lines are mirrors which always reflect the photons.

This is where things get really weird. If you assume that the beam splitter splits photons randomly (as in, according to an independent coin flip), then after the first beam splitter half go up and half go straight, and the same thing would happen after the second beam splitter. So the two receptors should measure half the total number of photons on average. But that’s not what happens. Rather, all the photons go to the top receptor! Somehow the “probability” that the photon goes left or up in the first beam splitter is connected to the probability that it goes left or up in the second. This seems to be a counterexample to the claim that the universe behaves on the principles of independent probability. Obviously there is some deeper mystery at work.

Complex Probabilities

One interesting explanation is that the beam splitter modifies something intrinsic to the photon, something that carries with it until the next beam splitter. You can imagine the photon is carrying information as it shambles along, but regardless of the interpretation it can’t follow the laws of classical probability. The simplest classical probability explanation would go something like this: There are two states, RIGHT and UP, and we model the state of a photon by a probability distribution (p, q) such that the photon has a probability p of being in state RIGHT and a probability q of being in state UP, and like any probability distribution p + q = 1.
A photon hence starts in state (1,0), and the process of traveling through the beam splitter is the random choice to switch states. This is modeled by multiplication by a particular so-called stochastic matrix (which just means the rows sum to 1)

\displaystyle A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}

Of course, we chose this matrix because when we apply it to (1,0) and (0,1) we get (1/2, 1/2) for both outcomes. By doing the algebra, applying it twice to (1,0) will give the state (1/2, 1/2), and so the chance of ending up in the top receptor is the same as for the right receptor. But as we already know this isn’t what happens in real life, so something is amiss.

Here is an alternative explanation that gives a nice preview of quantum mechanics. The idea is that, rather than have the state of the traveling photon be a probability distribution over RIGHT and UP, we have it be a unit vector in a vector space (over \mathbb{C}). That is, now RIGHT and UP are the (basis) unit vectors e_1 = (1,0), e_2 = (0,1), respectively, and a state x is a linear combination c_1 e_1 + c_2 e_2, where we require \left \| x \right \|^2 = |c_1|^2 + |c_2|^2 = 1. And now the “probability” that the photon is in the RIGHT state is the square of the coefficient for that basis vector p_{\text{right}} = |c_1|^2. Likewise, the probability of being in the UP state is p_{\text{up}} = |c_2|^2.

This might seem like an innocuous modification — even a pointless one! — but changing the sum (or 1-norm) to the Euclidean sum-of-squares (or the 2-norm) is at the heart of why quantum mechanics is so different. Now rather than have stochastic matrices for state transitions, which are defined the way they are because they preserve probability distributions, we use unitary matrices, which are those complex-valued matrices that preserve the 2-norm. In both cases, we want “valid states” to be transformed into “valid states,” but we just change precisely what we mean by a state, and pick the transformations that preserve that.

In fact, as we’ll see later in this series, using complex numbers is totally unnecessary. Everything that can be done with complex numbers can be done without them (up to a good enough approximation for computing), but using complex numbers just happens to make things more elegant mathematically. It’s the kind of situation where there are more and better theorems in linear algebra about complex-valued matrices than about real-valued matrices.

But back to our experiment. Now we can hypothesize that the beam splitter corresponds to the following transformation of states:

\displaystyle A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}

We’ll talk a lot more about unitary matrices later, so for now the reader can rest assured that this is one. And then how does it transform the initial state x =(1,0)?

\displaystyle y = Ax = \frac{1}{\sqrt{2}}(1, i)

So at this stage the probability of being in the RIGHT state is 1/2 = (1/\sqrt{2})^2 and the probability of being in state UP is also 1/2 = |i/\sqrt{2}|^2. So far it matches the first experiment. Applying A again,

\displaystyle Ay = A^2x = \frac{1}{2}(0, 2i) = (0, i)

And the photon is in state UP with probability 1. Stunning. This time Science is impressed by mathematics.

Next time we’ll continue this train of thought by generalizing the situation to the appropriate mathematical setting. Then we’ll dive into the quantum circuit model, and start churning out some algorithms. Until then!
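(For readers who want to check the arithmetic numerically, here is a quick NumPy sketch of the two calculations above; the variable names are made up for illustration.)

```python
import numpy as np

e1 = np.array([1, 0], dtype=complex)          # the RIGHT basis state

# Classical stochastic model: two splitters still leave a 50/50 mixture.
A_stoch = np.array([[0.5, 0.5], [0.5, 0.5]])
print(A_stoch @ A_stoch @ [1, 0])             # [0.5 0.5]

# Unitary model: two splitters send everything UP.
A = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)
y = A @ e1
print(np.abs(y)**2)                           # [0.5 0.5] after one splitter
z = A @ y
print(np.abs(z)**2)                           # [0. 1.]  after two splitters
```

The stochastic model predicts a 50/50 split after two splitters, while the unitary model sends all the probability to UP, matching the experiment.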
[Edit: Actually, if you make the model complicated enough, then you can achieve the result using classical probability. The experiment I described above, while it does give evidence that something more complicated is going on, does not fully rule out classical probability. Mathematically, you can lay out the axioms of quantum mechanics (as we will from the perspective of computing), and mathematically this forces non-classical probability. But to the best of my knowledge there is no experiment or set of experiments that gives decisive proof that all of the axioms are necessary. In my search for such an experiment I asked this question on stackexchange and didn’t understand any of the answers well enough to paraphrase them here. Moreover, if you leave out the axiom that quantum circuit operations are reversible, you can do everything with classical probability. I read this somewhere but now I can’t find the source 😦 One consequence is that I am more firmly entrenched in my view that I only care about quantum mechanics in how it produced quantum computing as a new paradigm in computer science. This paradigm doesn’t need physics at all, and apparently the motivations for the models are still unclear, so we just won’t discuss them any more. Sorry, physics lovers.]

14 thoughts on “A Motivation for Quantum Computing”

1. Terrific! There’s always a lot of interesting stuff to dive into, and while figuring everything out, step by step and book by book, can be useful and illuminating, there is simply not enough spare time to learn about everything. Such series that go straight to the point are great to get some basic insight into a subject. Also, your writing is very clear. I’ll be keeping an eye on this one. Thanks!

2. Cool stuff. Very clear. Is there a paper that describes the experiment with the photons? I wonder how the fact that there is a possibility that the beamer changes something about the information the photon is carrying is addressed in the paper. If it is addressed at all.

3. Wouldn’t the simplest explanation of the data be that half of the photons are such as to always bounce off beam splitters and half of them are such as to always pass through beam splitters? Then the beam splitter doesn’t even have to modify the state of the photon. These also seem like more natural intrinsic properties to give the photon than RIGHT and UP because they don’t raise questions like “What happens if you rotate the experiment?”

4. I guess you are assuming that all the photons are intrinsically identical to begin with. That would rule out my suggestion, although it seems an unwarranted assumption. It also seems you are assuming that the photon’s state can only be in two states (at least before you introduce the complex number stuff). If the photons are all in the same state initially then two states are not enough to generate the results of the experiment, but you can do it with three. Call the states A, B, and C, and let A be the initial state. When a beam splitter gets an A photon, half the time it reflects it and makes it a B photon, and half the time it passes it and makes it a C photon. B photons are always reflected and C photons are always passed.

5. Here are two things I found confusing about this presentation. You suggest thinking of the state of a photon as a probability distribution when it seems clear that (before you introduce the complex number stuff) it is meant to be a binary variable.
And when you are describing how the beam splitters process photons you don’t separately consider how they change their states and whether they reflect or pass them. (I think the terms RIGHT and UP were meant to be somehow suggestive of how the reflecting/passing works, but I don’t really understand what these terms are supposed to indicate. Probably my earlier comment about rotating the experiment is off base, based on a wrong understanding of these terms.)

• Good comments. I’ll try to address them one by one.

> all the photons are intrinsically identical to begin with…seems an unwarranted assumption.

Really? I have never heard of any theory that distinguishes, say, one standard Helium atom from another. Why would one photon be different from another? I believe in reality the mirrors are polarized, and a photon passing through the mirror will correspond to a change in spin of the photon (the binary states being “polarized” or “not polarized”). I used the terms RIGHT and UP because then I don’t have to talk about spin and polarization, and the fact is it doesn’t matter what you call the states. What matters is the behavior. I like to think about quantum mechanics as an algorithmic mechanism for manipulating abstract states, not a physical process for manipulating objects.

> You suggest thinking of the state of a photon as a probability distribution when it seems clear that…it is meant to be a binary variable… [and also] you can do it with three [states instead of two].

You’re right, you can get the behavior by adding more states. I don’t have a good example that I can use to replace it (I will look for one), but I do know that once you add in the axiom that quantum transformations are linear, continuous, and reversible operators, you suddenly lose the ability to model it with classical probability. But it sounds like you already know this? Too bad the answers on stackexchange pretty much went off on a tangent.

6. Irreversible dynamics in quantum mechanics is possible even though reversible dynamics via the Schrödinger equation is fundamental. This is because the restriction of a reversible dynamics in a larger system may not give a reversible dynamics on your subsystem of interest. Reversibility is not important for quantum computing. In measurement-based quantum computing (https://en.wikipedia.org/wiki/One-way_quantum_computer), one does computation equivalent to computation in the circuit model by preparing a multipartite “resource” state and then measuring each qubit, one by one, in a specific pattern and in an adaptive way with subsequent measurements depending on previous results. And there’re more works out there on dissipative quantum computing that I’m not familiar with.

7. The original positivist interpretation of quantum mechanics forces QC people to admit that certain non-empirically verified assumptions hold, such as unitary and psi-ontic wave functions. The “negative probabilities” are in a quasiprobability distribution where the anomaly is created at less than hbar and there are no final negative probabilities. By the way, the wave function of an electron is a complex-coefficient spinor function, which isn’t just a simple amplitude. The wave function can be positive, negative, complex, spinor or vector. Note that mathematicians have found quantum probability to be useless for modeling anything but atomic particles. Just about every field of science attempts to quantify counterfactuals and probability in QM is no more needed than in any other branch of science.
Everything in QM follows from the uncertainty principle and that only gives support to the idea that we use probabilities to measure OUR uncertainty! The system could even be chaotic. To get probabilities, you are assuming classical particles but there are no classical particles. Quantum mechanics really predicts the expected values of observables and it does not measure probabilities. Just because simulating quantum states requires a high Turing complexity does not mean the argument runs backwards, because the exponential computation on the wave function may well have nothing to do with the physical system of something like a dumb electron. Many in the QC field subscribe to the Schrödinger’s cat fallacy. The fallacies here are as persistent as those about EPR. QCs may not be possible at all given that they go beyond the currently accepted theory of QM. People need to be honest about that. People making money building them are not honest, however.

8. Jeremy hi. I haven’t read all of your articles yet, but since I’ve been a Physics undergrad many years ago, the phrase “complex probabilities” was a bit too much for me. I can even now remember my dear prof. saying: “guys, even if we’re dealing with complex probability amplitudes, if you end up getting complex probabilities when calculating say the mean value of the total Energy, you’ve obviously done something wrong.” I’m thinking you might have meant something different, but I just wanted to give sincere feedback. Oh yes, also I haven’t lived many years in an English-speaking country, so perhaps I’m not well aware of the technical jargon and the colloquialisms used in the States to describe the wavefunction or the Dirac states. Cheers and keep it up. You’re a great inspiration, for a Physicist turned into an aspiring Applied Mathematician.

ps: now that I think of it, the relationship of Quantum Mechanics as a method of using complex linear algebra to find results for the real world feels analogous to the use of Complex Analysis to answer questions of Real Analysis.

ps2: I’d really like to read one of your posts in the future that’d explain the idea behind Hugh Everett III’s PhD thesis [http://goo.gl/hRHxnf (PDF, 4.2MB)]. I don’t think it’s easily linked with Quantum Computing, so it might not interest you. But from skimming into it, I got the idea that it is closer to a Mathematician than to a Physicist. Maybe that’s why it was so unpopular between Physicists back then, and of course because N. Bohr was still alive. Oh yes, and it seems to me that he was deeply influenced by the work of C. Shannon on Information Theory.

9. As a physics graduate, what I want to say is that the historical approach to QM was only taught in some undergraduate introductory classes. Many talented students just skip it and learn QM from the mathematical assumptions with the experimental picture in mind. Reference: Modern Quantum Mechanics by Sakurai.

10. This is related to your Edit at the end. Bell’s theorem showed that if QM works the way they think it does, then the correlations you would see in certain experiments couldn’t be explained by a “local” hidden-variable theory. “Local” sort of means that information has to travel at the speed of light or less. (That includes not going backwards in time.) The example was a variation on the original Einstein–Podolsky–Rosen (EPR) paradox: a correlated-particle pair is produced in a middle place, and the two particles go to two distant detectors to (say) the East and West.
I think the detectors need to be oriented by true random noise at each end just before detection. The correlation between the two detections follows a curve that’s a trig function of the difference between the two detectors’ orientations at the moment(s) of detection(s). The way I like to summarize it is: the shape of that curve *can’t be explained by any encoding of any amount of information* traveling along with the two particles. The key is that neither particle knows the relative orientation of the two detectors, each one acts locally as if it doesn’t know it, but the correlation between what they do *does* depend on it. Later experiments, especially Aspect’s, showed those curves (as predicted by QM) are the curves that show up. The place I finally found a decent explanation of amplitudes, EPR, Bell, and Aspect was in _Quantum Reality_ by Nick Herbert. Anyway, this-all means ours can’t be local physics plus classical probability; I’m pretty sure that’s what you meant by ruling out classical probability. I would also guess that it would prevent you from modeling quantum circuits without (the equivalent of) amplitudes, but like you said, that would be getting into physics, and I probably know less physics than you do.
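To put a number on that comment, here is a small sketch of the standard CHSH check with a spin singlet; the measurement angles are the usual textbook choices and are not taken from the comment above. It shows the quantum correlations exceeding the local-hidden-variable bound of 2.

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) and spin measurement along angle t in the xz-plane.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(t):                      # measurement operator along angle t
    return np.cos(t)*sz + np.sin(t)*sx

def E(a, b):                      # quantum correlation <(sigma.a) x (sigma.b)> = -cos(a-b)
    M = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ M @ singlet)

a, ap, b, bp = 0, np.pi/2, np.pi/4, 3*np.pi/4
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(S)        # ~2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2
```

A local hidden-variable model caps the same combination of correlations at 2, and that gap is what Aspect-type experiments measure.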
2018 Material and Molecular Engineering

Academic unit or major: Undergraduate major in Transdisciplinary Science and Engineering
Instructors: Takahashi Kunio, Hinode Hirofumi, Cross Jeffrey Scott, Matsumoto Yoshihisa
Day/Period (Room No.): Tue 1-2 (S514), Fri 1-2 (S514)

Course description and aims

This course aims to teach the basics of quantum mechanics and statistical thermodynamics of atomic interactions, and explain their relation to the material properties of metallic materials, semiconductors, insulators, polymers, ceramics, etc.

Student learning outcomes

After studying this subject, the students should be able to:
1. acquire the basic knowledge of atomic/molecular interactions in engineering materials, which is essential in determining the material properties.
2. apply their knowledge to select material properties, understand how they can be manipulated, and determine what processes best meet the requirements of an engineering design.

Keywords: Quantum mechanics, material properties, statistical mechanics, thermodynamics.

Competencies that will be developed

Class flow

Towards the end of classes, students will be asked to give a brief summary, hold a group discussion, or work exercise problems related to the topics taught in the class.

Course schedule/Required learning

Class 1: Basics of quantum mechanics – Understand the equation of motion and the Schrödinger equation
Class 2: Isolated hydrogen atom – Understand the analytic solution of the Schrödinger equation under the Born-Oppenheimer approximation
Class 3: Electron configuration and the line spectrum – Understand spectral lines emitted from hydrogen plasma and the principle of spectroscopy equipment
Class 4: Basics of atomic bonding – Understand the LCAO concept, ionic bond, covalent bond, and metallic bond
Class 5: Comprehensive understanding of materials properties based on the basics of atomic bonding – Understand the relation of atomic bonding and materials properties
Class 6: Material/molecular structure and properties: Mechanical engineering perspective – Understand the concept of mechanical properties of materials
Class 7: Material/molecular structure and properties: Chemical engineering perspective – Understand the concept of chemical properties of materials
Class 8: Material/molecular structure and properties: Electrical engineering perspective – Understand the concept of electrical properties of materials
Class 9: Basics of Statistical Mechanics – Understand the principle of statistical mechanics in terms of the definition of temperature, equilibrium state, etc.
Class 10: Thermal properties of materials – Understand the usage of statistical mechanics to explain thermal properties of materials
Class 11: First law of thermodynamics – Understand the first law of thermodynamics and its application
Class 12: Second and third laws of thermodynamics – Understand the second and third laws of thermodynamics and their application
Class 13: Heat Transfer – Understand the principle of energy transfer
Class 14: Mass Transfer – Understand the principle of mass transfer
Class 15: State Transition – Understand the principle of state transition in chemical reactions

Textbook: Callister, W.D., "Materials Science and Engineering: An Introduction", 7th edition, John Wiley and Sons, Inc. (2007).

Reference books, course materials, etc.: Smith, W. F., "Foundations of Materials Science and Engineering", 4th edition, McGraw-Hill (2006).
Atkins, P., Paula, J. D., "Physical Chemistry", 9th edition, W. H. Freeman and Company (2010).

Assessment criteria and methods: Reports and final exam

Related courses:
• ZUQ.T202 Thermodynamics

Basics of ordinary and partial differential equations
Transitionless quantum driving is a concept that was invented by Berry in 2009. In his article on transitionless quantum driving he showed that it is possible to speed up adiabatic evolution of eigenstates without generating transitions between eigenstates. By introducing an auxiliary Hamiltonian known as a counter-diabatic Hamiltonian, it is possible to drive eigenstates of an arbitrary Hamiltonian exactly, that is, no transitions occur between eigenstates. The reference to Berry's original article can be found here.

To summarize Berry's article: consider a very general time-dependent Hamiltonian $\hat{H}_0$, with instantaneous eigenstates and eigenenergies given by
$$ \hat{H}_0(t)|n(t)\rangle = E_n(t)|n(t)\rangle \tag{1} $$
In the adiabatic approximation, the states driven by $\hat{H}_0(t)$ would be
$$ |\psi_n(t)\rangle = e^{i(\theta_n(t)+\gamma_n(t))} |n(t)\rangle \tag{2} $$
where $\theta_n(t)$ is the dynamical phase
$$ \theta_n(t)=-\frac{1}{\hbar}\int_0^tE_n(s)ds \tag{3} $$
and $\gamma_n(t)$ is the geometrical (Berry) phase
$$ \gamma_n(t)=i\int_0^t\langle n(s)|\partial_sn(s)\rangle ds \tag{4} $$
Berry finds a Hamiltonian for which transitions to other eigenstates do not occur. This means that the adiabatic state becomes the exact solution of the Schrödinger equation
$$ i\hbar\partial_t|\psi_n(t)\rangle=\hat{H}(t)|\psi_n(t)\rangle \tag{5} $$
Applying the time-derivative operator to the adiabatic state $(2)$, we obtain
$$ \hat{H}(t) = \sum_n |n\rangle E_n\langle n| +i\hbar\sum_n(|\partial_tn\rangle\langle n|-\langle n|\partial_tn\rangle|n\rangle\langle n|) = \hat{H}_0(t)+\hat{H}_{CD}(t) \tag{6} $$
where I have suppressed the time-dependence for simplicity, $|n(t)\rangle\equiv |n\rangle$. Here
$$ \hat{H}_{CD}(t) = i\hbar\sum_n(|\partial_tn\rangle\langle n|-\langle n|\partial_tn\rangle|n\rangle\langle n|) \tag{7} $$
is known as the counter-diabatic Hamiltonian. The sum goes over all the eigenstates satisfying $(1)$. Equation $(6)$ gives us the Hamiltonian $\hat{H}(t)$ that drives the eigenstates $|n(t)\rangle$ of $\hat{H}_0(t)$ exactly, even under diabatic conditions.

Question: In most real experiments, one usually only worries about the evolution of a subset of the Hilbert space, rather than the full Hilbert space.

1. Is it possible to define a Hamiltonian, a state-dependent Hamiltonian, that drives only a specific eigenstate rather than the full Hilbert space of eigenstates as in $(6)$? This specific eigenstate could for example be the ground state.

2. What would it look like?
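To make the setup concrete, here is a minimal two-level (Landau–Zener-type) sketch of Eqs. (6)–(7). The Hamiltonian and the numbers are just an example made up for illustration (with $\hbar=1$), and the counter-diabatic term drives the whole spectrum, which is exactly what I would like to avoid:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Example H_0(t) = (Delta(t)*sz + Omega*sx)/2 with a fast linear sweep of Delta.
T, Omega = 2.0, 1.0
Delta  = lambda t: 10.0 * (2.0*t/T - 1.0)     # sweeps from -10 to +10
dDelta = lambda t: 20.0 / T

H0 = lambda t: 0.5*(Delta(t)*sz + Omega*sx)

def Hcd(t):
    # For this real two-level H_0, Eq. (7) works out to (1/2)*d(theta)/dt * sy,
    # with theta = arctan(Omega/Delta), so d(theta)/dt = -Omega*dDelta/(Delta^2 + Omega^2).
    return 0.5 * (-Omega*dDelta(t)) / (Delta(t)**2 + Omega**2) * sy

def evolve(H, psi, nsteps=4000):
    dt = T/nsteps
    for k in range(nsteps):
        psi = expm(-1j*H((k + 0.5)*dt)*dt) @ psi
    return psi

def ground_state(t):
    return np.linalg.eigh(H0(t))[1][:, 0]     # lowest-energy eigenvector

psi0, target = ground_state(0.0), ground_state(T)
print("fidelity, H0 alone :", abs(target.conj() @ evolve(H0, psi0))**2)                      # well below 1
print("fidelity, H0 + Hcd :", abs(target.conj() @ evolve(lambda t: H0(t) + Hcd(t), psi0))**2) # ~ 1
```

With the counter-diabatic term included, the state tracks the instantaneous ground state even for this fast sweep; what I am asking is whether one can engineer a (possibly state-dependent) term that achieves this for the ground state alone, without the full sum over $n$ in Eq. (7).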
Optimal diabatic dynamics of Majorana-based quantum gates

Armin Rahmani
Department of Physics and Astronomy and Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4
Department of Physics and Astronomy, Western Washington University, 516 High Street, Bellingham, Washington 98225, USA

Babak Seradjeh
Department of Physics, Indiana University, Bloomington, Indiana 47405, USA

Marcel Franz
Department of Physics and Astronomy and Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4

May 5, 2020

In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin’s maximum principle, here we show that robust gates equivalent to perfect adiabatic braiding can be implemented exactly in finite times through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. We study the effects of calibration errors and external white and 1/f noise and show that our optimally fast pulse shapes are also remarkably robust, and have the potential to enhance the practical performance of Majorana-based information processing by orders of magnitude.

71.10.Pm, 02.30.Yy, 03.67.Lx, 74.40.Gh

I Introduction

Non-Abelian quasiparticles such as Majorana zero modes (MZMs) provide a promising platform for robust quantum information processing Kitaev (2003); Nayak et al. (2008). Qubits are encoded in the fermion parities of MZM pairs nonlocally and are protected from local environmental perturbations. Quantum gates are implemented as unitary transformations in the degenerate ground-state manifold via adiabatic braiding of the MZM worldlines. There has been remarkable progress in realizing MZMs recently Oreg et al. (2010); Lutchyn et al. (2010); Alicea (2012); Beenakker (2013); Elliott and Franz (2015) and several experimental groups are working toward their braiding Mourik et al. (2012); Das et al. (2012); Churchill et al. (2013); Rokhinson et al. (2012); Deng et al. (2012); Finck et al. (2013); Nadj-Perge et al. (2014); Aasen et al. . One of the challenges for an adiabatic scheme is the finite time of operations, causing inaccuracies in the unitary operations due to diabatic excitations Cheng et al. (2011); Karzig et al. (2013); Amorim et al. (2015). Other sources of error include quasiparticle poisoning Rainis and Loss (2012), the on/off ratio of Coulomb coupling van Heck et al. (2012), and the information decay due to time-dependent perturbations Goldstein and Chamon (2011); Schmidt et al. (2012). While topological protection can certainly defend the system against many environmental perturbations, it does not make the system immune to errors like quasiparticle poisoning, high-frequency noise, and coupling to a finite-temperature heat bath.
These errors, in turn, limit the coherence time of the system, making it impossible to completely eliminate the diabatic excitations by sacrificing performance. A few recent studies have addressed the diabatic excitations. One idea is adding counterdiabatic terms Berry (2009); del Campo (2013) to the Hamiltonian of the system Karzig et al. (2015a); Zhang et al. (2015). This scheme requires a reengineering of the devices and may pose experimental challenges. Another idea is to minimize the diabatic excitations by using smoother adiabatic protocols Knapp et al. . While improving the accuracy of the gates, this scheme still requires slower dynamics than the speed limit of the device. A third approach is through the optimal control of the quantum evolution Peirce et al. (1988); Palao and Kosloff (2002); Král et al. (2007); Caneva et al. (2009); Doria et al. (2011); Rahmani and Chamon (2011); Rahmani (2013), in which we relax the requirement of remaining adiabatic during the evolution. Instead, we optimize the time dependence of the Hamiltonian parameters so as to generate the same final state as the perfectly adiabatic dynamics. This approach relies only on optimizing pulse shapes and can be applied to existing experimental setups. It also realizes the characteristic speed limit of the device, resulting in the fastest possible information processing. Optimal control has been applied to the motion of one MZM along a one-dimensional wire Karzig et al. (2015b) but the full optimal creation of the same unitary gates as the adiabatic braiding remains an open question.

In this paper, we solve the optimization problem exactly in the context of a simple effective model of MZM braiding. More generally, we address the following key questions: What is the speed limit for generating the same unitary evolution operator as the adiabatic braiding for two MZMs in our device? How robust are these operations to calibration errors and noisy pulses? By relaxing the constraint of adiabaticity during the entire process, we give up strict topological protection. Indeed a fully unconstrained optimal protocol, which only minimizes the difference between the evolution operator and a target unitary operator (corresponding to adiabatic braiding), would not utilize any of the topological features of the MZMs. In our optimal-control approach, we strike a balance between performance and robustness, by imposing constraints that can improve robustness against environmental perturbations and that utilize the nonlocal nature of information stored in pairs of MZMs. We then explicitly examine the effects of various errors on our gates and demonstrate remarkable practical advantages. For example, if we calibrate our gates within 2%, we can outperform a topologically protected adiabatic gate by two orders of magnitude in the operation time. The shorter times of the optimal protocols expose the system to decoherence sources like white noise and the experimentally important 1/f noise for much shorter periods of time, allowing them to generate accurate unitary operations in much shorter times.

The remainder of this paper is organized as follows. In Sec. II, we review an effective low-energy model for the braiding of MZMs. In Sec. III, we first formulate the optimal-control problem with a constraint that helps increase the robustness by making use of the nonlocal nature of information stored in pairs of MZMs.
We briefly review Pontryagin’s minimum principle and use it to obtain an exact analytical optimal protocol that generates the adiabatic unitary operator exactly in a finite time. Sec. IV is devoted to an in-depth study of the robustness of our optimal protocol. In Sec. IV.1, we first present a general noise model through the Taylor expansion of the dimensionless control parameters “seen” by the system in terms of the dimensionless control parameters we try to impart to the system. The leading error model for a Majorana-based qubit is multiplicative, while we have additive errors for generic nontopological qubits. In Sec. IV.2, we examine the effects of systematic calibration error. The error (in terms of a measure of distance between the unitary operators) grows linearly (from zero) with the multiplicative error, and the optimal protocol can perform better than an adiabatic protocol that is two orders of magnitude slower if the device is calibrated within 2%. In Secs. IV.3 and IV.4, we consider random time-dependent errors, i.e., noise, in the control parameters. The effect of noise on adiabatic and optimal protocols is generally found to be very similar. We find that for white noise, the fast optimal protocol outperforms all adiabatic protocols considered. For 1/f (pink) noise, an adiabatic gate that is 10 times slower may perform better at larger strengths of noise, which produce an error of around 1% in the trace distance between the actual density matrix and the density matrix corresponding to the perfectly adiabatic evolution. We discuss a technique for correcting the errors caused by the limitations of our effective model in Sec. IV.5 and close the paper in Sec. V with a brief summary.

Figure 1: (Color online) Optimal diabatic braiding of Majorana zero modes: (a) the 3-step braiding scheme for exchanging and ; (b) the optimal diabatic trajectories in the Bloch sphere for step A (the black star indicates a switching from one axis of precession to another); and (c) the bang-bang optimal protocol for the entire process (with ).

II Effective model of Majorana braiding

We start from a minimal effective model of braiding, which is relevant to the current experimental efforts involving one-dimensional topological superconductors, e.g., in the top-transmon Hassler et al. (2011); van Heck et al. (2012); Hyart et al. (2013). The Hamiltonian can be written in terms of four Majorana fermions as where and . The coupling constant represents the hybridization energy between and . We assume that all can be tuned as a function of time within a range . Defining two Dirac fermions and , we can write the Hamiltonian in the basis as a block-diagonal matrix where are the Pauli matrices. The upper (lower) block has even (odd) fermion parity. The standard adiabatic scheme of braiding a MZM pair proceeds in three steps as depicted in Fig. 1. Starting with and so that and are decoupled, we have two degenerate ground states, namely, and , with opposite fermion parity. In each step, we adiabatically turn on one coupling to its maximum value and turn off another to zero. At the end of the three steps, we return to the initial Hamiltonian, generating a unitary transformation in the ground state manifold. In the basis, up to an unimportant overall phase, we can write , hereafter referred to as the target unitary. Our goal is to generate (up to a phase) the target unitary via diabatic evolution of in a finite total time . The permissible diabatic protocols are bounded functions over the time interval .
The shortest time, , for which it is possible to generate the target unitary with a permissible protocol sets the speed limit of the device.

III Optimal control approach

The most general diabatic protocols allow for the hybridization of all the MZMs, which destroys the topological protection. As discussed in Refs. Goldstein and Chamon (2011); Schmidt et al. (2012), adiabatic braiding is not protected against perpetual dynamical perturbations, especially if they have high-frequency components. Furthermore, external noise can result in an antiadiabatic behavior Dutta et al. (2016) for very slow ramps (see also our Fig. 3 and its discussion). Moreover, the long time scales required to create accurate gates with the adiabatic evolution, under which the operation enjoys topological protection, may overshoot the coherence time of the system, which is limited by, e.g., quasiparticle poisoning. Topological protection, however, implies robustness to a wide range of local perturbations, and in particular static calibration errors. One approach would be to altogether abandon the benefits of information nonlocality and simply optimize to minimize the difference of the evolution operator and the adiabatic transformation . However, we take a balanced approach which to some extent utilizes the topological nature of the qubits. We constrain the optimal dynamics to track the same three-step dynamics as in the adiabatic scheme, without requiring adiabaticity during each step. For example, throughout step A, we keep and change and in their permissible range. Therefore, remains decoupled and the parity of the fermion cannot be accessed by local environmental perturbations. As the total parity is conserved, the parity of the fermion is also locally inaccessible despite the generation of diabatic excitations during the evolution. Similarly, in step B (C), we keep () and decouple (). This way, step A is protected from local environmental perturbations. If we execute step A perfectly, then we have a decoupled MZM at the beginning and during step B, and step B will be protected as well. The sacrifice to topological protection originates from possible inaccuracies in step A, which can propagate to the next steps. By design, at the end of each step (but not during), the state of the system is optimized to mimic a fully adiabatic evolution.

Focusing on step A with , we have . Let us concentrate on one parity sector. The initial state is the ground state for and , i.e., the eigenstate of with eigenvalue , . The target state at the end of step A is the ground state for and , i.e., . Denoting the total time with , we minimize the following functional of : where indicates time ordering. For a given , the optimal protocol yields the smallest possible . As we increase , this minimal decreases and eventually vanishes for a critical time , where the target state is prepared exactly. To compute and the corresponding optimal protocol, we use Pontryagin’s maximum principle Pontryagin (1987). The principle states that for dynamical variables and control functions , evolving with the equations of motion from given initial conditions to a final set , the optimal controls, , which minimize a cost function (any function of the final values of the dynamical variables), satisfy where are conjugate dynamical variables with equations of motion and and are optimal trajectories corresponding to . Furthermore, the boundary condition for is set by the cost function as
As a consequence of Eq. (4), when [and consequently ] are linear functions of the controls , the optimal protocols are “bang-bang”: each of the control functions attains either its minimum or maximum allowed value at any given time (unless the coefficient of a component identically vanishes over a finite interval; this special scenario does not occur in our system). In the problem at hand, the real and imaginary parts of the wave function serve as dynamical variables, with equations of motion given by the Schrödinger equation, which is indeed linear in the controls . Also, the cost function (3) depends only on the final wave function. Therefore, of all the permissible functions , the optimal protocols are discontinuous functions that either vanish or attain their maximum allowed value at any given time. We cannot have for optimal control since then the Hamiltonian would vanish and the state would not evolve. Thus, the optimal protocol consists of a sequence of potentially three types of Hamiltonians with sudden switchings between them. Due to the mapping of the Hamiltonian for each parity sector to a spin-1/2, we can visualize the dynamics on the Bloch sphere. If only () is turned on, the quantum state precesses around the () axis in the Bloch sphere. If both couplings are turned on, it precesses around an intermediate axis shown in black in Fig. 1b.

We now identify the minimal path corresponding to the critical time . This simultaneously determines the optimal protocol and the minimum required time for an exact state transformation. As seen in Fig. 1b, in the special case with , the protocol is extremely simple. We turn on both couplings to their maximum and a single precession prepares the target state exactly in a time . In the general case, we only need one switching during the process as shown in Fig. 1b. The general form of the optimal protocol in a step that transfers a MZM from leg to leg is as follows. If , we first switch on while keeping , wait for a time , and then switch on for a time . For , due to time-reversal symmetry, the process is the same in reverse. An example of such an optimal protocol, combining all three steps, is shown in Fig. 1c. While in steps B and C, , it turns out that for both blocks, the initial state is transformed to the target state by the same protocol.

We now explicitly compute the non-Abelian unitary operator generated by the optimal protocols above. Without loss of generality, we consider the case . Using the notation , we can write Despite the complexity of the above unitaries, it can be verified that the evolution operator (generated by the optimal protocols as in Fig. 1c), projected to the ground state manifold, and ), i.e., , equals the target unitary up to an overall phase.

IV Errors and robustness

IV.1 Error model

Since our optimal bang-bang protocols are fine-tuned to the parameters of the device, one should naturally wonder how robust the process is. We consider two types of errors: (i) calibration errors that arise from the absence of precise knowledge about the actual effective Hamiltonian parameters; (ii) random errors due to the imperfect control over the external knobs, e.g., gate voltages, which make the parameters noisy. The errors of type (i) are systematic and can be minimized by careful calibration. The errors of type (ii), on the other hand, generate a different final state every time the experiment is run. We demonstrate that even in the presence of these errors, our scheme presents advantages over the adiabatic methods. We begin by modeling the errors.
IV. Errors and robustness

IV.1 Error model

Since our optimal bang-bang protocols are fine-tuned to the parameters of the device, one should naturally wonder how robust the process is. We consider two types of errors: (i) calibration errors, which arise from the absence of precise knowledge about the actual effective Hamiltonian parameters; and (ii) random errors due to imperfect control over the external knobs, e.g., gate voltages, which make the parameters noisy. The errors of type (i) are systematic and can be minimized by careful calibration. The errors of type (ii), on the other hand, generate a different final state every time the experiment is run. We demonstrate that even in the presence of these errors, our scheme presents advantages over the adiabatic methods. We begin by modeling the errors.

Generically, attempting to tune a coupling to a target value imparts to the system a slightly different effective coupling. The error can be expanded (at any point in time) in powers of the intended coupling. Calibration errors are characterized by time-independent expansion coefficients Karzig et al., whereas random errors are modeled by noisy, time-dependent coefficients. Here we focus on Gaussian white noise, characterized by its second moment and noise strength, as well as on 1/f (pink) noise, which is expected to be the dominant source of noise in experiments. For white noise, the spectral noise density, defined as the Fourier transform of the correlation function, is a constant, while for pink noise it decays as the inverse of the frequency. For the case of white noise, we compute the noise-averaged density matrix through a numerically exact solution of a Lindblad-type master equation. Due to the correlations in pink noise, the noise-averaged density matrix evolves with an integral equation that is difficult to solve. We therefore resort to discrete Langevin-type numerical simulations, where we generate many discrete realizations of the noise, evolve the system for each with the Schrödinger equation, and average the density matrices at the end. For nontopological qubits, the leading error is the additive error. However, for topological qubits, e.g., in the top-transmon, the coupling is generated by the overlap of Majorana wave functions, so when a coupling is tuned off all errors in it are exponentially small Hassler et al. (2011); van Heck et al. (2012); Hyart et al. (2013). Thus, the additive error is irrelevant for Majorana-based topological qubits, and the leading error is the multiplicative one. In the following, we present results for both additive and multiplicative errors; however, only the multiplicative error is relevant to topological qubits.

IV.2 Calibration errors

We first discuss the calibration errors. Evolving the system with a given protocol generates an evolution operator in the ground-state manifold. We quantify the deviation from the target unitary by a distance measure Zhang (2011) which is independent of the initial state. The target unitary lives in the ground-state manifold, and we compare it with the projection of the full evolution operator onto this manifold. Although the projected operator may not be unitary, the distance still provides a sensible measure of the deviation.

Figure 2: (Color online) The effects of calibration errors. The distance of the actual evolution operator to the target unitary as a function of additive (top) and multiplicative (bottom) calibration errors for the optimal diabatic protocol as well as for the linear and smooth adiabatic protocols at two longer operation times. The inset shows the distance vs. the step time for the linear protocol.

For concreteness, we focus on the case of equal maximum couplings, where the optimal protocols are simple. In the adiabatic schemes, each step is performed over a fixed step time. We consider two types of adiabatic protocols: linear switches, and smooth switches with vanishing slopes at the boundaries of the steps; in both cases time is measured from the beginning of each step. For all of these protocols (optimal, linear, and smooth), the evolution of the system is governed by the effective couplings of Eq. (7), with, to leading order, an additive error for generic qubits and a multiplicative error for topological qubits. For simplicity we take the error coefficient to be the same for all couplings. The optimal protocol generates the target unitary exactly at the critical time; the linear and smooth protocols over the same time are completely nonadiabatic (see the inset of Fig. 2). Therefore, instead of a comparison over the same time, we compare the optimal protocol with adiabatic protocols that are at least one order of magnitude slower.
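As a rough illustration of the effect of a multiplicative calibration error on the fast protocol, the sketch below applies a common multiplicative offset eta1 to both couplings of the single-precession protocol in the illustrative spin-1/2 model and evaluates a phase-invariant distance to the error-free evolution. The metric used here, d = sqrt(1 - |Tr(U_target^dag U)|/2), is a standard choice and not necessarily the metric of Ref. Zhang (2011); all parameter values are assumptions.

# Sketch: multiplicative calibration error on the single-precession protocol,
# within one parity sector of the illustrative spin-1/2 model used above.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
d_max = 1.0
t_c = np.pi / np.sqrt(2)   # exact single-precession time for equal maximum couplings

def evolution(eta1):
    # Intended couplings d_max are realized as d_max * (1 + eta1) (multiplicative error).
    d_eff = d_max * (1 + eta1)
    H = -0.5 * (d_eff * sx + d_eff * sz)
    return expm(-1j * H * t_c)

U_target = evolution(0.0)
for eta1 in [0.0, 0.01, 0.05, 0.1]:
    U = evolution(eta1)
    dist = np.sqrt(max(0.0, 1 - abs(np.trace(U_target.conj().T @ U)) / 2))
    print(eta1, dist)

In this toy setting the distance grows roughly linearly with the calibration error, consistent with the behavior of the optimal protocol discussed next.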
In Fig. 2, we show the error as a function of the additive and multiplicative calibration errors. As expected, there are no advantages for an adiabatic protocol in the nontopological case of additive errors. On the other hand, topological protection gives rise to robust adiabatic protocols in the multiplicative case. For time scales that are an order of magnitude larger than the optimal time, the adiabatic methods are sensitive to the pulse shape and to the calibration error. At time scales that are two orders of magnitude larger, the adiabatic method becomes insensitive to the pulse shape and starts to outperform the optimal protocol for sufficiently large errors (note that the multiplicative error is dimensionless). Upon further slowing down, the residual error of the adiabatic method decreases further. It is undesirable and impractical, however, to keep slowing down the process. The fast optimal protocol, which has a fixed short run time, can perform better than any adiabatic gate upon improved calibration. As seen in Fig. 2, the error for the optimal protocol has a linear dependence on the calibration error.

Figure 3: (Color online) The effects of random noise. The trace distance between the final and the target density matrices for an equal-weight initial superposition of the ground states as a function of the noise strength, for the optimal diabatic as well as the linear and smooth adiabatic protocols.

IV.3 Random white noise

We now turn to the noisy couplings. While systematic errors can potentially be corrected by careful calibration, random time-dependent errors pose a greater challenge to both the adiabatic and the optimal gates. We start by quantifying the errors due to noise. Noise averaging is essential when dealing with random protocols. Direct averaging of the unitaries, however, creates artificial dephasing due to unimportant overall phase factors. Thus, we need a different cost function, and we choose to work with the noise-averaged density matrix. We start from a particular equal-weight superposition of the ground states as the initial state, yielding an initial density matrix, which is then evolved and averaged over noise by solving the master equation Pichler et al. (2013); Rahmani (2015); Dutta et al. (2016). The target state yields the target density matrix, and we quantify the error by the trace distance between the noise-averaged and target density matrices. We consider the leading-order noise terms, where only the additive and multiplicative components are nonzero, respectively, for the nontopological and topological qubits. Numerically solving for the noise-averaged density matrix and computing the trace distance for the optimal as well as the linear and smooth adiabatic protocols over a range of noise strengths indicates that the optimal protocol generally outperforms the adiabatic protocols for both additive and multiplicative noise. In the absence of noise, the optimal protocol produces a vanishing trace distance, which then grows with the noise strength while remaining much smaller than the trace distance of the adiabatic schemes before reaching saturation. Only at relatively large noise strengths does the smooth protocol perform slightly better than the optimal protocol for multiplicative noise (as seen in a barely noticeable crossing of the green and blue curves in the bottom panel of Fig. 3). Interestingly, there is a crossing of the two adiabatic curves in Fig. 3 for both additive and multiplicative noise, beyond which increasing the time scale of the adiabatic protocols reduces their robustness. This antiadiabatic behavior appears analogous to the anti-Kibble-Zurek behavior Dutta et al. (2016).
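A brute-force version of the noise averaging (the Langevin-type approach, rather than the master-equation solution used in the text for white noise) can be sketched as follows for multiplicative white noise acting on the single-precession step of the illustrative spin-1/2 model. The noise strength, step count, and number of realizations are arbitrary demonstration values.

# Sketch: Langevin-type noise averaging for multiplicative white noise on the
# single-precession step (illustrative model and parameters). The error is quantified
# by the trace distance between the noise-averaged and target density matrices.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
d_max = 1.0
t_c = np.pi / np.sqrt(2)              # exact step time for equal maximum couplings
n_steps, n_real, W = 200, 400, 0.05   # time steps, noise realizations, noise strength
dt = t_c / n_steps
rng = np.random.default_rng(0)

psi0 = np.array([1, 0], dtype=complex)                    # step-A initial state
rho_T = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # target density matrix

rho_avg = np.zeros((2, 2), dtype=complex)
for _ in range(n_real):
    psi = psi0.copy()
    # Multiplicative white noise: delta -> delta * (1 + eta1(t)), with eta1 piecewise
    # constant over each dt and variance W/dt, so the continuum limit has strength W.
    eta1 = rng.normal(0.0, np.sqrt(W / dt), size=n_steps)
    for k in range(n_steps):
        d_eff = d_max * (1 + eta1[k])
        H = -0.5 * (d_eff * sx + d_eff * sz)
        psi = expm(-1j * H * dt) @ psi
    rho_avg += np.outer(psi, psi.conj()) / n_real

eigvals = np.linalg.eigvalsh(rho_avg - rho_T)
print("trace distance:", 0.5 * np.sum(np.abs(eigvals)))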
We comment that in real experiments a weakly coupled bath is always present. If the operation time is longer than the relevant coherence time, the errors in Fig. 3 will be cut off, saturating at values determined by the temperature of the bath. In this case, the system decoheres, and both the adiabatic and the optimal schemes would fail (as quantum coherence is necessary for quantum information processing). However, the adiabatic schemes are more likely to suffer from bath-induced decoherence due to their longer operation times.

IV.4 Pink noise

The advantage of white noise is that it allows for numerically exact calculations through the solution of a local (in time) deterministic master equation [see Eq. (11)]. This limit is relevant under more general conditions than those suggested by its precise mathematical definition, e.g., for the ubiquitous Ornstein-Uhlenbeck process, where the correlations of the noise in the time domain decay exponentially. Intuitively, exponentially decaying correlations can be safely cut off after a characteristic correlation time, recovering the white-noise predictions upon temporal rescaling D'Alessio and Rahmani (2013). However, we expect the noise spectra in real experiments to have a 1/f frequency dependence Hassler et al. (2011). Before a quantitative analysis of 1/f noise, we comment that qualitative similarities between the effects of white noise and other types of colored noise are expected. Noise introduces a rate for the deposition of excess energy, which can be understood by viewing it as a sequence of small quantum quenches. Each quench deposits some energy into the system without a strong dependence on the deterministic part of the Hamiltonian. Whether there are correlations between these quenches (colored noise) or they are completely uncorrelated (white noise) should not qualitatively alter this generic effect. This does not imply, however, that the spectral density of the noise is unimportant. An extreme case is a noise spectrum localized at certain frequencies, which are either resonant or lie outside the bandwidth of the system, respectively enhancing or suppressing the absorption of energy by the system. Such localized noise spectra are not ubiquitous in experiments, though.

The temporal correlations of 1/f noise make it impossible to compute the noise-averaged density matrix by solving a single deterministic differential equation. We therefore take a brute-force approach of direct Langevin-type numerical simulations, where we use the method of Ref. Kasdin (1995) to generate the discrete noise signal. This method applies to a 1/f^α noise spectrum, with α = 0 (α = 1) corresponding to white (pink) noise. In this section we only present results for pink noise, but we have checked as a benchmark that the method indeed reproduces the correct results in the white-noise case. We first divide the total time of the process into small intervals of equal duration. We only consider multiplicative noise in this section and keep the same simplifying assumption as in the preceding section. The discretized noisy coupling constants are then given by the intended couplings multiplied by one plus the discrete noise signal. The discrete noise signal is generated from an uncorrelated zero-mean Gaussian signal with a standard deviation of unity by using an autoregression model of finite order relating the two signals through the filter coefficients of Ref. Kasdin (1995), which can be written in closed form in terms of the gamma function. We are interested in the limit of vanishing interval duration, so that our discrete simulations provide a reasonable approximation to the continuous process. To this end, we first fix the interval duration and generate enough noise realizations that the final errors converge within acceptable error bars; we then refine the discretization to also achieve convergence in the interval duration.
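A compact implementation of this noise generator is sketched below: white Gaussian samples are filtered with Kasdin's coefficients, built here by the stable recursion equivalent to the gamma-function closed form h_k = Γ(k + α/2)/[Γ(k + 1) Γ(α/2)], and the spectral slope is estimated as a sanity check. The sample length and seed are arbitrary.

# Sketch: discrete 1/f^alpha ("pink" for alpha = 1) noise from unit-variance white
# Gaussian noise, using Kasdin-type filter coefficients.
import numpy as np

def pink_noise(n_samples, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.0, n_samples)    # uncorrelated zero-mean, unit-variance signal
    h = np.empty(n_samples)
    h[0] = 1.0
    for k in range(1, n_samples):
        h[k] = h[k - 1] * (k - 1 + alpha / 2) / k
    # Colored signal: convolution of the white signal with the filter coefficients.
    return np.convolve(h, w)[:n_samples]

# Quick check of the spectrum: the periodogram should fall off roughly as 1/f^alpha.
x = pink_noise(2 ** 14, alpha=1.0)
freqs = np.fft.rfftfreq(x.size)
power = np.abs(np.fft.rfft(x)) ** 2
slope = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)[0]
print("fitted spectral slope ~", slope)   # expect a value near -1 for pink noise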
Achieving perfect convergence in these calculations is time consuming, especially for longer times and larger noise strengths. Nevertheless, by analyzing a large number of noise realizations and five different values of the discretization step, we were able to reduce the error bars to an acceptable level. The results are shown in Fig. 4. As expected, both the bang-bang optimal protocol and the linear adiabatic protocol are affected by pink noise in a qualitatively similar manner as by white noise. Due to the suppression of high-frequency modes, pink noise has a milder effect than white noise on both of these protocols. These numerical results support our qualitative picture of the effects of noise. The advantages of the optimal protocol survive under pink noise in the regime where the errors are small.

Figure 4: (Color online) The effects of pink noise on both the optimal bang-bang protocol and a linear adiabatic protocol that is 10 times slower. The white-noise data from Fig. 3 are replotted for easy comparison.

IV.5 Correcting the errors due to the limitations of the model

Our results are obtained in the context of the effective model (1), which is written in terms of low-energy degrees of freedom and has an infinite gap to higher excitations. The optimal protocol involves sharp sudden quenches, which, in a more realistic model with a finite excitation gap, may cause high-energy excitations. In this section, we address this issue by introducing an alternative cost function (for each of the three steps of the protocol) that penalizes sharp transitions and yields continuous optimal protocols taking only slightly longer than the bang-bang critical time.

We introduce a modified optimal-control problem for each of the three steps of the dynamics, where, e.g., in step A, we minimize the sum of the original final-state cost and a term proportional to the time integral of the squared time derivatives of the protocol, subject to the same bounds and boundary values of the couplings (and similarly for steps B and C). The second term penalizes large derivatives in the protocol, turning the sudden jumps into continuous ramps. The weight of this term sets the time scale of the ramps, from zero for vanishing weight to the total time of the step for large weight, in which case we obtain a simple linear protocol from the Euler-Lagrange minimization of the penalty term.

Figure 5: (Color online) The continuous protocol. The plot shows protocols for ramping up one of the couplings in step A, obtained from Monte Carlo optimization over a fixed total time, as well as the corresponding distance to the target state, for various weights of the ramp penalty.

While Pontryagin's formalism can also shed light on optimal-control problems with a trajectory-dependent cost function as in Eq. (15), an analytical solution of the constrained problem is challenging. We therefore use direct numerical minimization. Approximating a general protocol by a piecewise-constant protocol with a finite number of segments, we perform Monte Carlo simulations over the shape of the protocols to minimize the modified cost for several values of the weight over a fixed total time. The results for ramping up the coupling are shown in Fig. 5, indicating a continuous deformation of the bang-bang protocol, which is recovered for vanishing weight. (The protocols for ramping down in this step are reflected about the center, with a similar time scale.) For a finite weight, the sudden jumps are spread over finite time scales. The overall protocol then looks very similar to the bang-bang protocol of Fig. 1(c), except that each sudden jump is spread over a finite time window. We need to increase the total time of the operation by the sum of these ramp times to get a small final error. For example, in Fig. 5, when we add 10% to the time of step A, the protocol with a negligibly small final error spreads the jump over a time interval comparable to the added time.
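The sketch below illustrates the spirit of this procedure on the toy spin-1/2 step: a piecewise-constant two-coupling protocol is optimized by greedy random updates of a cost consisting of the final-state infidelity plus a derivative penalty. The segment number, weight, and update rule are arbitrary illustrative choices, much cruder than the Monte Carlo optimization described above.

# Sketch: crude stochastic minimization of an Eq. (15)-type cost,
# cost = infidelity + w * sum_k (Delta_{k+1} - Delta_k)^2 / dt, over piecewise-constant
# protocols bounded by the maximum coupling (illustrative model and parameters).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
d_max, n_seg = 1.0, 30
T = 1.1 * np.pi / np.sqrt(2)       # total step time, 10% longer than the bang-bang time
dt = T / n_seg
psi0 = np.array([1, 0], dtype=complex)
psi_T = np.array([1, 1], dtype=complex) / np.sqrt(2)

def cost(da, db, w):
    psi = psi0.copy()
    for k in range(n_seg):
        H = -0.5 * (da[k] * sx + db[k] * sz)
        psi = expm(-1j * H * dt) @ psi
    infid = 1.0 - abs(np.vdot(psi_T, psi)) ** 2
    ramp = np.sum(np.diff(da) ** 2 + np.diff(db) ** 2) / dt
    return infid + w * ramp

rng = np.random.default_rng(2)
da = np.linspace(0, d_max, n_seg)      # initial guess: linear ramp up of the new coupling
db = np.linspace(d_max, 0, n_seg)      # ...and linear ramp down of the old one
w = 1e-3
best = cost(da, db, w)
for _ in range(3000):                  # greedy random updates of single protocol entries
    trial_a, trial_b = da.copy(), db.copy()
    arr = trial_a if rng.random() < 0.5 else trial_b
    k = rng.integers(n_seg)
    arr[k] = np.clip(arr[k] + rng.normal(0, 0.1), 0, d_max)
    c = cost(trial_a, trial_b, w)
    if c < best:
        best, da, db = c, trial_a, trial_b
print("optimized cost:", best)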
The ramp time scale above may be compared to that in a more realistic model. For example, in the architecture of Ref. Aasen et al., the time scale of a quench should be larger than a characteristic time set by the Josephson and charging energies of the mesoscopic superconducting islands. As a practical guide, this condition can be satisfied with realistic device parameters.

V. Conclusions

In summary, based on Pontryagin's theorem of optimal control, we proposed optimal protocols for generating the same unitary operator as the one corresponding to the fully adiabatic braiding of MZMs. While not providing full topological protection, our constrained optimal-control approach makes use of the nonlocal nature of the information stored in the MZMs to make the system robust against some environmental perturbations. Through tailored diabatic pulse shapes, our scheme can significantly increase the speed of devices such as the top-transmon, without the need for any change to the experimental setup. Such fast, accurate operations may defend the system against decoherence effects such as quasiparticle poisoning. The advantages of our method survive in the presence of white and pink noise and of small calibration errors. The robustness can be further enhanced by making the pulses continuous without significantly sacrificing the performance of the device. Our proposed optimal diabatic gates can foster the development of high-performance quantum information processing with MZMs.

This work was supported by NSERC (MF and AR), CIfAR (MF), the Max Planck-UBC Centre for Quantum Materials (MF and AR), the NSF CAREER grant No. DMR-1350663, the BSF grant No. 2014345, and the College of Arts and Sciences at Indiana University (BS). We acknowledge support provided by WestGrid and Compute Canada Calcul Canada.